The use of the light-year and parsec to measure stellar distances gives a sense of the vast distances to the stars. To ancient cultures, however, the stars seemed relatively close to Earth. The Egyptians imagined them as points of light on a tented canopy, held up by mountain ranges at the corners of the kingdom. The Greeks thought of them as fiery embers carried overhead on crystalline spheres. Ancient cultures could not conceive of distances much larger than the size of the Earth. Even to a modern day observer, the stars in the night sky all look to be at the same distance – the depth of space is not obvious.
Christiaan Huygens.
Early distance estimates were no more than educated guesses. In the late 17th century, Dutch scientist Christiaan Huygens made an image of the Sun through a pinhole in a darkened room. He varied the size of the pinhole until the image seemed equal in brightness to an image of Sirius, the brightest star. Since the pinhole admitted 1/27,000 of the light of the Sun, Huygens concluded that Sirius was 27,000 times farther away than the Sun (it is actually 543,900 times farther away and substantially more luminous than the Sun). Around the same time, Isaac Newton tried to use Saturn as a sort of reflecting mirror to measure the intensity of sunlight. He guessed the percentage of the Sun's light that Saturn reflects and assumed that bright stars have the same absolute brightness as the Sun. Newton concluded that the bright stars are about 18,000 times farther away than the Sun (a severe underestimate, for the same reason Huygens was wrong: neither of them knew that most bright stars in the night sky are intrinsically much brighter than the Sun).
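The inverse square law lets us see just how far off the assumption of Sun-like stars was. Using only the numbers quoted above — Sirius is actually 543,900 times farther away than the Sun, while the brightest stars appear roughly ten billion times fainter — a quick sketch shows how much more luminous Sirius must be (the ten-billion figure is an approximate value for the brightest stars, so the result is only an order-of-magnitude estimate):

```python
# Inverse-square check of the Huygens/Newton assumption, using the
# illustrative numbers from the text.
distance_ratio = 543_900                 # Sirius is ~543,900 times farther than the Sun
expected_dimming = distance_ratio ** 2   # dimming if Sirius matched the Sun's luminosity

# The brightest stars appear only ~1e10 times fainter than the Sun,
# so Sirius must be intrinsically brighter to make up the difference.
observed_dimming = 1e10                  # approximate apparent brightness ratio

luminosity_ratio = expected_dimming / observed_dimming
print(f"Sirius must be roughly {luminosity_ratio:.0f}x more luminous than the Sun")
# -> roughly 30x more luminous, the right order of magnitude
```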
Both these men were trying to use a crude method for estimating the distances of stars: the inverse square law for the propagation of light. The brightest stars have apparent brightnesses about 10 billion (or 10^10) times fainter than the Sun. Unfortunately, the relative brightness of the Sun and other stars is impossible to measure accurately without specialized equipment. Mistakes continued into more modern times. In 1829, English scientist William Wollaston used the inverse square law to estimate that most typical stars must be at least 100,000 (or 10^5) times more distant than the Sun, since the inverse square law indicates that dimming by a factor of 10 billion corresponds to an increase in distance by a factor of 100,000 (which is accurate for some stars).
Inverse Square Law. S represents an ideal source of electromagnetic radiation and A represents an arbitrary segment of the surface of a sphere of radius r.
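Wollaston's arithmetic follows directly from the inverse square law: if brightness falls as 1/d², then a dimming factor F corresponds to a distance factor of √F. A minimal check of the numbers in the text:

```python
import math

# Wollaston-style estimate: brightness falls as 1/d^2, so a dimming
# factor F corresponds to a distance factor of sqrt(F).
dimming_factor = 1e10                    # brightest stars vs. the Sun (from the text)
distance_factor = math.sqrt(dimming_factor)
print(f"Distance factor: {distance_factor:.0f}")  # -> 100000
```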
Simple distance estimates based on the relative brightness of the Sun and the stars are flawed for one simple reason: stars come in many luminosities. The brightest stars in the sky are much more luminous than the Sun, so they are much more distant than we would calculate by assuming that they resemble the Sun. Also, these estimates only give typical distances rather than accurate distances for individual stars. Even so, this is an example of how a simple physical idea — the way that light dims as it spreads through space — can be used to deduce remarkable information about the distance to the stars. Galileo used this reasoning when he pointed his telescope at the Milky Way and saw the gauzy light resolve into many pinpoints of light. Guessing that they were all stars like the Sun, he recognized the enormous three-dimensional depth of space. At the time, this reasoning implied a universe thousands of times larger than previous estimates!
One of the reasons for the long delay in switching from a geocentric to a heliocentric model of the solar system was the lack of any observable motion in the stars — a measurable stellar parallax — as the Earth orbited. It meant that the stars must be fantastically distant. The first successful measurement of stellar parallax did not come until 1838, when German astronomer Friedrich Bessel detected the slight seasonal shift of the star 61 Cygni — only two thirds of a second of arc. The measurement of a parallax distance is a direct trigonometric technique that is independent of any assumption about the nature of the star being observed. Recall that parallax is the angular shift in the position of an object caused by a shift in the observer’s position. The calculation of a parallax distance is another application of the small angle equation. The distance is inversely proportional to the parallax angle. A star with a parallax angle of 1 arc second is at a distance of 1 parsec.
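The small angle equation also shows where the parsec comes from: it is the distance at which the Earth-Sun baseline of 1 A.U. subtends an angle of one arcsecond. A short sketch (using the standard value of the astronomical unit) recovers the familiar conversion factors:

```python
import math

# A parsec is the distance at which a baseline of 1 A.U. subtends an
# angle of 1 arcsecond.  Small-angle equation: d = baseline / angle(radians).
AU_KM = 1.496e8                          # kilometers in one astronomical unit
arcsec_in_rad = math.radians(1 / 3600)   # 1 arcsecond converted to radians

parsec_km = AU_KM / arcsec_in_rad
print(f"1 parsec = {parsec_km:.3e} km")           # ~3.086e13 km
print(f"1 parsec = {parsec_km / AU_KM:.0f} AU")   # ~206265 AU
```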
To understand how parallax measurements are made, hold your finger in front of your face and look past it toward a bookcase on the other side of the room. Your finger represents a nearby star — the books on the far wall represent distant stars. Your right eye represents the view on one side of the Sun. Your left eye represents the view six months later, after the Earth has traveled to a point on the opposite side of the Sun (a shift in the Earth’s position by 2 A.U. as seen from the star). First wink one eye and then the other. Your finger (the nearby star) seems to shift back and forth. Hold your finger only a few centimeters from your eyes; the shift is large. Hold your finger at arm’s length; the shift is smaller. Likewise, the farther away the nearby star, the smaller the parallax angle. The parallax in this experiment can be measured in degrees, but the parallaxes of actual stars are all ten thousand times smaller, less than a second of arc! Parallaxes as small as 1/100 second of arc can be reliably measured, and the distance of such a star is 100 parsecs.
The distance of a star in parsecs is simply the inverse of its parallax angle in seconds of arc. This explains the origin of the term parsec: it is the distance corresponding to a parallax of one second of arc. If a star is too distant, its parallax is too small to be measured. (Try winking your eyes to see the shift of a distant object against an even more distant background — at some distance, you will no longer notice a shift.) Parallaxes smaller than about 1/100 second of arc are difficult to measure accurately, due to the blurring of the Earth’s atmosphere. Therefore stars farther away than about 100 parsecs are beyond the distance limit for reliable parallaxes.
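The inverse relationship between distance and parallax can be written as a one-line function; the two cases below are the ones used in the text, a 1-arcsecond parallax (1 parsec by definition) and the ground-based limit of 1/100 arcsecond (100 parsecs):

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds (d = 1/p)."""
    return 1.0 / parallax_arcsec

# A star with a 1-arcsecond parallax is at 1 parsec by definition;
# the ground-based limit of 1/100 arcsecond corresponds to 100 parsecs.
print(parallax_distance_pc(1.0))    # -> 1.0
print(parallax_distance_pc(0.01))   # -> 100.0
```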
Scientists at the European Space Agency attempted to overcome this limitation when they launched the Hipparcos satellite in 1989. Its mission was to measure parallax in the precise viewing conditions of space. The unfortunate failure of a rocket motor placed the satellite in a highly elliptical orbit, which caused the detectors to degrade as they passed repeatedly through the radiation belts around the Earth. Despite this handicap, Hipparcos successfully measured the parallaxes of 120,000 stars. In addition to parallaxes, it accurately measured the motions of those stars, telling us how they were moving, whether they were multiple star systems, and whether they varied in luminosity over time. More recently, ESA launched the Gaia mission to greatly improve the sample size and accuracy of parallax measurements. Gaia aims to construct a 3D map of nearly a billion stars, reaching an accuracy of 20 millionths of an arc second. That's the angle subtended by the width of a human hair at a distance of 300 kilometers!
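The hair analogy can be checked with the same small-angle reasoning used for parallax. Assuming the quoted figures of 20 millionths of an arcsecond and 300 kilometers, the implied width comes out to a few hundredths of a millimeter, consistent with a fine human hair:

```python
import math

# Sanity-check the hair analogy: how wide is an object that subtends
# 20 millionths of an arcsecond at a distance of 300 km?
angle_rad = math.radians(20e-6 / 3600)   # 20 microarcseconds in radians
distance_m = 300_000                     # 300 km expressed in meters

width_m = angle_rad * distance_m
print(f"{width_m * 1e6:.0f} micrometers")  # ~29 um, about the width of a fine hair
```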
Knowing accurate distances is the most important requirement for measuring most other properties of stars. This means that the estimated 20,000 to 25,000 stars that lie within 100 parsecs are our main statistical sample for measuring stellar properties. Note that even this large sample is not sufficient for us to be able to measure the distance to certain rare stellar types. Other techniques have been devised for estimating distances to more remote stars, but they all depend ultimately on the accuracy of the parallax measures of nearby stars.