Parallax and the Scale of the Universe

For much of the 20th century, astronomy textbooks taught that ancient astronomers rejected heliocentrism because no stellar parallax was observed. We now know this is an anachronism: there is no evidence in the surviving texts that ancient astronomers expected to see such a shift or took its absence as evidence that the Earth must be at rest.

What we can say is that parallax is a powerful concept in astronomy — one of the most direct ways we have to measure distances — and that the fact it was undetectable for the stars until the 19th century tells us just how unimaginably far away those stars must be. Let’s see what parallax is, why it matters, and why it was so difficult to observe for the stars.

Demonstrating Parallax

Hold your thumb up at arm’s length and look at it with one eye open and the other closed; then switch, closing the open eye and opening the other. As you wink back and forth, you should see your thumb jump with respect to whatever is behind it. This phenomenon is called parallax. Now pull your thumb closer to your face and try winking back and forth again. The parallax shift increases: your thumb moves a greater distance with respect to the background than it did at arm’s length.

There is a precise mathematical relation between an object’s distance and the degree of parallax shift that occurs when it is observed from separate locations. This is what makes parallax so useful for astronomers: by measuring the shift, we can infer the distance.
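To make this relation concrete, here is a minimal Python sketch (an illustration added here, not taken from the text). The function name distance_from_parallax and the sample numbers, roughly 6.5 cm between your eyes and a thumb about 60 cm away, are assumptions chosen for the thumb experiment; the geometry is just the triangle formed by the two viewpoints and the object.

    import math

    def distance_from_parallax(baseline_m, shift_deg):
        """Distance to an object that appears to shift by shift_deg (degrees)
        when viewed from two points baseline_m apart. Assumes the object lies
        roughly along the perpendicular bisector of the baseline."""
        half_angle = math.radians(shift_deg) / 2.0
        return (baseline_m / 2.0) / math.tan(half_angle)

    # Thumb at arm's length: eyes ~6.5 cm apart, shift of ~6.2 degrees
    # (illustrative numbers, not from the text).
    print(distance_from_parallax(0.065, 6.2))   # ~0.6 m

The same formula works at any scale: shrink the shift angle while keeping the baseline fixed, and the inferred distance grows accordingly.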

Stellar Parallax and Its Challenge

If the Earth does orbit the Sun, the apparent positions of the nearest stars should shift slightly against the more distant background stars as we view them from opposite sides of the orbit, six months apart. But this shift is extremely small, so small that even with a telescope, early modern astronomers could not measure it.

It was not until 1838 that the German astronomer Friedrich Bessel finally succeeded in measuring the stellar parallax of 61 Cygni. By that point, heliocentrism had already been widely accepted for two centuries, so Bessel’s measurement was a dramatic confirmation of the theory rather than the reason it was believed.

Measuring Tiny Angles

In astronomy, we measure angular distances in degrees (1°), arcminutes (1’ = 1/60°), and arcseconds (1” = 1/60’ = 1/3600°). The parsec, a unit of distance in astronomy (not a unit of time, as Han Solo famously got wrong), is defined as the distance to a star whose parallax angle is 1 arcsecond over a baseline of one Earth–Sun distance (1 AU). Over six months, the Earth crosses the full 2 AU diameter of its orbit, so such a star would trace a total parallax shift of 2 arcseconds, an incredibly tiny angle.
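As a quick check of this definition (a worked sketch, not part of the original text), the parallax geometry gives 1 pc = 1 AU / tan(1 arcsecond), which works out to about 206,265 AU, or roughly 3.09 × 10¹⁶ m:

    import math

    AU_M = 1.495978707e11                 # one astronomical unit, in metres
    arcsec_rad = math.radians(1 / 3600)   # 1 arcsecond in radians

    parsec_in_au = 1 / math.tan(arcsec_rad)   # baseline / tan(parallax), baseline = 1 AU
    parsec_in_m = parsec_in_au * AU_M

    print(f"1 pc ≈ {parsec_in_au:,.0f} AU ≈ {parsec_in_m:.3e} m")
    # 1 pc ≈ 206,265 AU ≈ 3.086e+16 m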

An angle of 1/3600 of a degree is really small. The closest star, Proxima Centauri, is 1.3 parsecs away, so its parallax angle is just 0.769”. To put this in perspective, that is about half the apparent width of a pencil located 1 km away. Angles of this size are what Bessel had to measure (61 Cygni’s parallax is smaller still, roughly 0.3 arcseconds), by comparing lines of sight taken six months apart.
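The pencil comparison is easy to verify with a few lines (a sketch that assumes a typical pencil diameter of about 7 mm, a figure not given in the text):

    import math

    def angle_arcsec(size_m, distance_m):
        """Angle subtended by an object of width size_m at distance_m, in arcseconds."""
        return math.degrees(math.atan2(size_m, distance_m)) * 3600

    proxima_parallax = 1 / 1.3                           # distance in parsecs -> parallax in arcsec
    pencil_half_width = angle_arcsec(0.007 / 2, 1000)    # half of a ~7 mm pencil at 1 km

    print(f"Proxima parallax ≈ {proxima_parallax:.3f} arcsec")        # ≈ 0.769
    print(f"Half a pencil at 1 km ≈ {pencil_half_width:.3f} arcsec")  # ≈ 0.722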

The stars really are located at staggeringly great distances, vastly greater than anything within our Solar System. Their parallax shifts are so minute that they were simply beyond the reach of observation until the development of precision instruments.

Learning Activity

Using a value of 0.5 cm for the radius of a pencil, calculate the ratio of 1 km to that radius. Do you get ~2 × 10⁵, roughly the number of Astronomical Units (AU; i.e. Earth–Sun distances) in a parsec? (A short numerical check of this ratio appears after the activity.) Can you think of another useful comparison to appreciate the disparity between these two distances?

For example: “If the Earth’s orbit were represented by the period at the end of this sentence, a star at 1 parsec would be a microscopic dot at roughly ___ m.”

You can assume the period’s height is 1/72 in (1 pt). The most distant Kuiper Belt objects are about 50 AU away: where would they fall in relation to your “period” and the star at ___ m?
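If you want to check the first ratio numerically, here is a short sketch (an addition, not part of the original exercise); the period-analogy blanks are deliberately left for you to fill in:

    import math

    # Ratio of 1 km to a 0.5 cm pencil radius, compared with AU per parsec.
    ratio = 1000 / 0.005                                    # 1 km over 0.5 cm -> 2.0e5
    au_per_parsec = 1 / math.tan(math.radians(1 / 3600))    # ≈ 2.06e5

    print(f"{ratio:.1e} vs {au_per_parsec:.1e}")
    # The period analogy (and where 50 AU falls on it) is left as the exercise.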