
What is a parsec? Definition and calculation

A parsec is a standard astronomical measurement that is often misunderstood.

A parsec is a unit of distance that is often used by astronomers as an alternative to the light-year, just as kilometers can be used as an alternative to miles. Sci-fi franchises such as “Star Wars” have been known to misuse the word “parsec”, mistakenly describing it as a measurement of time or speed.

In fact, one parsec is approximately 3.26 light-years, or almost 19 trillion miles (31 trillion km), according to the California Institute of Technology (Caltech).

The term “parsec” is a combination of “parallax” and “arcsecond,” and derives from the use of triangulation to measure the distance to a star.

Astronomers use arcseconds to measure very small angles, with 3,600 arcseconds making up one degree, just as there are 3,600 seconds in one hour. These small angles help astronomers measure large distances using something called the parallax effect.

If you hold a pencil up at arm’s length and alternately close your left and right eyes, you’ll notice that the pencil appears to move left and right relative to more distant objects even if you keep it perfectly stationary.

Parallax can be demonstrated by looking at a pencil with one eye or the other. (Image credit: Getty Images)

That’s the parallax effect, and it happens because the angular direction to the pencil is slightly different when seen by your left and right eyes. If you could measure that angular difference, then knowing the distance between your eyes enables you to calculate the distance to the pencil.
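To make that concrete, here is a minimal sketch in Python of the pencil calculation. The eye separation and angular shift used are illustrative assumptions, not measured values:

```python
import math

# Illustrative values (assumptions, not measurements): a typical eye
# separation of about 6.5 cm, and an apparent angular shift of about
# 5 degrees between the left-eye and right-eye views.
baseline_m = 0.065
shift_deg = 5.0

# Half the total shift is the angle each eye makes with the midpoint
# of the baseline, so simple trigonometry gives the distance:
# distance = (baseline / 2) / tan(shift / 2)
distance_m = (baseline_m / 2) / math.tan(math.radians(shift_deg) / 2)

print(f"Pencil is roughly {distance_m:.2f} m away")  # about 0.74 m
```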

The same principle enables astronomers to measure the distance to nearby stars. They take a photograph of a patch of sky containing the star they’re interested in and other, more distant objects such as galaxies.

Then six months later, when the Earth is on the other side of the sun, they take another photograph of the same patch of sky, according to NASA. The star will appear to have moved through a small angular distance relative to the background objects. Measuring that angle and then halving it (because the two observations are made from equal and opposite offsets relative to the sun) gives us the star’s parallax.
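As a rough sketch of that procedure, the code below converts a measured six-month shift into a distance, using Proxima Centauri’s published parallax of about 0.768 arcseconds as the test value:

```python
import math

AU_KM = 1.495978707e8  # kilometers in one astronomical unit

def distance_km(total_shift_arcsec: float) -> float:
    """Distance to a star from the total angular shift seen six months apart.

    The parallax is half the total shift, because the two photographs
    are taken from opposite sides of the sun, i.e. across a 2 AU baseline.
    """
    parallax_rad = math.radians((total_shift_arcsec / 2) / 3600)
    return AU_KM / math.tan(parallax_rad)

# Proxima Centauri's parallax is about 0.768 arcseconds, so its
# total six-month shift is about 1.536 arcseconds.
print(f"{distance_km(1.536):.3e} km")  # ~4.02e13 km, about 1.3 parsecs
```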

So that’s where the parsec comes from: it’s the hypothetical distance at which a star would show a parallax of exactly one arcsecond. Conveniently, this makes the distance in parsecs simply the reciprocal of the parallax in arcseconds. In fact, real stellar parallaxes are all smaller than one arcsecond, which means that stellar distances are always greater than one parsec.
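A short calculation confirms the figures quoted earlier; only the standard values of the astronomical unit and the light-year are assumed:

```python
import math

AU_KM = 1.495978707e8  # kilometers in one astronomical unit
LY_KM = 9.4607e12      # kilometers light travels in one Julian year
KM_PER_MILE = 1.609344

# One parsec is the distance at which a 1 AU baseline subtends one arcsecond.
parsec_km = AU_KM / math.tan(math.radians(1 / 3600))

print(f"1 parsec = {parsec_km:.4e} km")                   # ~3.0857e13 km
print(f"1 parsec = {parsec_km / LY_KM:.2f} light-years")  # ~3.26
print(f"1 parsec = {parsec_km / KM_PER_MILE:.2e} miles")  # ~1.9e13 (19 trillion)

# The reciprocal rule: a star with parallax p arcseconds is 1/p parsecs away.
print(f"Parallax 0.768 arcsec -> {1 / 0.768:.2f} parsecs")  # ~1.30
```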

As logical as the definition of a parsec is, it’s still likely to come across as unnecessarily complicated to most people. By contrast, the light-year is much easier to understand. It’s simply the distance that light travels in a year, and it’s been in use since at least 1838.

The light-year even has a usefulness that goes beyond simple measurement, because it tells us that when we observe an object X light-years away, we’re seeing it as it was X years in the past. So why would anyone want to use parsecs instead?

The answer seems to be that, when astronomers first started measuring stellar distances using the parallax method, they simply presented their results in terms of “a parallax of X seconds” rather than converting to light-years.

Then around 1913, Herbert Hall Turner had the idea of shortening this to “parsec,” and the name stuck, even when other, non-parallax-based methods of measuring stellar distance were developed. Today, the International Astronomical Union recommends the use of parsecs over light-years in scientific papers, although the latter is still very common in popular usage.
