Even to casual stargazers it’s pretty obvious that the stars are of differing brightness. Astronomers always like to catalog and classify objects in the sky, and the brightness of stars is no exception. Over 2,000 years ago, the Greek astronomer Hipparchus devised the system we use for this purpose, called the magnitude scale.
In Hipparchus’s magnitude scale, the brightest stars were known as first magnitude and the faintest stars were sixth magnitude. Giving higher numbers to fainter stars sounds a little topsy-turvy until you swap the word ‘magnitude’ for the word ‘class’.
Looking at it this way, you start to see them as ‘first class’ stars, ‘second class’ stars and so on as the stars get fainter, which puts the scale into perspective. At the brighter end of the scale, magnitudes become a little awkward because some stars and other objects are brighter than first magnitude. There are stars of zero magnitude – wrongly suggesting they have no brightness at all – and objects brighter still are given negative magnitudes, as you can see in these examples:
The Sun: –27
Full Moon: –12
Venus (at its brightest): –4.4
Arcturus: –0.04
Vega: +0.03
Polaris: +1.99
Pluto: +13.9
With telescopes and imaging equipment like CCD cameras, you can go way beyond the sixth-magnitude limit of Hipparchus’s original scale and capture objects like Pluto, which is far too dim to be seen with the naked eye. The Hubble Space Telescope has managed to image objects as faint as magnitude +30. Don’t forget that an object’s magnitude doesn’t tell you how luminous it really is in itself; it’s a measure of apparent brightness – how bright the object looks from our vantage point here on Earth.
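If you like to put numbers on things, here is a minimal sketch of how magnitude differences translate into brightness ratios. It assumes the standard modern convention (not spelled out above) that a difference of five magnitudes corresponds to a brightness factor of exactly 100, so each magnitude step is a factor of about 2.512; the function name is just for illustration.

```python
def brightness_ratio(mag_fainter: float, mag_brighter: float) -> float:
    """How many times brighter the second object appears than the first,
    assuming 5 magnitudes = a factor of 100 in apparent brightness."""
    return 100 ** ((mag_fainter - mag_brighter) / 5)

# Using values from the list above:
print(brightness_ratio(1.99, 0.03))    # Vega vs Polaris: roughly 6x brighter
print(brightness_ratio(13.9, -4.4))    # Venus vs Pluto: about 20 million x brighter
print(brightness_ratio(30.0, 6.0))     # sixth magnitude vs Hubble's +30: ~4 billion x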