The scale used to indicate magnitude originates in the
Hellenistic practice of dividing stars visible to the naked eye into six
magnitudes. The
brightest stars in the night sky were said to be of
first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of
human visual perception (without the aid of a
telescope). Each grade of magnitude was considered twice the brightness of the following grade (a
logarithmic scale), although that ratio was subjective as no
photodetectors existed. This rather crude scale for the brightness of stars was popularized by
Ptolemy in his
Almagest and is generally believed to have originated with
Hipparchus. This cannot be proved or disproved, because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary on Aratus) clearly documents that he did not have a system for describing brightness with numbers: he always uses terms such as "big" or "small", "bright" or "faint", or even descriptions like "visible at full moon". In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio.
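A standard modern statement of Pogson's definition relates two stars' magnitudes to their measured fluxes (the symbols m_1, m_2, F_1 and F_2 are conventional, not Pogson's own notation):

<math display="block">m_1 - m_2 = -2.5 \log_{10}\left(\frac{F_1}{F_2}\right), \qquad \text{equivalently} \qquad \frac{F_1}{F_2} = 100^{(m_2 - m_1)/5},</math>

so a difference of five magnitudes corresponds to a flux ratio of exactly 100, and a difference of one magnitude to a ratio of 100^{1/5} ≈ 2.512.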
The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy for describing differences in brightness. Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements, which detect different kinds of light (possibly by using filters), have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit of the faintest visible star varies depending on the atmosphere and on how high the star is in the sky. The
Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 in each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, so Vega's magnitude is normalized to 0.03 by definition.
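Color index here is simply the difference between an object's magnitudes measured through two different filters; for the Johnson B and V bands, for example (a standard definition, not spelled out above):

<math display="block">B - V \equiv m_B - m_V,</math>

which is 0 by construction for calibration stars of Vega's spectral type.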
With modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths), as well as the bright planets Venus, Mars, and Jupiter; since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below.
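Because Pogson's ratio defines only magnitude differences, converting a difference into a brightness ratio is a one-line computation. A minimal Python sketch (the Sirius and Vega magnitudes are the approximate values quoted above):

<syntaxhighlight lang="python">
def flux_ratio(m1, m2):
    """Flux (brightness) ratio F1/F2 for apparent magnitudes m1 and m2.

    Pogson's ratio: five magnitudes correspond to a factor of 100 in flux,
    so one magnitude corresponds to a factor of 100**(1/5), about 2.512.
    """
    return 100 ** ((m2 - m1) / 5)

# Sirius (m about -1.4) compared with Vega (m about 0.03):
print(round(flux_ratio(-1.4, 0.03), 2))  # about 3.73: Sirius appears ~3.7x brighter
</syntaxhighlight>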
Astronomers have developed other photometric zero-point systems as alternatives to Vega-normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant
flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band. However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths.
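For reference, the AB zero point is commonly expressed in terms of the flux density per unit frequency f_ν (this standard formula is not given in the text above):

<math display="block">m_\text{AB} = -2.5 \log_{10}\left(\frac{f_\nu}{3631\ \text{Jy}}\right),</math>

so an object with a constant flux density of 3631 Jy has m_AB = 0 in every band.

== Measurement ==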