
12.6 Absolute Brightness

The amount of light that reaches the Earth from a distant object depends on a variety of factors: the distance to the object, the amount of light the object gives off in all directions, and the amount of intervening material that blocks some of the light from reaching us here on Earth. By far the two most important variables are distance and luminosity, where luminosity describes the amount of radiation emitted by an object. In astronomy, the amount of light that we actually receive at the Earth is referred to as the object's apparent brightness, and the amount of light that we would see at a fixed distance, arbitrarily chosen as 10 parsecs, is referred to as its absolute brightness; equivalently, it is a measure of the object's luminosity. Quantitatively, brightness is typically (but not always!) measured in units of Watts per square meter and is often referred to as flux. If the distance d is measured in parsecs, and extinction caused by intervening dust and gas can be ignored, then the relationship between apparent brightness (\(F_m\)) and absolute brightness (\(F_M\)) is given by:
\[ \text{Flux Ratio:} \qquad \frac{F_M}{F_m} = \frac{d^2}{10^2} \]
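
To make the relation concrete, here is a minimal Python sketch (the function name is ours, purely for illustration) that rearranges the flux ratio above to recover absolute brightness from a measured apparent brightness and a known distance in parsecs, assuming extinction can be ignored:

```python
def absolute_brightness(apparent_brightness, distance_pc):
    """Brightness the object would have at the reference distance of
    10 parsecs, ignoring extinction: F_M = F_m * (d / 10)**2."""
    return apparent_brightness * (distance_pc / 10.0) ** 2

# Illustrative example: a star at 20 pc would appear 4 times brighter
# if moved to the 10 pc reference distance.
print(absolute_brightness(1.0, 20.0))  # 4.0
```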

The distinction between apparent and absolute brightness would not be important if all stars had the same energy output. Imagine a large darkened room scattered with 100-Watt light bulbs. Assuming you had measured the distance and absolute brightness (the "wattage") of just one reference light bulb, you could deduce the distances of all other light bulbs by observing their apparent brightness and applying the inverse square law. For instance, a light bulb that appeared to be four times brighter than the reference bulb must be two times closer, and a light bulb that appeared nine times fainter than the reference bulb must be three times further away. In this situation, apparent brightness is an exact indicator of distance.
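
As a sketch of this reasoning (the function name and numbers are illustrative, not from the text), the inverse square law lets us turn a brightness ratio directly into a relative distance:

```python
import math

def relative_distance(brightness_ratio):
    """Distance of a bulb relative to the reference bulb, given the ratio
    (bulb's apparent brightness) / (reference bulb's apparent brightness).
    Identical wattages are assumed, so d_bulb / d_ref = 1 / sqrt(ratio)."""
    return 1.0 / math.sqrt(brightness_ratio)

print(relative_distance(4.0))      # 0.5 -> four times brighter means twice as close
print(relative_distance(1.0 / 9))  # ~3.0 -> nine times fainter means three times farther
```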

To understand the effect of the inverse square law, imagine three light sources whose apparent brightnesses are 50, 100, and 200 units at a distance of 1 meter, each twice as bright as the previous one. (Let's not worry about the specific units for a moment, since different units are often used depending on whether the light is collected by eye, with a telescope, and so on.) The apparent brightness of each source diminishes with the inverse square of the distance. Therefore, at a distance of 2 meters the apparent brightnesses would be \(50/2^2 = 12.5\) units, \(100/2^2 = 25\) units, and \(200/2^2 = 50\) units for the 50, 100, and 200 Watt sources, respectively. You will notice that at any distance, the apparent brightnesses of the three sources are in the same ratio as their absolute brightnesses.
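
The same arithmetic can be written as a short loop (again a sketch with made-up names, using the same 50, 100, and 200 unit sources as above):

```python
def apparent_at(brightness_at_1m, distance_m):
    """Apparent brightness at a given distance, for a source whose
    brightness is known at 1 meter (inverse square law)."""
    return brightness_at_1m / distance_m ** 2

for b in (50, 100, 200):
    print(b, apparent_at(b, 2.0))  # 12.5, 25.0, 50.0 (same 1:2:4 ratio as at 1 meter)
```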

But what if the sources are at different distances? Is it possible for all three to appear equally bright? Three light sources of 50, 100, and 200 Watts, which are intrinsically very different, would all appear to have a brightness of 25 units if the 50 Watt source were at a distance of \(\sqrt{50/25} = 1.41\) meters, the 100 Watt source at a distance of \(\sqrt{100/25} = 2\) meters, and the 200 Watt source at a distance of \(\sqrt{200/25} = 2.83\) meters. There are many other sets of distances at which the sources could be placed and still have the same apparent brightness, but the distances will always be in this ratio. Since apparent brightness is not a measure of the true output of the source, we have to look deeper to understand what is really happening at the light source. With a light bulb, it is easy: we can just look at the top and check the wattage. However, stars do not have their wattage written on them! Another complication to keep in mind is that the stars are all far enough away that we cannot see their different sizes; they all look like brighter or dimmer points of light.
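
A short calculation confirms the distances quoted above (a sketch under the text's convention that a W-Watt source has an apparent brightness of W units at 1 meter; the function name is ours):

```python
import math

def distance_for_brightness(wattage, target_brightness):
    """Distance in meters at which a source appears with the target
    brightness, given that it has `wattage` units of brightness at 1 meter."""
    return math.sqrt(wattage / target_brightness)

for w in (50, 100, 200):
    print(w, round(distance_for_brightness(w, 25), 2))  # 1.41, 2.0, 2.83 meters
```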

In the example of the light bulbs, we are ignoring the fact that we can see their physical sizes, which gives us an independent measure of their distance. Even if we could see a star's size, it would not help us estimate its absolute brightness or luminosity, since stars have a huge range of sizes.

The reality is, we don't actually know the true absolute brightness of most stars. When we are lucky, we can measure the distances to stars using parallax, and with known distances and measured apparent brightness we can calculate the absolute brightness. In other lucky cases, we can use pulsating variable stars, like the Cepheids, to determine distances to clusters of stars. But only in certain special scenarios (and there are more than we discuss here) can we determine how bright a star truly is.
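
For the parallax case, the chain of reasoning can be sketched in a few lines (the function name is invented for illustration; the relation d in parsecs = 1 / parallax in arcseconds is the standard definition of the parsec):

```python
def absolute_from_parallax(apparent_brightness, parallax_arcsec):
    """Combine a parallax distance with a measured apparent brightness to
    estimate the brightness the star would have at 10 parsecs."""
    distance_pc = 1.0 / parallax_arcsec          # d (pc) = 1 / p (arcsec)
    return apparent_brightness * (distance_pc / 10.0) ** 2

# A star with a 0.05 arcsecond parallax lies at 20 pc, so its absolute
# brightness is 4 times its apparent brightness.
print(absolute_from_parallax(1.0, 0.05))  # 4.0
```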

In the simple example of the 50, 100, and 200 Watt light bulbs, the range in luminosity is only a factor of four. In astronomy, stars differ enormously in luminosity, the amount of energy they emit each second. Imagine that a large room is scattered with bulbs of widely different wattage, ranging from 1-Watt night-lights to 10,000-Watt arc lamps. (A Watt is a measure of energy output per second, that is, of luminosity.) In this case, a 1-Watt bulb would have the same apparent brightness as a 100-Watt bulb ten times further away, or a 10,000-Watt bulb one hundred times further away. These are simple applications of the inverse square law. Apparent brightness is therefore a very poor indicator of distance. Conversely, the intrinsic energy output of a star cannot be calculated without knowing its distance. Astronomers must deal with this challenging problem of studying stars with very different light outputs, widely distributed through space.
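
The ten-times and hundred-times figures follow from the same square-root scaling, as this small sketch (with hypothetical names) shows:

```python
import math

def equal_brightness_distance(wattage, ref_wattage=1.0, ref_distance=1.0):
    """Distance at which a bulb appears exactly as bright as the reference
    bulb does at the reference distance: d = d_ref * sqrt(W / W_ref)."""
    return ref_distance * math.sqrt(wattage / ref_wattage)

print(equal_brightness_distance(100))     # 10.0 -> a 100-Watt bulb, ten times farther
print(equal_brightness_distance(10_000))  # 100.0 -> a 10,000-Watt arc lamp, a hundred times farther
```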