# 3.1: Blackbody Radiation


## "Thermalizing" Energy

Two of the most important achievements in theoretical physics happened within only a very short span of time, in the mid-to-late 19th century. One of these was by Ludwig Boltzmann, who provided a description of thermal physics (and particularly the entropy function) in terms of a statistical model. [We made use of his advances in Physics 9HB.] The other achievement was perhaps even more profound, and occurred roughly a decade before Boltzmann's description of entropy in terms of microstate multiplicity. James Clerk Maxwell (who also had a hand in describing thermal/statistical physics, and shares credit for the "Maxwell-Boltzmann distribution") mathematically unified the electric and magnetic forces, thereby providing an electromagnetic explanation for the phenomenon of light. It's only natural that work that followed the contributions of these two giants at the end of the 19th century would veer toward the interplay between these two ideas.

It doesn't take modern science to understand that the same light that allows us to see can also make materials hot when it is absorbed. It also comes as no surprise that when objects get very hot, they can give off light. Indeed, this is how Edison's light bulb worked – the filament gets very hot, and out comes the light! And since we know that light comes in a very wide spectrum that goes well beyond the visible in both directions, we can conclude that objects of all temperatures emit EM radiation, and that the type or amount of radiation must somehow depend upon the object's temperature. This question of the relationship between an object's temperature and the light it radiates falls squarely into this cross-over of Boltzmann's and Maxwell's work.

At the core of this question are two facts, one from each of the two fields of physics:

- The energy contained in the random movements of tiny particles (in our case, we are mostly interested in electrons) is what we refer to as "thermal energy", and according to kinetic theory, temperature and thermal energy are more-or-less proportional.
- These same randomly-moving particles often have electric charge, and according to electromagnetic theory, accelerated electric charges are the source of EM radiation.

The interesting point here is that *randomly* accelerated charged particles will vibrate at a large variety of frequencies, and since these vibrational frequencies match the frequencies of the light waves they emit, the emission of a hot object must have a variety of frequencies as well. So two questions now come to mind:

- How does the rate of energy emission from a hot object (in the form of radiation), relate to the temperature of that object?
- How is the energy that is emitted from a hot object distributed across the spectrum of frequencies of the emitted waves?

To answer these questions experimentally requires that some clever steps be taken to maintain controls on the experiment. For example, if we take careful measurements of light coming from a hot object, might not some of that light simply be light that originated outside the object and is reflected off it? If the object is blue in color, then the energy that comes from the object in the reflected blue light would throw off the calculation of the radiated power in the blue region of the spectrum due to the thermal properties alone. So ideally, what we want to measure radiation from is a *blackbody*. This is an object that effectively doesn't reflect any light. Any light that goes into it can have any distribution of frequencies whatsoever (it can even be monochromatic), but it is totally absorbed by the blackbody, with its energy "thermalized" into random particle motions within the body. With such an object, one can measure the radiation emitted, safe in the knowledge that all of it comes from a thermal source of a known temperature. How do we create (or rather, approximate) such a creature?

**Figure 3.1.1 – A Blackbody Cavity**

Suppose we create a cavity with irregular interior walls, and a small hole that connects the interior and exterior. Light will be reflected off the outer surface of this object, but in the region of the hole, light that goes in gets reflected around enough times that it is effectively entirely absorbed, and doesn't reemerge from the hole. This means that all the radiation that comes out of the hole can only have the thermal vibrations of the particles inside the cavity as a source – blackbody radiation! We can now stick a thermometer into the cavity, and take measurements of the light that emerges to answer questions 1 and 2 above. It should be noted that if the object is at the same temperature as its surroundings, then it is in thermal equilibrium with its environment, which means that for every joule of energy that enters the hole, one joule also emerges from it. But the light that goes into the cavity can be of any mix of frequencies, while the light that emerges must come out in a specific distribution across the spectrum that is a function of only the temperature.

Question 1 above doesn't consider at all the distribution across the various frequencies of emitted light, so it is the easier of the two questions to understand and answer. This question was answered by Josef Stefan in 1877, by analyzing data from an experiment performed by John Tyndall 13 years earlier. Stefan's empirical answer was later confirmed theoretically by a calculation performed by Boltzmann (which we will not reproduce here), yielding what is now known as the *Stefan-Boltzmann law*:

\[P_{blackbody}=\sigma AT^4~,~~~~~\sigma = 5.67\times 10^{-8}\frac{W}{m^2K^4}\]

The "\(P\)" in this equation is power, and the "\(A\)" is the surface area of the blackbody (from which the radiation emanates). For our little model above, this would be the area of the tiny hole, out of which the energy radiates. For what we generally think of as a "blackbody" which radiates in every direction, it is the full surface area of the object. The only adjustment to this law involves an additional factor \(e\) (called the *emissivity*), which is a number less than or equal to 1 that takes into account the fact that the object might not be a perfect blackbody (\(e=1\)).
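The Stefan-Boltzmann law is easy to put to work numerically. The sketch below computes the radiated power, including the emissivity factor; the surface area and temperatures used are illustrative values, not taken from the text.

```python
# Sketch: total power radiated by a (possibly imperfect) blackbody,
# P = e * sigma * A * T^4, per the Stefan-Boltzmann law.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def blackbody_power(area_m2, temp_K, emissivity=1.0):
    """Radiated power in watts; emissivity = 1 for a perfect blackbody."""
    return emissivity * SIGMA * area_m2 * temp_K**4

# The T^4 dependence is dramatic: doubling the temperature multiplies
# the radiated power by 2^4 = 16.
p_300 = blackbody_power(1.0, 300.0)  # 1 m^2 surface at room temperature
p_600 = blackbody_power(1.0, 600.0)
print(p_600 / p_300)  # ratio ≈ 16
```

Note how quickly the power climbs: the fourth-power dependence is why a modest temperature increase makes an object glow so much more brightly.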

This result for the total emission of the blackbody is not what interests us most here. It is question 2 that starts us down the crazy path on which we are about to embark.

## Rayleigh-Jeans Law

The cavity version of a blackbody gives us another picture that is helpful for analysis, if we ask what is going on *inside* the cavity at steady-state. Clearly EM waves are being bounced around inside, and occasionally a wave has just the right direction to head out of the hole, while occasionally a wave enters the hole as well, keeping the energy inside the cavity constant. Okay, so this language makes it sound like the emergence of a wave from the hole is a rare event, when in fact there are constantly light waves passing in and out, but the point is that inside the cavity is a flurry of light wave activity. Each of these waves carries energy, and as a finite amount of energy is distributed among a random sampling of waves, we can use what we learned in 9HB about the probability of finding that a single entity in a collection has a specific energy. We found that according to the Boltzmann distribution, the average energy per particle in a system at temperature \(T\) is:

\[\left<E\right>=\frac{\sum\limits_{\text{all E}} E~e^{-\frac{E}{k_BT}}}{\sum\limits_{\text{all E}} e^{-\frac{E}{k_BT}}}\]

[Reminder: \(k_B=1.38\times 10^{-23}~J/K\) is the *Boltzmann constant*.]

From what we have learned about waves, we know that their energy is a function of their characteristics. For example, we found that the energy in a single wavelength of a wave on a string (Equation 1.3.7) is proportional to the frequency and the square of the amplitude (as well as some characteristics of the medium, such as the string density and the speed of the wave). So given the continuum of frequencies and amplitudes possible, the possible energies lie on a continuum, and our "sums" are really integrals:

\[\left<E\right>=\frac{\int\limits_0^\infty E~e^{-\frac{E}{k_BT}}dE}{\int\limits_0^\infty e^{-\frac{E}{k_BT}}dE}\]

These are not difficult integrals to perform (or look up), and the end result is that the average energy per particle should be:

\[\left<E\right>=k_BT\]
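This result can be checked numerically. The sketch below approximates the two integrals above with a midpoint Riemann sum and confirms that the ratio comes out to \(k_BT\) at any temperature (the upper cutoff of \(50\,k_BT\) is an assumption; the integrand is negligible beyond it).

```python
import math

# Sketch: numerically confirm <E> = ∫E e^(-E/kT) dE / ∫e^(-E/kT) dE = kT.

KB = 1.38e-23  # Boltzmann constant, J/K

def average_energy(temp_K, n_steps=200000, e_max_factor=50.0):
    """Midpoint-rule approximation of the Boltzmann average energy."""
    kt = KB * temp_K
    e_max = e_max_factor * kt   # integrand is negligible beyond ~50 kT
    de = e_max / n_steps
    num = den = 0.0
    for i in range(n_steps):
        e = (i + 0.5) * de      # midpoint of each slice
        w = math.exp(-e / kt)
        num += e * w * de
        den += w * de
    return num / den

for t in (100.0, 300.0, 1000.0):
    print(average_energy(t) / (KB * t))  # ≈ 1.0 at every temperature
```

The temperature dependence divides out exactly, which is the point: on a continuum of energies, every wave gets an average of \(k_BT\), no matter what its frequency is.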

This seems reasonable, but next we need to see what this implies for the intensity of the EM waves coming from a blackbody *as a function of their frequencies*. Let's consider what happens if we look only at light of a single frequency. This subset of all the waves still comes in a random distribution, but now the only characteristic of the waves that is random is the amplitude. The average energy of each of these waves is fixed at \(k_BT\), so the total energy of this group of waves with this specific frequency is just the number of waves in the group multiplied by this average. If we look at a group characterized by another frequency, once again the waves in the group have the same average energy, but there may be more or fewer of them than in the previous group. Suppose, for example, that we see twice the number of waves at frequency \(f_1\) as we see at \(f_2\). With each wave getting the same average energy of \(k_BT\), this would mean that the light that emerges with frequency \(f_1\) would be twice as intense as the light that emerges with frequency \(f_2\).

Of course, we can't really talk about the number of waves at a *precise* frequency, since frequencies lie on a continuum (two randomly-chosen waves could never have exactly the same frequency). Instead, we can only talk about the number of waves that lie within a *range* of frequencies. So for example, we could compare the number of waves we see with a range of frequencies between \(f_1\) and \(f_1+\Delta f\) to the number we see in the same-sized frequency range between \(f_2\) and \(f_2+\Delta f\). Then the question becomes, if we take two ranges of frequencies, one of them near \(f_1\) and the other near \(f_2\) (and let's say that \(f_2>f_1\)), in which of these ranges will we find more individual waves? We need to know this, because the amount of energy in that range (which directly relates to the light intensity in that range) is the number of waves multiplied by the average energy per wave.

If this were a one-dimensional wave, then the number of waves in each range would be equal, because such waves are uniquely defined by their frequencies. But in three dimensions, the wave has more degrees of freedom, and many different waves can have the same frequency. It turns out that the higher the frequency, the more such waves are possible. The rate at which the number of waves grows with respect to the increase in frequency is called the *density of states*. All densities are an amount of something *per* something else (like mass density is mass *per* volume), and this is the number of "states" one can find a wave in *per* frequency. We will not go into a derivation of this quantity for a collection of light waves with frequency \(f\) at equilibrium in a confined space of volume \(V\) (this is referred to as a *photon gas*), but it comes out to be:

\[\frac{dN}{df}=\frac{8\pi V}{c^3}f^2\]

Here one can think of \(V\) as the volume of the blackbody cavity (volume occupied by the photon gas), and of course \(c\) is the speed of light. But more generally, \(V\) can be any arbitrary volume where this radiation is present.

Okay, so returning to our original question of the intensity of blackbody radiation as a function of frequency of that radiation, we can now say how much energy there is collectively in the waves within a frequency range from \(f\) to \(f+df\). The energy in that range is the energy per wave times the number of waves. In this infinitesimal range, of course the energy is infinitesimal:

\[dE=\left<E_{wave}\right>\cdot dN=\frac{8\pi k_BT}{c^3}Vf^2df\]

It's generally easier to talk about an energy density, as it is defined at specific points in space and does not depend upon defining a volume. Dividing the energy by the volume, we get:

\[u\equiv\frac{E}{V}~~~\Rightarrow~~~\Psi\left(f,T\right)\equiv\frac{du}{df} = \frac{8\pi k_BT}{c^3}f^2\]

The function obtained after dividing through by \(df\) indicates how the energy density at a point in space relates to the range of frequencies of EM waves at that point. That is, integrating the function \(\frac{du}{df}\) over a range of frequencies gives the energy density that results from the EM waves that are in that frequency range. This function therefore not only involves a density in space, but also over frequencies. This latter density in frequency gives it its name: *spectral energy density*. This result is known as the *Rayleigh-Jeans law*.
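The Rayleigh-Jeans spectral energy density can be sketched in a few lines of code. The frequencies and the solar-like temperature of 5800 K used below are illustrative assumptions; the formula itself is the one just derived.

```python
import math

# Sketch: Rayleigh-Jeans spectral energy density,
# Psi(f, T) = 8*pi*kB*T*f^2 / c^3, in J / (m^3 Hz).

KB = 1.38e-23  # Boltzmann constant, J/K
C = 3.0e8      # speed of light, m/s (rounded)

def rayleigh_jeans(f_Hz, temp_K):
    """Energy density per unit frequency at a point in the radiation field."""
    return 8.0 * math.pi * KB * temp_K * f_Hz**2 / C**3

# The f^2 growth never turns over: every factor of 10 in frequency
# raises the predicted spectral density by a factor of 100.
print(rayleigh_jeans(1e15, 5800.0) / rayleigh_jeans(1e14, 5800.0))  # ≈ 100
```

That relentless \(f^2\) growth, with no peak and no turnover, is exactly the feature that will get this formula into trouble shortly.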

The reader may be concerned that we seem to have used "energy density" and "intensity" interchangeably – as if the energy density of the light at a point in space is synonymous with its brightness. In fact these two quantities are very closely related to each other. Consider a small cubical region in space, through which a plane light wave is moving parallel to two sides. This energy is being carried by a light wave, and all of it passes out of the cube, through one of the cube's faces.

**Figure 3.1.2 – Relating Energy Density to Intensity**

The time it takes this to occur is the distance traveled (the length of one edge of the cube), divided by the wave speed:

\[dt=\frac{dx}{c}\]

The power delivered through the cube face is the energy divided by this time:

\[P=\frac{dE}{dt}=\frac{dE}{dx}c\]

The intensity is the power delivered divided by the area through which it passes:

\[I=\frac{P}{A}=\frac{\frac{dE}{dx}c}{dydz}=\frac{dE}{dxdydz}c\]

And the energy density in this region is the total energy divided by the volume, \(dV=dxdydz\), so the intensity and energy density only differ by a factor of \(c\).

While this is true for a plane wave, for the light coming from the tiny hole in our blackbody cavity, the light spreads as it exits. This change of geometry leads to a smaller intensity (imagine the same light wave in the diagram above coming out of five of the cube's surfaces, rather than just one). It's far too much detail to go into the effect this has on the relation between the energy density and the intensity of the radiation coming from our blackbody (the reader who wishes to fall down a deep rabbit hole can look up "Lambert's cosine law"), but the upshot is that it changes the energy density/intensity relation by an additional multiplicative factor.

## Wien's Contribution

Let's take a moment to ask the most important question in physics: "What does the Rayleigh-Jeans result predict, physically?" Suppose we filter the light coming from a blackbody according to frequency (for the visible spectrum, we could do this simply with a diffraction grating), and measure the intensity of the light as the frequency rises. We should see that the higher frequencies are always brighter than the lower ones, since the energy in a small range grows indefinitely in proportion to \(f^2\). In the case of visible light, we would see orange brighter than red, yellow brighter than orange, and so on, forever, *including* frequencies above the visible spectrum. There is only a finite amount of energy available, and clearly there is *some* energy at the lower frequencies that we measure, so since there is no upper-bound on the frequency the light can have, the total energy becomes unbounded. This disaster of a prediction became known as the "ultraviolet catastrophe", because for the temperatures being studied (namely the temperature of the surface of the sun, which approximates a blackbody and gives us ample light to observe), the energy content of the light actually starts to *decline* (not increase unbounded) around the high end of the visible spectrum.

Despite the "catastrophic" result beyond a certain frequency, the Rayleigh-Jeans result actually predicts results pretty well for low frequencies. But as indicated above, the intensity of the light peaks at some value of frequency and comes down again. This behavior, as well as a very good approximation to the observed results for high frequencies, was modeled by a fellow named Wilhelm Wien. The approximate result Wien obtained was:

\[\Psi\left(f,T\right)=\frac{8\pi h}{c^3}f^3e^{-\frac{hf}{k_BT}}\]

The constant \(h\) that appears here was not computed by Wien (it is included here for later comparison) – he was just proposing the general functional form. This function certainly looks significantly closer to the experimental results for this curve, but it suffered from two ailments that Rayleigh-Jeans did not: The physics to explain its origin was not clear, and it failed to properly model the low frequency emissions of blackbodies.

**Figure 3.1.3 – Rayleigh-Jeans, Wien, and Experimental Results**

There was, however, one result that came from Wien's model that holds equally true for what we see in nature. The graph above is for a specific blackbody temperature (which we called \(T_o\) in the graph). If we look at the curve for the same blackbody at another (let's say higher) temperature, what do we see? We can already make one guess, from two things that we know:

- This is a curve that represents energy density over frequencies. If we integrate a density function over a range, we get the total amount in that range. For example, if we integrate a mass density over a volume, we get the total mass within that volume. Therefore, if we want to know the light energy density (or equivalently, the light intensity) at a point in space that comes only from a range of frequencies, we integrate over the frequency range:

\[u\left(T\right)=\int\limits_{f_1}^{f_2}\Psi\left(f,T\right)df\]

- The *total* power coming from the blackbody grows with the fourth power of the temperature, according to Equation 3.1.1. This total power is over the entire spectrum of frequencies.

Putting these together, it is clear that the area under the *entire* curve must grow as the temperature rises, and the curve has to maintain its general features, so we expect the peak value to rise. But it turns out that there is another effect that comes from changing the temperature as well: The peak of the curve displaces to the right with an increase in temperature, and to the left with a decrease. But more precisely, it displaces an amount that is *proportional to the temperature change*.

**Figure 3.1.4 – Displacement of Blackbody Curve with Temperature**

This (not coincidentally) is a feature of both Wien's curve and what is experimentally seen, though the constant of proportionality between the frequency where the peak occurs and the temperature of the blackbody is not the same in both cases. Using the correct constant, we have what is now known as *Wien's displacement law*:

\[f_{peak}=\alpha T~,~~~~\alpha = 5.879\times 10^{10}\frac{Hz}{K}\]
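A quick numerical sketch of the displacement law follows. The 5800 K figure is an assumed round number for the temperature of the sun's surface, used only for illustration.

```python
# Sketch: Wien's displacement law, f_peak = alpha * T.

ALPHA = 5.879e10  # Hz / K

def peak_frequency(temp_K):
    """Frequency at which the blackbody spectral energy density peaks."""
    return ALPHA * temp_K

# A sun-like surface (~5800 K, assumed) peaks at roughly 3.4e14 Hz,
# near the low (red/infrared) end of the visible band.
print(peak_frequency(5800.0))

# The proportionality means doubling T exactly doubles the peak frequency.
print(peak_frequency(600.0) / peak_frequency(300.0))  # 2.0
```

The linear dependence is the whole content of the law: the peak *displaces* in direct proportion to the temperature.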

## Planck's Puzzling Fix

Max Planck knew what the spectral energy density curve for a blackbody looked like, and decided that rather than try to figure out where the physics went wrong, he would see what math was required, and then try to work backwards to the physics. The calculation leading to the ultraviolet catastrophe came in two parts – the average energy per wave, and the density of states. There's really no physics in the latter calculation, so the only place where a change can be made is in the calculation of the average energy per wave. There was no arguing with Boltzmann's probabilities either, so Planck tried making a different assumption about the distribution of energy amongst the waves.

Looking back once again at Equation 1.3.7, we note that the energy of one wavelength of a string wave is proportional to its frequency and its amplitude-squared. If we limit ourselves to looking at the energy of only waves of a given frequency, then the energy of these waves depends only on the amplitude, but as Rayleigh and Jeans assumed, this amplitude varies on a continuum. That is, if you wanted a light wave of frequency \(f\) to have slightly more energy, you could give it as little additional energy as you like, because you are free to increase the amplitude by an infinitesimal amount. Planck noticed that the calculation takes a different turn if we *don't* assume this. He speculated that perhaps there was a minimum amount by which you could change the energy, and that this amount was proportional to the frequency. That is, he posited that perhaps the possible energies for a light wave, instead of being continuous, might be discrete, and simply be a multiple of a minimum energy, \(\epsilon\):

\[\epsilon=hf~~~\Rightarrow~~~E_n=n\epsilon=nhf\]

The constant \(h\), now known as *Planck's constant*, is:

\[h=6.63\times 10^{-34}~J\cdot s\]

This assumption profoundly changes the calculation for the average energy per wave. With the energies no longer on a continuum, the Boltzmann-distribution calculation of the average is no longer an integral; instead, Equation 3.1.2 becomes:

\[\left<E\right>=\frac{\sum\limits_{n=0}^{n=\infty} E_n~e^{-\frac{E_n}{k_BT}}}{\sum\limits_{n=0}^{n=\infty} e^{-\frac{E_n}{k_BT}}}=\frac{\sum\limits_{n=0}^{n=\infty} nhf~e^{-\frac{nhf}{k_BT}}}{\sum\limits_{n=0}^{n=\infty} e^{-\frac{nhf}{k_BT}}}\]

These infinite series might seem daunting at first, but with a simple substitution, they take the appearance of common geometric series whose sums are well-known:

\[\alpha\equiv e^{-\frac{hf}{k_BT}}~~~\Rightarrow~~~\left<E\right>=hf\frac{\sum\limits_{n=0}^{n=\infty}n\alpha^n}{\sum\limits_{n=0}^{n=\infty}\alpha^n}\]

These sums can be looked up or computed (the denominator from a couple of lines of algebra, the numerator by differentiating the denominator), and they are:

\[\sum\limits_{n=0}^{n=\infty}n\alpha^n=\frac{\alpha}{\left(1-\alpha\right)^2}~~~~~~~~\sum\limits_{n=0}^{n=\infty}\alpha^n=\frac{1}{1-\alpha}\]

Plugging back in for \(\alpha\) and putting the series sums back in above gives:

\[\left<E\right>=\frac{hfe^{-\frac{hf}{k_BT}}}{1-e^{-\frac{hf}{k_BT}}}=\frac{hf}{e^{\frac{hf}{k_BT}}-1}\]
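The series manipulation above can be verified directly: summing the discrete Boltzmann average term by term reproduces the closed form. The frequency and temperature below are illustrative choices.

```python
import math

# Sketch: verify that the truncated sum over discrete energies E_n = n*h*f
# matches Planck's closed form <E> = h*f / (exp(hf/kT) - 1).

H = 6.63e-34   # Planck's constant, J s
KB = 1.38e-23  # Boltzmann constant, J/K

def planck_average_series(f_Hz, temp_K, n_terms=2000):
    """Boltzmann average of E_n = n*h*f, summed term by term."""
    num = den = 0.0
    for n in range(n_terms):
        w = math.exp(-n * H * f_Hz / (KB * temp_K))
        num += n * H * f_Hz * w
        den += w
    return num / den

def planck_average_closed(f_Hz, temp_K):
    """The closed-form result of the geometric-series sums."""
    return H * f_Hz / (math.exp(H * f_Hz / (KB * temp_K)) - 1.0)

f, t = 5e14, 5800.0  # visible-range frequency, sun-like temperature (assumed)
print(planck_average_series(f, t) / planck_average_closed(f, t))  # ≈ 1.0
```

The series converges very quickly because each successive term is suppressed by another factor of \(e^{-hf/k_BT}\), so a few thousand terms is far more than enough.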

Multiplying this average energy by the density of states gives a very different spectral energy density than that obtained by Rayleigh and Jeans, and it differs from the one produced by Wien only by a "\(-1\)" term in the denominator!

\[\Psi\left(f,T\right)=\frac{8\pi h}{c^3}\frac{f^3}{e^{\frac{hf}{k_BT}}-1}\]
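A short sketch makes Planck's interpolation concrete: at low frequencies his formula reduces to Rayleigh-Jeans, and at high frequencies it approaches Wien's form. The sample frequencies and the 5800 K temperature are illustrative assumptions.

```python
import math

# Sketch: Planck's spectral energy density interpolates between the
# Rayleigh-Jeans law (low f) and Wien's approximation (high f).

H = 6.63e-34   # Planck's constant, J s
KB = 1.38e-23  # Boltzmann constant, J/K
C = 3.0e8      # speed of light, m/s (rounded)

def planck(f, T):
    return (8.0 * math.pi * H / C**3) * f**3 / (math.exp(H * f / (KB * T)) - 1.0)

def rayleigh_jeans(f, T):
    return (8.0 * math.pi * KB * T / C**3) * f**2

def wien(f, T):
    return (8.0 * math.pi * H / C**3) * f**3 * math.exp(-H * f / (KB * T))

T = 5800.0  # sun-like temperature, assumed for illustration
print(planck(1e12, T) / rayleigh_jeans(1e12, T))  # ≈ 1 when hf << kT
print(planck(1e16, T) / wien(1e16, T))            # ≈ 1 when hf >> kT
```

The "\(-1\)" in the denominator is what does the work: for \(hf \ll k_BT\) it turns the exponential into the linear term that recovers \(k_BT\) per wave, while for \(hf \gg k_BT\) it becomes negligible and Wien's exponential falloff takes over.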

While this solution perfectly matched the experimental data, even Planck himself didn't believe the physics that leads up to it. He was certain someone would find an explanation for this curve other than assuming that the energy of a light wave is *quantized* into little packets that are multiples of \(hf\).