# 2. Quantized Energies

Until this unit, our model of energy allowed a particle to have any value of energy. In the quantum mechanics model, this is still true for particles moving freely through space, but the energy of a *confined* particle is **quantized** – meaning only certain values of energy are allowed, as discussed in the introduction. Like other models developed in this volume, understanding a few key ideas about quantized energy levels will enable us to make sense of a variety of phenomena, from the emission spectrum of the hydrogen atom to the unfreezing of modes in vibrating atoms.

# The Energy Spectrum

### Energy Levels

When we describe the energy of a particle as quantized, we mean that only certain values of energy are allowed. Perhaps a particle can only have 1 Joule, 4 Joules, 9 Joules, or 16 Joules of energy. In this case, whenever we measure the particle’s energy, we will find one of those values. If the particle is measured to have 4 Joules of energy, we also know how much energy the particle can gain or lose. It can only gain the exact amount of energy needed to reach one of the higher energy levels, and it can only lose the exact amount of energy needed to reach a lower energy level. In this case, the particle with 4 Joules of energy can gain either 5 Joules (to reach the 9 J level) or 12 Joules (to reach the 16 J level). No other amount of energy could be added to the particle (unless there were more available energy levels). Similarly, the only lower energy state is 1 J, so if the particle lost energy, it could only lose exactly 3 Joules. The available energy state with the least amount of energy is called the **ground state**.
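The bookkeeping in this hypothetical spectrum can be sketched in a few lines of Python (the 1 J / 4 J / 9 J / 16 J values are the made-up ones from the text, not a physical system):

```python
# Hypothetical, illustrative energy spectrum from the text: 1 J, 4 J, 9 J, 16 J.
spectrum = [1.0, 4.0, 9.0, 16.0]  # allowed energies, in joules

def allowed_transfers(current, spectrum):
    """Return the exact amounts of energy the particle can gain (+) or lose (-)."""
    return sorted(level - current for level in spectrum if level != current)

# A particle measured at 4 J can only lose 3 J, or gain 5 J or 12 J.
print(allowed_transfers(4.0, spectrum))  # [-3.0, 5.0, 12.0]
```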

How does a particle change its energy level? If a particle goes to a higher energy level, it must have gained that energy from another system. Likewise, if a particle goes to a lower energy level, that energy must be transferred into another system. We know that energy is conserved if we consider all systems involved in the process of energy transfer. So what systems gain or lose energy to our particles? The most common systems that accept energy from and give energy to systems of small particles are electromagnetic waves (light) and the vibrations of molecules (heat or sound). We will talk about the energy of light in more detail below.

### Potential Determines the Energy Spectrum

The collection of allowed energies a system may have is called the **energy spectrum** of that system. The levels in the spectrum tell us the *total energy* the particle is allowed. Typically, the total energy is the sum of the kinetic and potential energies. The energies that a system is allowed to have depend on the potential energy of the system. We call potential energy \(PE(r)\) because we typically explore circumstances where \(PE\) depends on one variable \(r\). In effect, the potential energy forms a “container” of sorts that confines the particle to a specific range of \(r\). Each unique container has its own set of energy levels. We explore the different spectra of different \(PE\) "containers" in the examples below.

As discussed earlier in this volume and in previous quarters, the *exact* potential energy of a particle is not important; only *changes* in potential energy matter. In quantum mechanics, this is still the case. In effect, we can set "zero" potential energy to be at any level, so in our examples we will establish conventions that make interpreting the results as simple as possible.

Exercise

In the example mentioned in the text where the particle can have 1 J, 4 J, 9 J, or 16 J of energy, what is the energy spectrum? What is an example of an energy level? (there is no physics involved here, just a check that you understand the definitions)

## Light is Quantized

Just as we can think of ordinary matter being quantized (as illustrated by the example with water), we also find that light comes in indivisible quanta that we refer to as **photons**. Unlike atoms, we do not model these photons as being made up of smaller particles. We will discuss photons in more detail later, but at the moment we establish two facts about photons:

- The energy of a photon depends on its frequency \(f\): \[E_{photon} = h f\] where \(h = 6.626 \times 10^{-34} \text{ J s}\) is **Planck's constant**, a universal constant. We know from earlier that the frequency is what determines the type of light we're discussing. Photons of different colors or different types of light have different frequencies and therefore have different energies.
- At a particular frequency, one photon is the smallest amount of light that can exist.
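As a quick numerical check of the photon-energy relation, we can evaluate \(E_{photon} = hf\) for two assumed frequencies (the red and violet values below are rough, illustrative numbers, not from the text):

```python
h = 6.626e-34  # Planck's constant, J s

def photon_energy(f):
    """Energy of a single photon of frequency f (in Hz), in joules."""
    return h * f

# Red light (~4.3e14 Hz) vs violet light (~7.5e14 Hz), illustrative values:
print(photon_energy(4.3e14))  # ~2.85e-19 J
print(photon_energy(7.5e14))  # ~4.97e-19 J
```

Higher frequency means more energy per photon, which is why the type of light matters and not just its brightness.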

A useful analogy to different types of light is different elements. To a good approximation we can imagine that mass is quantized; the quantum of mass is the mass of individual atoms (after all, it is hard to break apart atoms!). However the atoms of different elements have different masses. Similarly, each photon is an indivisible quantum of light, but photons of different frequencies contain different amounts of energy. So the smallest amount of energy in light is different for each type of light. The light that we experience every day is made up of many photons in a range of frequencies, so we don’t notice the quantized nature of light any more than we notice the individual atoms in everyday materials.

## Three Potentials and Their Energy Spectra

Gaining a better understanding of three important energy spectra will help us learn about a large variety of phenomena. The first spectrum we will consider is that of the *infinite potential well*. In this system, a particle is trapped between two points in one dimension, and can't escape no matter how much energy it has. We contrast this with a *harmonic oscillator* (like a mass on a spring), where the width of the "well" depends on how much energy the particle has. The third system we will explore is the system of an *electron bound to a nucleus* (a hydrogen atom). Energy spectrum examples for each of these three systems are shown below (respectively, left to right).

By convention, the vertical axis represents energy; the horizontal has no meaning. When looking at these spectra, notice how the spacing between consecutive energy levels is different in the three situations. The levels are evenly spaced for the oscillator (center), closer together at low energies for the infinite well (left) and closer together at higher energies for the hydrogen atom (right).

### The Infinite Potential Well

The infinite potential well is a system where a particle is trapped in a one-dimensional box of fixed size, but is completely free within the box. To keep the particle trapped in the same region *regardless* of the amount of energy it has, we require that the potential energy is *infinite* outside this region (hence the name "infinite potential well"). We then set "zero" potential energy to be the energy inside the box. The graph below shows the potential energy of a well with length \(L\).

The infinite well seems to be the least useful of the situations we will study; very few physical situations are similar to the infinite well. We introduce this system because it has the simplest potential available. If a particle is inside the box then it has no potential energy. If the particle is anywhere else, it has infinite potential energy. Because our particles can only have finite energy, this ensures the particle stays in the box. Also, since there is zero potential energy inside the box, the total energy of the particle is equal to its kinetic energy. If the particle gains total energy, we know it must have gained kinetic energy.

We derive the equation for a trapped particle later, but for now, we will make sense of the equation without worrying about its derivation. Suppose that the length of the box is \(L\), and the particle trapped in this potential has a mass \(m\). Then the allowed energies are \[E_n = n^2 \dfrac{h^2}{8 m L^2}\]for any *positive* integer \(n\), where \(h\) is Planck's constant again. Because \(n \geq 1\), the minimum amount of energy the particle can have is \(E_1 = \frac{h^2}{8 m L^2}\). The potential energy is zero inside the box, so the particle *always* has some kinetic energy. For a quantum particle in a box it is *impossible* to sit at rest.
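A minimal numerical sketch of this formula, using an assumed example of an electron in a 1 nm box (these values are illustrative and not from the text):

```python
h = 6.626e-34  # Planck's constant, J s

def infinite_well_energy(n, m, L):
    """Allowed energy E_n = n^2 h^2 / (8 m L^2) for mass m in a box of length L."""
    return n**2 * h**2 / (8 * m * L**2)

# Assumed example: an electron (m = 9.11e-31 kg) in a 1 nm box.
m_e = 9.11e-31  # kg
L = 1e-9        # m
for n in (1, 2, 3):
    print(n, infinite_well_energy(n, m_e, L))
# The ground-state (n = 1) energy is nonzero: the particle can never be at rest.
```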

Example #2

Suppose you have a particle in the ground state of the infinite square well potential. You also have a device allowing you to add energy to the particle. You try to add an infinitesimally small amount of energy, but nothing happens, because only certain amounts of energy can be gained by the particle. What is the smallest amount of energy you can successfully transfer?

**Solution**

We know that the particle’s energy is quantized, and that the only allowed energies are \(E_n = n^2 \frac{h^2}{8mL^2}\) for different values of \(n\). If the particle begins in the ground state, then \(n_{initial} = 1\). The particle cannot gain energy unless it can transition to the next higher energy level, \(n_{final} = 2\). The energy we can add corresponds to the difference in energy levels: \[\Delta E = E_{final}-E_{initial} = E_2 - E_1\] \[=(4-1) \dfrac{h^2}{8 mL^2} = 3 \dfrac{h^2}{8 mL^2}\] The *only* way the particle will make the transition from the ground state to the first excited state is if something transfers \(3 \dfrac{h^2}{8mL^2}\) of energy into it, giving the particle just the right amount of energy to make the transition.

For the infinite well potential, the energy levels are proportional to \(n^2\). This means the gaps between lower energy levels are smaller than those between higher energy levels. So as the particle gains energy, it takes *more* energy to transition to a higher level. Also, the energy gap between consecutive levels is smaller if \(L\) is bigger. So if the potential well becomes wider, it becomes easier to transition between levels.

Example #3

The protons and neutrons of an atom are confined to the nucleus. We will model the nucleus as an inescapable box of size \(10^{−15} \text{ m}\) (typical for atomic nuclei). Give an estimate of how much energy we would need to move a proton in Helium up to the next energy level.

**Solution**

From our previous example we know that the amount of energy needed to transition from the ground state to the second state is \[\Delta E_{1 \rightarrow 2} = 3 \dfrac{h^2}{8 m L^2}\] Putting in the numbers we get \[\Delta E_{1 \rightarrow 2} = 3 \dfrac{(6.626 \times 10^{-34} \text{ J s})^2}{(8)(1.67 \times 10^{-27} \text{ kg})(10^{-15} \text{ m})^2} = 9.9 \times 10^{-11} \text{ J} \approx 6 \times 10^8 \text{ eV}\] How did we know the proton would start in the ground state? We guessed! This answer is only a rough estimate, but it gives us some idea of the amount of energy involved. To make a meaningful comparison, the amount of energy it takes to break a chemical bond has a typical magnitude of 1 eV.
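The estimate can be reproduced numerically with the same inputs used in the text:

```python
h = 6.626e-34   # Planck's constant, J s
m_p = 1.67e-27  # proton mass, kg
L = 1e-15       # nuclear "box" size, m
eV = 1.6e-19    # joules per electron-volt

# Energy to go from the ground state (n=1) to the first excited state (n=2):
dE = 3 * h**2 / (8 * m_p * L**2)
print(dE, dE / eV)  # ~9.9e-11 J, ~6e8 eV
```

Six hundred million electron-volts, compared to the ~1 eV scale of chemical bonds, shows why nuclear energies dwarf chemical ones.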

### Simple Harmonic Oscillator

The next potential we consider is the harmonic oscillator. A microscopic particle that is constrained by a spring-like potential (for instance, the atomic bond potential) will also have quantized energy levels. We don't actually discuss springs, but we still use a *spring constant* \(k_{spring}\) in describing the potential energy: \[PE = \dfrac{1}{2}k_{spring}x^2\] As we learned in Physics 7B, a mass on a spring with a spring constant \(k_{spring}\) oscillates with a frequency \(f\) given by \[f = \dfrac{1}{2 \pi} \sqrt{\frac{k_{spring}}{m}}\] Recall that this frequency is, to a good approximation, independent of the amplitude of the oscillation.

For this potential, the energy levels are equally spaced, and the spacing is related to the frequency of the oscillation. \[E_n = KE + PE_{spring} = \left( n- \dfrac{1}{2} \right) hf \text{; where } n = 1, 2, 3, 4, 5...\] \[ = \left( n- \dfrac{1}{2} \right) \dfrac{h}{2 \pi} \sqrt{\frac{k_{spring}}{m}}\] As before, \(n\) is a positive integer and \(h\) is Planck's constant. Because \(n\) is a positive integer we see that it is not possible for a particle to be at rest in a mass-spring system. The potential energy between bonded atoms is similar to the potential energy for mass-spring systems – atoms oscillate even at 0 K! This energy cannot transfer from the mass-spring to another system because there are no lower energy levels available to the mass-spring system.
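A short sketch of the oscillator levels, using assumed bond-like values for \(k_{spring}\) and the mass (illustrative numbers, not from the text), confirms that adjacent levels are always separated by \(hf\):

```python
import math

h = 6.626e-34  # Planck's constant, J s

def oscillator_energy(n, k_spring, m):
    """E_n = (n - 1/2) h f for the quantum harmonic oscillator, n = 1, 2, 3, ..."""
    f = math.sqrt(k_spring / m) / (2 * math.pi)
    return (n - 0.5) * h * f

# Assumed, illustrative values roughly like a diatomic bond:
k, m = 500.0, 1.66e-27  # N/m, kg
f = math.sqrt(k / m) / (2 * math.pi)
gap = oscillator_energy(2, k, m) - oscillator_energy(1, k, m)
print(gap, h * f)  # the gap equals hf for any pair of adjacent levels
```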

Exercise

A friend of yours points out that a mass-spring in the lab would also have quantized energies due to this formula. Because the energy depends on amplitude, this would mean that you are only allowed certain amplitudes. Yet in the lab, it seems that you can set the amplitude to any value you want. How do you resolve this discrepancy? (Hint: you will want to consider the numerical value of \(h\)).

The quantization of energy also helps us understand the freezing of vibrational modes that we learned about in Physics 7A. Let us consider a diatomic molecule that vibrates at a frequency \(f\). From Physics 7B, recall that the “typical” amount of thermal energy available per mode is \(\frac{1}{2}k_B T\), where \(k_B\) is Boltzmann’s constant, \(1.38 \times 10^{−23} \text{ J/K}\), and \(T\) is the temperature (expressed in Kelvin). For the atoms to vibrate *two* vibrational modes must be activated – one potential and one kinetic. The amount of thermal energy available to two modes is \(k_B T\). To transfer the system to a higher state, it must gain an amount of energy \(E = hf\) (this result is derived below). If the thermal energy available \(k_B T\) is less than \(hf\), then the molecule does not have enough energy to go up an energy level. We say the vibrational modes are *frozen out* because we cannot transfer energy into them. When the amount of thermal energy is high enough to overcome the gap between energy levels, then energy can transfer into the vibrational energy of the atoms, and we say a vibrational mode has been *activated*. The thermal energy available per mode is controlled by the value of temperature \(T\).
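The freeze-out criterion amounts to a single comparison of \(k_B T\) against \(hf\); here is a minimal sketch with assumed, illustrative frequency and temperature values:

```python
k_B = 1.38e-23  # Boltzmann's constant, J/K
h = 6.626e-34   # Planck's constant, J s

def mode_is_frozen(f, T):
    """A vibrational mode is frozen out when the available thermal energy
    k_B*T is less than the level spacing h*f of the oscillator."""
    return k_B * T < h * f

# Assumed example: a mode at 1e13 Hz at room temperature vs at 1000 K.
print(mode_is_frozen(1e13, 300))   # True: k_B*300 ~ 4.1e-21 J < h*1e13 ~ 6.6e-21 J
print(mode_is_frozen(1e13, 1000))  # False: k_B*1000 ~ 1.4e-20 J > 6.6e-21 J
```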

Exercise

Oxygen gas (O\(_2\)) has a vibrational frequency of \(5 \times 10^{13} \text{ Hz}\). At roughly what temperature does the vibrational mode become activated?

There are many systems that can be modeled as simple harmonic oscillators. Classically, we know that how stiff or loose a spring is will affect the motion. Quantum mechanically, we find the same thing.

Example #4

Compare the energy spectra of a vibrating molecule with a strong bond to one with a weak bond, assuming the masses in each case are the same.

**Solution**

As the masses are the same, the strength of the bond is the only parameter affecting the energy spectra. The bond that is stronger has a bigger \(k_{spring}\), which results in a higher frequency. The energy of the ground state is \(E_{ground} = \frac{1}{2}hf\), so the molecule with higher frequency has a higher ground state energy.

The energy levels of a harmonic oscillator are evenly spaced, meaning that the energy required to transition to another level is the same regardless of the current energy level. We can find this spacing by subtracting the \(n^{th}\) energy from the \((n + 1)^{th}\) energy:

\[\Delta E = \left( (n+1) - \dfrac{1}{2} \right) hf - \left( n- \dfrac{1}{2} \right) hf\]

\[= \left( n+ \dfrac{1}{2} \right) hf - \left( n- \dfrac{1}{2} \right) hf\]

\[=hf\]

The energy gap between levels is proportional to the frequency. The molecule with a higher \(k_{spring}\) has a higher frequency, and its energy levels are spaced further apart. We can use the information about ground state energies and energy spacing to graph the energy spectra:

The diagram above shows the energy levels of the weaker bond (left) compared to the stronger bond (right). Recall that \(E_1\) is the ground state energy, and note that the molecule with higher frequency vibrations has a higher ground state energy. The energy levels for the higher frequency vibrations are also spaced further apart.

The example above compares the energy levels for low frequency oscillations to those for high frequency oscillations. The energy spectra are determined by potential energy, so it is worth examining the differences in the potential energies for the two systems. From earlier courses, we know the potential energy in a mass-spring system is \(PE_{spring} = \dfrac{1}{2}k_{spring}x^2\). Like before, our \(PE\) depends on one variable, this time called \(x\). The potential energy resembles a parabola in \(x\). If \(k_{spring}\) is high, the potential is steep. Likewise, the potential appears flatter for lower values of \(k_{spring}\). This is displayed visually in the graphs below.

Recall that gaps between energy levels are smaller for lower \(k_{spring}\) values, so the flatter potential has energy levels that are more closely spaced. This is similar to our earlier statement that wider infinite wells have more closely spaced energy levels than narrow ones.

Remember that we are treating the atoms themselves as our microscopic particle; additionally there are electrons in the atoms themselves that also have quantized energy levels. Make sure you understand that the quantized energy levels for the electrons in the atoms are separate and distinct from the quantized energy levels for the atoms undergoing simple harmonic motion.

### Single Electron in an Atom

The final potential we will discuss here is an electron bound to an orbit around a nucleus. The total energy levels are quantized in terms of \(n\) as such: \[E_{\text{n, total}} = KE + PE_{\text{electric}} = - \dfrac{1}{n^2} \left( \dfrac{2 \pi k_e Z e^2}{h} \right)^2 \dfrac{m_e}{2} = \dfrac{E_1}{n^2}\] where \(n\) is any positive integer (\(n=1, 2, 3, 4, 5..\)); \(k_e\) is the electrostatic constant (\(9.0 \times 10^9 \frac{Nm^2}{C^2}\)); \(Z\) is the number of protons in the nucleus; \(e\) is the charge of the electron; \(h\) is Planck's constant; and \(m_e\) is the mass of the electron. The energy levels \(E_n\) can be rewritten in terms of the ground state energy \(E_1\) as \(E_n = \frac{E_1}{n^2}\). Writing out the first few energy levels explicitly \[E_n = E_1, \dfrac{1}{4} E_1, \dfrac{1}{9} E_1,...\] Plugging in the values of the constants in the equation, we find that \(E_1 = (-2.18 \times 10^{-18} \text{ J})Z^2 = (-13.6 \text{ eV})Z^2\). The most relevant example is the hydrogen atom (\(Z = 1\)), as this is the only atom that typically has only one electron. The energy levels are represented in the figure below.
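In electron-volts the spectrum is easy to tabulate; a minimal sketch for hydrogen (\(Z = 1\)):

```python
def hydrogen_level(n, Z=1):
    """E_n = (-13.6 eV) * Z^2 / n^2 for a single electron bound to a nucleus with Z protons."""
    return -13.6 * Z**2 / n**2  # in eV

for n in (1, 2, 3, 4):
    print(n, hydrogen_level(n))
# -13.6, -3.4, ~-1.51, -0.85 eV: the levels crowd together as n grows.
```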

We've established earlier that we could arbitrarily choose "zero" energy to be any amount of energy. Above, we have chosen zero to refer to the energy of an unbound electron, and each energy level shown has a *negative* energy in comparison. The ground state energy level (\(n = 1\)) is the most bound state. Adding (allowed) energy to the electron will increase \(n\), making its total energy less negative, or even zero (as \(n\) approaches \(\infty\)). At this point, the electron will be unbound and free, and then will be allowed to have any (positive) value of energy. Because we cannot draw a line for *every* positive energy, we have simply added a shaded region to the spectrum. Also note that the energy levels get closer and closer together for larger \(n\).

### Absorption and Emission Spectra

There are many transitions that occur that allow the electron to remain bound to the nucleus. Each allowed transition requires the electron to gain or lose a specific amount of energy. This energy transfer typically occurs by the atom absorbing or emitting individual photons. Because only certain energies can be absorbed or emitted, and the photon’s energy depends on frequency, we see that only certain frequencies of light are absorbed or emitted in these transitions.

Example #5

Light spanning the infrared, visible, and ultraviolet ranges hits a collection of hydrogen atoms at nearly 0 K. The light is detected after hitting the atoms.

- Describe the detected light.
- Considering only transitions that allow the electron to remain bound, determine the longest possible wavelength absorbed.
- Considering only transitions that allow the electron to remain bound, determine the shortest possible wavelength that could be absorbed.
- Determine if either of the photons in (b) or (c) are in the visible range.

**Solution**

**a)** The light incident on the hydrogen atoms includes a full range of frequencies, and thus a full range of energies. When the light hits the hydrogen atoms, some of the photons with exactly the right energy will excite the electrons into higher energy levels. The other photons will pass through unimpeded. Thus, the light reaching the detector will no longer contain the full range of frequencies; it will not contain frequencies corresponding to transition energies in the hydrogen atom. In the case of hydrogen, most of the light at high frequencies is absorbed by ionizing atoms. If you pass the light through a prism to separate the light by frequency, the missing frequencies will appear as dark bands in the light at specific colors.

**b)** Longer wavelengths of light have lower frequencies. Photons with lower frequencies have lower energies. To find the longest wavelength absorbed we must find the smallest amount of energy absorbed. We start with atoms that are very, very cold, so we can assume that all of the electrons are initially in the ground state. The lowest energy transition is from the ground state (\(n=1\)) to the \(n=2\) level. To make this transition, it must absorb a specific amount of energy:

\[\Delta E_{\text{photon}} + \Delta E_{\text{electron}} = 0\]

\[E_{\text{photon, final}} - E_{\text{photon, initial}} = -(E_{\text{electron, final}} - E_{\text{electron, initial}})\]

\[0-E_{\text{photon, initial}} = - \left( \dfrac{-13.6 \text{ eV}}{2^2} -( \dfrac{-13.6 \text{ eV}}{1^2}) \right)\]

\[E_{\text{photon, initial}} = 10.2 \text{ eV}\]

The question asks us to determine the wavelength of the absorbed light, so we must determine the wavelength of light which has photons each with energy 10.2 eV. Recall that \(v_{light} = c\), so \(f=c/\lambda\). The wavelength is then given by:

\[E_{\text{photon, initial}} = 10.2 \text{ eV}\]

\[\implies hf=\dfrac{hc}{\lambda} = 10.2 \text{ eV}\]

\[\dfrac{(6.626 \times 10^{-34} \text{ J s})(3 \times 10^8 \text{ m/s})}{\lambda} = (10.2 \text{ eV}) \dfrac{1.6 \times 10^{-19} \text{ J}}{1 \text{ eV}}\]

Solving for \(\lambda\), we find that the longest absorbed wavelength is \(1.22 \times 10^{-7} \text{ m}\) or 122 nm. The hydrogen atoms in this problem were at nearly absolute zero, so we could treat the electrons like they were in the ground state. This is not always the case. We could have an electron start in the \(n=2\) state and transition to the \(n=3\) state. An electron undergoing this \(n=2\) to \(n=3\) transition would absorb a longer wavelength than the 122 nm found above (calculate it and check!). In fact, the energy levels in a hydrogen atom are very closely packed near zero energy. There are infinitely many energies available just below zero energy, at large n, so the gap between energies can be incredibly small. Note that realistically it is unlikely that there are lots of electrons with initial states with very high \(n\), because these electrons would be easily dissociated from their nuclei. However, excited electrons in atoms, electrons with large \(n\), can absorb much longer wavelengths and make much smaller transitions.
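The calculation generalizes to any bound-bound transition in hydrogen; here is a short sketch using the same constants as above:

```python
h = 6.626e-34  # Planck's constant, J s
c = 3e8        # speed of light, m/s
eV = 1.6e-19   # joules per electron-volt

def transition_wavelength(n_i, n_f):
    """Wavelength of the photon absorbed in a hydrogen transition n_i -> n_f (n_f > n_i)."""
    dE = (-13.6 / n_f**2 - (-13.6 / n_i**2)) * eV  # energy gained by the electron, J
    return h * c / dE

print(transition_wavelength(1, 2))  # ~1.22e-7 m (122 nm), as found above
print(transition_wavelength(2, 3))  # a longer wavelength, for the smaller 2 -> 3 gap
```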

**c)** Shorter wavelengths of light correspond to higher frequencies and higher energies. The highest energy transition available is from \(n = 1\) to a very large \(n\), just before the electron is freed from the atom. The initial state is the ground state, with energy −13.6 eV, and the final state approaches 0 eV. For the electron to gain 13.6 eV of energy, it must absorb a photon with that much energy. Mathematically,

\[\Delta E_{\text{photon}} + \Delta E_{\text{electron}} = 0\]

\[E_{\text{photon, final}} - E_{\text{photon, initial}} = -(E_{\text{electron, final}} - E_{\text{electron, initial}})\]

\[0 - E_{\text{photon, initial}} = -(0 \text{ eV} - (- 13.6 \text{ eV}))\]

\[E_{\text{photon, initial}} = 13.6 \text{ eV}\]

As in part (b), we must determine the wavelength of light corresponding to 13.6 eV of energy. Recalling \(v_{light} =c\),

\[E_{\text{photon, initial}} = hf = \dfrac{hc}{\lambda} = 13.6 \text{ eV}\]

\[\dfrac{(6.626 \times 10^{-34} \text{ J s})(3 \times 10^8 \text{ m/s})}{\lambda} = (13.6 \text{ eV}) \dfrac{1.6 \times 10^{-19} \text{ J}}{1 \text{ eV}}\]

Solving for \(\lambda\), we find that the shortest absorbed wavelength is \(9.14 \times 10^{−8} \text{ m}\) or 91.4 nm.

**d)** The smallest wavelengths human eyes can see are around 400 nm, so both the longest and shortest absorbed wavelengths are outside the visible range (longer wavelengths can be absorbed by electrons with high \(n\), though). If we convert the wavelengths back to frequencies, we find the lower frequency (b) is \(2.46 \times 10^{15} \text{ Hz}\) and the higher (c) is \(3.28 \times 10^{15} \text{ Hz}\). Referring to Useful Approximations we find that both of these photons are in the ultraviolet range.

As the example pointed out, if you detect light after passing it through hydrogen, certain frequencies will be absent from the light. The above example explores the **absorption spectrum** of hydrogen, a spectrum of frequencies that correspond to energy transitions within hydrogen.

We could also explore the **emission spectrum** of hydrogen. If we heat up a tube of hydrogen gas, many of the electrons are excited out of their ground states and into higher energy states. As these electrons fall to lower energy levels, they emit photons whose frequencies correspond to the energy transitions. The emission spectrum of hydrogen can be directly calculated from the energy level transitions. The emission spectra of other elements are more complicated to calculate because other elements have multiple electrons that interact with each other and with the nucleus.

Because the atoms of each element have different energy transitions, the emission spectrum and absorption spectrum of each element is unique. This uniqueness is exploited in spectroscopy, where unknown atoms and molecules can be identified by the energies (frequencies) of photons that they emit or absorb.

Burning samples of chemicals is another way to excite electrons. We might expect to see some correlation between the emission spectrum and the color of chemical fires. In practice, we do indeed see this similarity. The color of chemical fires is due to the emitted photons. To take a specific example, burning sodium produces a bright yellow flame. We can understand this by studying the emission spectrum of sodium, which includes many photons but only two in the visible range. Both of the visible photons have wavelengths of about 590 nm which corresponds to yellow light. The color of the flame is yellow because of these yellow spectral lines. You may be familiar with sodium vapor street lamps, which operate on the principle of exciting sodium atoms, and emit yellow light!

## Photons

Previously, we devoted a whole chapter to understanding the phenomena of light waves. To review briefly, we found that light behaves like a wave in a variety of circumstances, such as when sent through small thin slits (as in two-slit interference). Prior to the two-slit experiments, physicists had been uncertain about the nature of light. Prominent physicists, including Sir Isaac Newton, strongly believed that light was more like a particle than a wave, but the two-slit interference patterns of light could be understood so well with the wave model that for a while the subject was laid to rest.

However, in the early 20th century, several circumstances involving light brought the particle model back into consideration. Eventually, enough evidence accumulated to conclude that light behaves in ways that can be explained by a particle model, but cannot be explained by a wave model. Presently, we must hold in our minds both the wave model of light and the particle model of light. In some circumstances, the behavior follows the wave model, but in other circumstances, it follows the particle model.

### Implications of the Particle Model

As we've discussed, light is quantized, composed of individual quanta called "photons." The photons can be thought of as individual packets of light, each with an energy proportional to the light's frequency: \(E_{photon} = hf\), where \(h\) is Planck’s constant.

To consider the implications of the particle model, it is helpful to think about monochromatic light, many photons all with the same frequency, like light produced by a laser. We consider two properties of the light: its intensity (i.e., brightness) and the amount of energy the light is able to transfer into another system, like an electron orbiting a nucleus.

First, compare two beams of light with equal intensity but different frequencies. From our relationship \(E_{photon} = hf\) we see the beam with the higher frequency has photons with higher energy. Thus, the high frequency beam is capable of transferring larger amounts of energy into another system. But the intensities of the beams are the same, so the total energy transferred by each beam is the same. This tells us that the beam with the higher frequency has *fewer* photons. But in the wave model, the same intensity of each beam means they must have the same amplitude. The energy in a wave is related to its amplitude, so it would seem both light beams must have equal ability to transfer energy. Clearly, the two models lead to different hypotheses.
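The photon-counting argument can be made concrete with a sketch; the beam power and frequencies below are assumed, illustrative values:

```python
h = 6.626e-34  # Planck's constant, J s

def photon_rate(power, f):
    """Photons per second in a beam of given power (W) made of frequency-f photons."""
    return power / (h * f)

# Two beams with equal power (intensity) but different frequencies, assumed values:
P = 1e-3  # 1 mW
low_f, high_f = 4.3e14, 7.5e14  # Hz
print(photon_rate(P, low_f) > photon_rate(P, high_f))  # True: higher frequency, fewer photons
```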

Next, consider the action of increasing the beams' intensity. In the particle model, we would describe this as adding more photons to the beam, but each particular photon still only carries a certain amount of energy. Using the particle model, we conclude that the brightness of the beam does not influence how much energy any particular photon can transfer to another system. In the wave model, a greater brightness would indicate a larger amplitude wave; we would conclude that greater intensity waves have the ability to transfer larger amounts of energy into another system. Again, the models make different predictions.

### The Photoelectric Effect

At this point, we have two different models for light. We know that the wave model is quite able to predict the behavior of light in two-slit interference, where the particle model cannot. Yet the particle model can explain certain behaviors that the wave model cannot. One of those behaviors is exhibited as the photoelectric effect, which provides strong experimental evidence of the particle model of light. In fact, it was the photoelectric effect that first led Albert Einstein to develop the particle model of light.

In the **photoelectric effect**, a beam of incoming light shines on a metallic surface. When the beam hits the metal, photons eject electrons from the metal and send the electrons down a tube to a collector. To do so, the photons must provide the electrons with enough energy to break their bonds to the metal, and sufficient kinetic energy to reach the collector. Reaching the collector requires a certain minimum kinetic energy at emission, because an electric field exists between the collector and the emitter that acts to slow down the electrons on their path. This is shown in the figure below.

For now, focus your attention solely on the grayed tube at the top and ignore the portions of the circuit including the battery and ammeter. The photoelectric experiment allows us to test the wave model against the particle model, for this particular setup. As an experimenter, we have control over both the intensity of the light and the frequency of the light. We can independently vary one or the other, and note the effect, enabling us to determine the appropriate model for this system.

The photoelectric effect can be explained using the conservation of energy. Light brings in a certain amount of energy. If the energy is sufficiently high, it frees an electron from the metal. Different metals bind the electrons with different amounts of energy, called the work function, and given the symbol \(W_0\). If the incident light has less energy than the work function, the electrons remain attached to the plate.

Suppose the incident light has sufficient energy to free the electron from the plate. The electron is emitted, and has a kinetic energy of at least 0 J, possibly more. The energy of the incident light is split in some fashion between breaking the electron's bond to the metal and providing the electron with kinetic energy (\(E_{light} = W_0 +KE_{electron}\)). The work function is fixed for a given material and doesn't change, so higher energy light results in faster moving electrons.
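This energy bookkeeping can be sketched numerically. The numbers below (a 3 eV photon and a 2 eV work function) are illustrative values invented for this example, not data from the text:

```python
# Sketch of the photoelectric energy split E_light = W0 + KE_electron,
# using made-up illustrative values in electron-volts (eV).
E_light = 3.0   # energy delivered by one photon (eV) -- assumed value
W0 = 2.0        # work function of the metal (eV) -- assumed value

# Whatever energy is left over after freeing the electron becomes
# the electron's kinetic energy at emission:
KE_electron = E_light - W0
print(KE_electron)   # 1.0 (eV)
```

A photon with less than 2 eV would leave nothing for kinetic energy, and the electron would stay bound, consistent with the role of the work function described above.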

Next, the electron travels from the emitter towards the collector. In this region, an electric field points from the emitter toward the collector. The electric force on the electron slows it down as it travels from the emitter to the collector. Thinking about energy again, the electron gains potential energy as it loses kinetic energy. As experimenters, we control the strength of the electric field, and thus the amount of potential energy the particle gains as it traverses the tube. If we stop the electron exactly as it reaches the collector, then *all* of the kinetic energy has been transferred to potential energy, and we can measure the kinetic energy the electron had just after emission. The potential required to do this is called the **stopping potential**. If we have a situation where many electrons reach the collector, we can slowly increase the voltage between the plates until we just reach the stopping potential.

Scientists who carried out experiments like this observed the following:

- Higher intensity beams free more electrons.
- Higher frequency beams result in electrons with higher speeds.
- Changing the beam intensity has no effect on electron speed.
- Changing the frequency of the beam has no effect on the number of electrons freed (provided the frequency is high enough that some electrons are freed).

These results all support the particle model of light. Beams with higher intensities contain more photons. Higher-intensity beams free more electrons because more photons are present to transfer energy. However, the amount of energy one photon can transfer to an electron is determined by the photon's frequency. Increasing the frequency of the incoming light increases the energy transferred to each electron, which is why higher-frequency beams produce electrons with more kinetic energy.

### The Mathematics of the Photoelectric Effect

Now that we're familiar with the concepts of light quantization, we quantify these concepts mathematically. As you read, consider how the equations reflect the concepts presented above. Our goal is to determine the energy of the incident light.

There are two main processes involved in the photoelectric effect. The first involves the light transferring energy to the electron, freeing it from the metal and giving it kinetic energy. Next, the electron travels down the tube, gaining potential energy and losing kinetic energy. As stated above, we adjust the voltage between the plates until the electron just barely stops short of the collector.

First, we will look at the second process, of slowing the electron down as it traverses the tube. The change in total energy (0) is given by the sum of the change in the kinetic and electrical potential energies:

\[\Delta PE + \Delta KE = qV_{final}-qV_{initial} +\dfrac{1}{2} m v^2_{final} - \dfrac{1}{2} m v^2_{initial} = 0\]

The electrons just barely stop at the collector, so the final speed must be \(v_{final}=0\). Also, the change in potential \(V_{final} − V_{initial} = \Delta V\) is defined as \(V_{stopping}\). We can rewrite our above equation as

\[qV_{stopping}+KE_{initial} =0\]

\[ \implies KE_{initial} = -qV_{stopping}\]
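As a quick numerical sketch of this last relation (the stopping potential below is an assumed, illustrative measurement, not a value from the text):

```python
# Sketch: recovering the electron's emission kinetic energy from a
# measured stopping potential, via KE_initial = -q * V_stopping.
q = -1.6e-19       # electron charge (coulombs)
V_stopping = 1.5   # assumed measured stopping potential (volts)

KE_initial = -q * V_stopping   # joules
print(KE_initial)              # ≈ 2.4e-19 J
```

Note that because \(q\) is negative, the minus sign makes the kinetic energy come out positive, as it must.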

We know the charge of the electron (\(q = −1.6 \times 10^{−19} \text{ C}\)), and as experimentalists we know the stopping potential; we have thus determined the kinetic energy of the electrons just after they emerge from the plate. Our goal is to relate this mathematically to the energy of the incident light. Recall that the incident energy is split: it frees the electron *and* gives the electron kinetic energy. Recalling that it takes \(W_0\) of energy to free the electron, we can write

\[E_{light} = W_0 + KE_{initial}\]

\[\implies KE_{initial} = E_{light} - W_0\]

The kinetic energy is marked “initial” to remind us that the electron has this kinetic energy only just after emission. Combining this with our results above, we have:

\[-q V_{stopping} = E_{light} - W_0\]

\[E_{light} = W_0 - q V_{stopping}\]
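To get a feel for the scale of these quantities, here is a sketch with assumed values (a work function of \(3.2 \times 10^{-19}\) J, about 2 eV, and a stopping potential of 1.5 V, both invented for illustration):

```python
# Sketch: computing the light energy from an assumed work function and
# stopping potential, using E_light = W0 - q * V_stopping.
q = -1.6e-19       # electron charge (coulombs)
W0 = 3.2e-19       # assumed work function (joules), roughly 2 eV
V_stopping = 1.5   # assumed stopping potential (volts)

E_light = W0 - q * V_stopping    # joules
E_light_eV = E_light / 1.6e-19   # convert joules to electron-volts
print(E_light_eV)                # ≈ 3.5 eV
```

Energies of a few electron-volts per photon are typical of visible and near-ultraviolet light, which is why photoelectric experiments are done in that range.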

We have found an equation for the amount of energy transferred by the light for a given stopping potential. If we adjust the energy of the incoming light, we must also adjust the stopping potential. If we change some aspect of the light and find that we don't need to change the stopping potential, it means the energy transferred by the light has not changed.

This setup is useful because the wave model and particle model hypothesize different ways of adjusting the energy of the light (as discussed previously in this section). To test the models, we can try each method of adjusting light energy and note whether or not we needed to change the stopping potential to compensate (which would indicate different electron kinetic energy). Experimentally, we find that adjusting the frequency of the incident light requires us to adjust the stopping potential, but adjusting the intensity does not.

Recalling that the energy of one photon is \(E_{photon} = hf\), we can rewrite our earlier result; each electron gains an energy

\[E_{light}=hf = W_0 - qV_{stopping}\]

The electron charge \(q\) is negative, so \(-qV_{stopping} \geq 0\) and we always find \(hf \geq W_0\) in the photoelectric effect. With this relationship, we could determine an experimental value for \(h\), determine the work function of different metals, and more.
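The last sentence suggests a concrete procedure: measure the stopping potential at several frequencies, plot \(-qV_{stopping}\) against \(f\), and read off \(h\) as the slope and \(-W_0\) as the intercept. A minimal sketch of this fit using synthetic data (every number below is invented for illustration):

```python
# Sketch: extracting Planck's constant and the work function from
# synthetic (frequency, stopping-potential) data.
# Rearranging hf = W0 - q*V_stopping gives -q*V_stopping = h*f - W0,
# a straight line in f with slope h and intercept -W0.
import numpy as np

h_true = 6.626e-34   # J*s, used only to generate the fake data
W0 = 3.6e-19         # J, assumed work function (about 2.25 eV)
q = -1.6e-19         # C, electron charge

f = np.linspace(6e14, 1.2e15, 6)    # illustrative frequencies (Hz)
V_stop = (h_true * f - W0) / (-q)   # stopping potentials the model predicts

# Fit -q*V_stop versus f: slope is the "experimental" h, intercept is -W0.
slope, intercept = np.polyfit(f, -q * V_stop, 1)
print(slope)   # ≈ 6.626e-34 J*s, recovering the h used to build the data
```

With real laboratory data the points would scatter about the line, but the same linear fit would still yield \(h\) and \(W_0\).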

### Which Model is "Correct"?

At this point, you might wonder which model is the “correct” model of light. The answer is neither. In more sophisticated treatments, physicists have developed a “quantum model” to explain light, which incorporates all the examples we have discussed so far. However, we should refrain from saying that light actually *is* this quantum stuff, because future experiments may require us to replace this model with something else.

If neither model of light is correct, why do we teach them? Ultimately the full quantum model is just too difficult to explore in Physics 7. Furthermore, we can answer many questions about light by using the particle model or the wave model of light; both of these simpler models correctly capture aspects of light’s behavior. Many books perpetuate confusion by claiming that light is somehow “both a particle and a wave;” many physicists are also guilty of perpetuating this myth. We have a good quantum model for light (and electrons, and even whole atoms); in some situations we can simplify this and use the wave model, while in others we can use the particle model. In other situations the quantum model does not fit into either a wave or particle description. Light and other microscopic phenomena often behave in unfamiliar ways completely outside human experience. Even if we cannot shoehorn quantum mechanics into our regular familiar notions of "particles" and "waves," this does not mean quantum mechanics is contradictory; it just means that the microscopic world is highly counter-intuitive.

Because the wave-particle “duality” or “contradiction” is brought up so often, it bears repeating. Arguments about whether light is really a particle or a wave are a waste of oxygen, or worse yet, trees. Light can be modeled as a particle when it behaves as such, and it can likewise be modeled as a wave. We use these models when more complicated behaviors of light can be ignored or simplified, and we recognize that each model has limits and only applies under specific conditions. People who argue about whether light is a particle *or* a wave do not understand the concept of modeling and making approximations, and you should be hesitant to accept their advice about physics.

## Contributors

Authors of Phys7C (UC Davis Physics Department)