Physics LibreTexts

13.3 Matter As a Wave


[In] a few minutes I shall be all melted... I have been wicked in my day, but I never thought a little girl like you would ever be able to melt me and end my wicked deeds. Look out --- here I go! -- The Wicked Witch of the West

As the Wicked Witch learned the hard way, losing molecular cohesion can be unpleasant. That's why we should be very grateful that the concepts of quantum physics apply to matter as well as light. If matter obeyed the laws of classical physics, molecules wouldn't exist.

Consider, for example, the simplest atom, hydrogen. Why does one hydrogen atom form a chemical bond with another hydrogen atom? Roughly speaking, we'd expect a neighboring pair of hydrogen atoms, A and B, to exert no force on each other at all, attractive or repulsive: there are two repulsive interactions (proton A with proton B and electron A with electron B) and two attractive interactions (proton A with electron B and electron A with proton B). Thinking a little more precisely, we should even expect that once the two atoms got close enough, the interaction would be repulsive. For instance, if you squeezed them so close together that the two protons were almost on top of each other, there would be a tremendously strong repulsion between them due to the \(1/r^2\) nature of the electrical force. The repulsion between the electrons would not be as strong, because each electron ranges over a large area, and is not likely to be found right on top of the other electron. Thus hydrogen molecules should not exist according to classical physics.

Quantum physics to the rescue! As we'll see shortly, the whole problem is solved by applying the same quantum concepts to electrons that we have already used for photons.

13.3.1 Electrons as waves

We started our journey into quantum physics by studying the random behavior of matter in radioactive decay, and then asked how randomness could be linked to the basic laws of nature governing light. The probability interpretation of wave-particle duality was strange and hard to accept, but it provided such a link. It is now natural to ask whether the same explanation could be applied to matter. If the fundamental building block of light, the photon, is a particle as well as a wave, is it possible that the basic units of matter, such as electrons, are waves as well as particles?

A young French aristocrat studying physics, Louis de Broglie (pronounced “broylee”), made exactly this suggestion in his 1923 Ph.D. thesis. His idea had seemed so farfetched that there was serious doubt about whether to grant him the degree. Einstein was asked for his opinion, and with his strong support, de Broglie got his degree.

Only two years later, American physicists C.J. Davisson and L. Germer confirmed de Broglie's idea by accident. They had been studying the scattering of electrons from the surface of a sample of nickel, made of many small crystals. (One can often see such a crystalline pattern on a brass doorknob that has been polished by repeated handling.) An accidental explosion occurred, and when they put their apparatus back together they observed something entirely different: the scattered electrons were now creating an interference pattern! This dramatic proof of the wave nature of matter came about because the nickel sample had been melted by the explosion and then resolidified as a single crystal. The nickel atoms, now nicely arranged in the regular rows and columns of a crystalline lattice, were acting as the lines of a diffraction grating. The new crystal was analogous to the type of ordinary diffraction grating in which the lines are etched on the surface of a mirror (a reflection grating) rather than the kind in which the light passes through the transparent gaps between the lines (a transmission grating).

a / A double-slit interference pattern made with neutrons. (A. Zeilinger, R. Gähler, C.G. Shull, W. Treimer, and W. Mampe, Reviews of Modern Physics, Vol. 60, 1988.)

 

Although we will concentrate on the wave-particle duality of electrons because it is important in chemistry and the physics of atoms, all the other “particles” of matter you've learned about show wave properties as well. Figure a, for instance, shows a wave interference pattern of neutrons.

It might seem as though all our work was already done for us, and there would be nothing new to understand about electrons: they have the same kind of funny wave-particle duality as photons. That's almost true, but not quite. There are some important ways in which electrons differ significantly from photons:

  1. Electrons have mass, and photons don't.
  2. Photons always move at the speed of light, but electrons can move at any speed less than \(c\).
  3. Photons don't have electric charge, but electrons do, so electric forces can act on them. The most important example is the atom, in which the electrons are held by the electric force of the nucleus.
  4. Electrons cannot be absorbed or emitted as photons are. Destroying an electron or creating one out of nothing would violate conservation of charge.

(In section 13.4 we will learn of one more fundamental way in which electrons differ from photons, for a total of five.)

Because electrons are different from photons, it is not immediately obvious which of the photon equations from chapter 11 can be applied to electrons as well. A particle property, the energy of one photon, is related to its wave properties via \(E=hf\) or, equivalently, \(E=hc/\lambda \). The momentum of a photon was given by \(p=hf/c\) or \(p=h/\lambda \). Ultimately it was a matter of experiment to determine which of these equations, if any, would work for electrons, but we can make a quick and dirty guess simply by noting that some of the equations involve \(c\), the speed of light, and some do not. Since \(c\) is irrelevant in the case of an electron, we might guess that the equations of general validity are those that do not have \(c\) in them:

\[\begin{align*} E &= hf \\ p &= h/\lambda \end{align*}\]

 

This is essentially the reasoning that de Broglie went through, and experiments have confirmed these two equations for all the fundamental building blocks of light and matter, not just for photons and electrons.

The second equation, which I soft-pedaled in the previous chapter, takes on greater importance for electrons. This is first of all because the momentum of matter is more likely to be significant than the momentum of light under ordinary conditions, and also because force is the transfer of momentum, and electrons are affected by electrical forces.

 

Example 12: The wavelength of an elephant

\(\triangleright\) What is the wavelength of a trotting elephant?

\(\triangleright\) One may doubt whether the equation should be applied to an elephant, which is not just a single particle but a rather large collection of them. Throwing caution to the wind, however, we estimate the elephant's mass at \(10^3\) kg and its trotting speed at 10 m/s. Its wavelength is therefore roughly

\[\begin{align*} \lambda &= \frac{h}{p} \\ &= \frac{h}{mv} \\ &= \frac{6.63\times10^{-34}\ \text{J}\!\cdot\!\text{s}}{(10^3\ \text{kg})(10\ \text{m}/\text{s})} \\ &\sim 10^{-37}\ \frac{\left(\text{kg}\!\cdot\!\text{m}^2/\text{s}^2\right)\!\cdot\!\text{s}}{\text{kg}\!\cdot\!\text{m}/\text{s}} \\ &= 10^{-37}\ \text{m} \end{align*}\]

The wavelength found in this example is so fantastically small that we can be sure we will never observe any measurable wave phenomena with elephants or any other human-scale objects. The result is numerically small because Planck's constant is so small, and as in some examples encountered previously, this smallness is in accord with the correspondence principle.
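As a quick numerical sanity check, the estimate above can be reproduced in a few lines of Python (the mass and speed are the same rough guesses as in the example):

```python
# Order-of-magnitude check of the elephant's de Broglie wavelength,
# using the same estimates as in the example above.
h = 6.63e-34   # Planck's constant, J·s
m = 1.0e3      # estimated elephant mass, kg
v = 10.0       # estimated trotting speed, m/s

wavelength = h / (m * v)   # de Broglie relation: lambda = h/p = h/(mv)
print(wavelength)          # about 7e-38 m, i.e. ~1e-37 m in order of magnitude
```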

Although a smaller mass in the equation \(\lambda =h/mv\) does result in a longer wavelength, the wavelength is still quite short even for individual electrons under typical conditions, as shown in the following example.

 

Example 13: The typical wavelength of an electron

\(\triangleright\) Electrons in circuits and in atoms are typically moving through voltage differences on the order of 1 V, so that a typical energy is \((e)(1\ \text{V})\), which is on the order of \(10^{-19}\ \text{J}\). What is the wavelength of an electron with this amount of kinetic energy?

\(\triangleright\) This energy is nonrelativistic, since it is much less than \(mc^2\). Momentum and energy are therefore related by the nonrelativistic equation \(K=p^2/2m\). Solving for \(p\) and substituting into the equation for the wavelength, we find

\[\begin{align*} \lambda &= \frac{h}{\sqrt{2mK}} \\ &= 1.6\times10^{-9}\ \text{m} . \end{align*}\]

This is on the same order of magnitude as the size of an atom, which is no accident: as we will discuss in the next chapter in more detail, an electron in an atom can be interpreted as a standing wave. The smallness of the wavelength of a typical electron also helps to explain why the wave nature of electrons wasn't discovered until a hundred years after the wave nature of light. To scale the usual wave-optics devices such as diffraction gratings down to the size needed to work with electrons at ordinary energies, we need to make them so small that their parts are comparable in size to individual atoms. This is essentially what Davisson and Germer did with their nickel crystal.
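The number quoted in this example is easy to verify numerically (taking the kinetic energy to be \(1.0\times10^{-19}\ \text{J}\), as the "order of \(10^{-19}\ \text{J}\)" estimate suggests):

```python
import math

# Numerical check of the electron-wavelength example.
h = 6.63e-34    # Planck's constant, J·s
m = 9.11e-31    # electron mass, kg
K = 1.0e-19     # kinetic energy of order (e)(1 V), J

p = math.sqrt(2 * m * K)   # from the nonrelativistic relation K = p^2/(2m)
wavelength = h / p
print(wavelength)          # about 1.6e-9 m, comparable to the size of an atom
```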

 

self-check:

These remarks about the inconvenient smallness of electron wavelengths apply only under the assumption that the electrons have typical energies. What kind of energy would an electron have to have in order to have a longer wavelength that might be more convenient to work with?

(answer in the back of the PDF version of the book)

 

What kind of wave is it?

If a sound wave is a vibration of matter, and a photon is a vibration of electric and magnetic fields, what kind of a wave is an electron made of? The disconcerting answer is that there is no experimental “observable,” i.e., directly measurable quantity, to correspond to the electron wave itself. In other words, there are devices like microphones that detect the oscillations of air pressure in a sound wave, and devices such as radio receivers that measure the oscillation of the electric and magnetic fields in a light wave, but nobody has ever found any way to measure the electron wave directly.

b / These two electron waves are not distinguishable by any measuring device.

 

We can of course detect the energy (or momentum) possessed by an electron just as we could detect the energy of a photon using a digital camera. (In fact I'd imagine that an unmodified digital camera chip placed in a vacuum chamber would detect electrons just as handily as photons.) But this only allows us to determine where the wave carries high probability and where it carries low probability. Probability is proportional to the square of the wave's amplitude, but measuring its square is not the same as measuring the wave itself. In particular, we get the same result by squaring either a positive number or its negative, so there is no way to determine the positive or negative sign of an electron wave.

Most physicists tend toward the school of philosophy known as operationalism, which says that a concept is only meaningful if we can define some set of operations for observing, measuring, or testing it. According to a strict operationalist, then, the electron wave itself is a meaningless concept. Nevertheless, it turns out to be one of those concepts like love or humor that is impossible to measure and yet very useful to have around. We therefore give it a symbol, \(\Psi \) (the capital Greek letter psi), and a special name, the electron wavefunction (because it is a function of the coordinates \(x\), \(y\), and \(z\) that specify where you are in space). It would be impossible, for example, to calculate the shape of the electron wave in a hydrogen atom without having some symbol for the wave. But when the calculation produces a result that can be compared directly to experiment, the final algebraic result will turn out to involve only \(\Psi^2\), which is what is observable, not \(\Psi \) itself.

Since \(\Psi \), unlike \(E\) and \(B\), is not directly measurable, we are free to make the probability equations have a simple form: instead of having the probability density equal to some funny constant multiplied by \(\Psi^2\), we simply define \(\Psi \) so that the constant of proportionality is one:

\[\begin{equation*} (\text{probability distribution}) = \Psi ^2 . \end{equation*}\]

Since the probability distribution has units of \(\text{m}^{-3}\), the units of \(\Psi \) must be \(\text{m}^{-3/2}\).
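The normalization convention can be illustrated with a small numerical sketch. This is not from the text; it uses a one-dimensional wavefunction (for which \(\Psi\) has units of \(\text{m}^{-1/2}\)) and a box-shaped state that the following section introduces:

```python
import math

# Sketch: verify that a normalized 1-D wavefunction has its squared
# amplitude (the probability distribution) integrate to 1.  We use the
# ground state of a particle in a box of length L,
# Psi(x) = sqrt(2/L) * sin(pi x / L).  The box length is illustrative.
L = 1.0e-9      # box length, m
N = 100000      # number of integration steps

def psi(x):
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

dx = L / N
# Trapezoid rule; the endpoint terms vanish since Psi(0) = Psi(L) = 0.
total = sum(psi(i * dx) ** 2 for i in range(N + 1)) * dx
print(total)    # very close to 1
```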

Discussion Question

◊ Frequency is oscillations per second, whereas wavelength is meters per oscillation. How could the equations \(E=hf\) and \(p=h/\lambda\) be made to look more alike by using quantities that were more closely analogous? (This more symmetric treatment makes it easier to incorporate relativity into quantum mechanics, since relativity says that space and time are not entirely separate.)

13.3.2 Dispersive waves

 

A colleague of mine who teaches chemistry loves to tell the story about an exceptionally bright student who, when told of the equation \(p=h/\lambda \), protested, “But when I derived it, it had a factor of 2!” The issue that's involved is a real one, albeit one that could be glossed over (and is, in most textbooks) without raising any alarms in the mind of the average student. The present optional section addresses this point; it is intended for the student who wishes to delve a little deeper.

Here's how the now-legendary student was presumably reasoning. We start with the equation \(v=f\lambda \), which is valid for any sine wave, whether it's quantum or classical. Let's assume we already know \(E=hf\), and are trying to derive the relationship between wavelength and momentum:

\[\begin{align*} \lambda &= \frac{v}{f} \\ &= \frac{vh}{E} \\ &= \frac{vh}{\frac{1}{2}mv^2} \\ &= \frac{2h}{mv} \\ &= \frac{2h}{p} . \end{align*}\]

 

The reasoning seems valid, but the result does contradict the accepted one, which is after all solidly based on experiment.

c / Part of an infinite sine wave.

 

The mistaken assumption is that we can figure everything out in terms of pure sine waves. Mathematically, the only wave that has a perfectly well defined wavelength and frequency is a sine wave, and not just any sine wave but an infinitely long sine wave, c. The unphysical thing about such a wave is that it has no leading or trailing edge, so it can never be said to enter or leave any particular region of space. Our derivation made use of the velocity, \(v\), and if velocity is to be a meaningful concept, it must tell us how quickly stuff (mass, energy, momentum, ...) is transported from one region of space to another. Since an infinitely long sine wave doesn't remove any stuff from one region and take it to another, the “velocity of its stuff” is not a well defined concept.

Of course the individual wave peaks do travel through space, and one might think that it would make sense to associate their speed with the “speed of stuff,” but as we will see, the two velocities are in general unequal when a wave's velocity depends on wavelength. Such a wave is called a dispersive wave, because a wave pulse consisting of a superposition of waves of different wavelengths will separate (disperse) into its separate wavelengths as the waves move through space at different speeds. Nearly all the waves we have encountered have been nondispersive. For instance, sound waves and light waves (in a vacuum) have speeds independent of wavelength. A water wave is one good example of a dispersive wave. Long-wavelength water waves travel faster, so a ship at sea that encounters a storm typically sees the long-wavelength parts of the wave first. When dealing with dispersive waves, we need symbols and words to distinguish the two speeds. The speed at which wave peaks move is called the phase velocity, \(v_p\), and the speed at which “stuff” moves is called the group velocity, \(v_g\).

d / A finite-length sine wave.

 

An infinite sine wave can only tell us about the phase velocity, not the group velocity, which is really what we would be talking about when we refer to the speed of an electron. If an infinite sine wave is the simplest possible wave, what's the next best thing? We might think the runner up in simplicity would be a wave train consisting of a chopped-off segment of a sine wave, d. However, this kind of wave has kinks in it at the end. A simple wave should be one that we can build by superposing a small number of infinite sine waves, but a kink can never be produced by superposing any number of infinitely long sine waves.

e / A beat pattern created by superimposing two sine waves with slightly different wavelengths.

 

Actually the simplest wave that transports stuff from place to place is the pattern shown in figure e. Called a beat pattern, it is formed by superposing two sine waves whose wavelengths are similar but not quite the same. If you have ever heard the pulsating howling sound of musicians in the process of tuning their instruments to each other, you have heard a beat pattern. The beat pattern gets stronger and weaker as the two sine waves go in and out of phase with each other. The beat pattern has more “stuff” (energy, for example) in the areas where constructive interference occurs, and less in the regions of cancellation. As the whole pattern moves through space, stuff is transported from some regions and into other ones.

If the frequency of the two sine waves differs by 10%, for instance, then ten periods will occur between times when they are in phase. Another way of saying it is that the sinusoidal “envelope” (the dashed lines in figure e) has a frequency equal to the difference in frequency between the two waves. For instance, if the waves had frequencies of 100 Hz and 110 Hz, the frequency of the envelope would be 10 Hz.
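The 100 Hz / 110 Hz example can be checked numerically: since the envelope pulses at the 10 Hz difference frequency, the whole superposition repeats every 0.1 s.

```python
import math

# Beat pattern: superpose sine waves at 100 Hz and 110 Hz and confirm
# that the pattern repeats with the 0.1 s beat period of the envelope.
f1, f2 = 100.0, 110.0
beat_period = 1.0 / abs(f2 - f1)   # 0.1 s, i.e. a 10 Hz envelope

def s(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

samples = [k * 0.00025 for k in range(400)]   # times covering 0 to 0.1 s
max_diff = max(abs(s(t) - s(t + beat_period)) for t in samples)
print(max_diff)   # essentially zero: the pattern repeats at the beat period
```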

To apply similar reasoning to the wavelength, we must define a quantity \(z=1/\lambda \) that relates to wavelength in the same way that frequency relates to period. In terms of this new variable, the \(z\) of the envelope equals the difference between the \(z\)'s of the two sine waves.

The group velocity is the speed at which the envelope moves through space. Let \(\Delta f\) and \(\Delta z\) be the differences between the frequencies and \(z\)'s of the two sine waves, which means that they equal the frequency and \(z\) of the envelope. The group velocity is \(v_g=f_{envelope}\lambda_{envelope}=\Delta f/\Delta z\). If \(\Delta f\) and \(\Delta z\) are sufficiently small, we can approximate this expression as a derivative,

\[\begin{equation*} v_g = \frac{df}{dz} . \end{equation*}\]

This expression is usually taken as the definition of the group velocity for wave patterns that consist of a superposition of sine waves having a narrow range of frequencies and wavelengths. In quantum mechanics, with \(f=E/h\) and \(z=p/h\), we have \(v_g=dE/dp\). In the case of a nonrelativistic electron the relationship between energy and momentum is \(E=p^2/2m\), so the group velocity is \(dE/dp=p/m=v\), exactly what it should be. It is only the phase velocity that differs by a factor of two from what we would have expected, but the phase velocity is not the physically important thing.
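The conclusion can be checked numerically: with \(E=p^2/2m\), a finite-difference estimate of \(dE/dp\) recovers the classical speed \(v\), while the phase velocity \(f\lambda=(E/h)(h/p)=E/p\) comes out to \(v/2\), which is exactly the factor of 2 the student in the anecdote stumbled over.

```python
# Group vs. phase velocity for a nonrelativistic electron wave.
m = 9.11e-31     # electron mass, kg
v = 1.0e6        # an arbitrary electron speed, m/s
p = m * v        # corresponding momentum

def E(p):
    return p**2 / (2 * m)    # nonrelativistic kinetic energy

dp = 1e-6 * p
v_group = (E(p + dp) - E(p - dp)) / (2 * dp)   # finite-difference dE/dp
v_phase = E(p) / p                             # f * lambda = (E/h)(h/p) = E/p

print(v_group / v)   # 1.0: the wave's "stuff" moves at the particle's speed
print(v_phase / v)   # 0.5: only the phase velocity carries the factor of 2
```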

13.3.3 Bound states

Electrons are at their most interesting when they're in atoms, that is, when they are bound within a small region of space. We can understand a great deal about atoms and molecules based on simple arguments about such bound states, without going into any of the realistic details of an atom. The simplest model of a bound state is known as the particle in a box: like a ball on a pool table, the electron feels zero force while in the interior, but when it reaches an edge it encounters a wall that pushes back inward on it with a large force. In particle language, we would describe the electron as bouncing off of the wall, but this incorrectly assumes that the electron has a certain path through space. It is more correct to describe the electron as a wave that undergoes 100% reflection at the boundaries of the box.

Like a generation of physics students before me, I rolled my eyes when initially introduced to the unrealistic idea of putting a particle in a box. It seemed completely impractical, an artificial textbook invention. Today, however, it has become routine to study electrons in rectangular boxes in actual laboratory experiments. The “box” is actually just an empty cavity within a solid piece of silicon, amounting in volume to a few hundred atoms. The methods for creating these electron-in-a-box setups (known as “quantum dots”) were a by-product of the development of technologies for fabricating computer chips.

f / Three possible standing-wave patterns for a particle in a box.

 

For simplicity let's imagine a one-dimensional electron in a box, i.e., we assume that the electron is only free to move along a line. The resulting standing wave patterns, of which the first three are shown in the figure, are just like some of the patterns we encountered with sound waves in musical instruments. The wave patterns must be zero at the ends of the box, because we are assuming the walls are impenetrable, and there should therefore be zero probability of finding the electron outside the box. Each wave pattern is labeled according to \(n\), the number of peaks and valleys it has. In quantum physics, these wave patterns are referred to as “states” of the particle-in-the-box system.

The following seemingly innocuous observations about the particle in the box lead us directly to the solutions to some of the most vexing failures of classical physics:

The particle's energy is quantized (can only have certain values). Each wavelength corresponds to a certain momentum, and a given momentum implies a definite kinetic energy, \(E=p^2/2m\). (This is the second type of energy quantization we have encountered. The type we studied previously had to do with restricting the number of particles to a whole number, while assuming some specific wavelength and energy for each particle. This type of quantization refers to the energies that a single particle can have. Both photons and matter particles demonstrate both types of quantization under the appropriate circumstances.)

The particle has a minimum kinetic energy. Long wavelengths correspond to low momenta and low energies. There can be no state with an energy lower than that of the \(n=1\) state, called the ground state.

The smaller the space in which the particle is confined, the higher its kinetic energy must be. Again, this is because long wavelengths give lower energies.
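All three observations can be illustrated numerically. For a box of length \(L\), the standing waves have \(\lambda_n=2L/n\), so \(p_n=h/\lambda_n=nh/2L\) and \(E_n=p_n^2/2m=n^2h^2/8mL^2\). (The 1 nm box size below is just an illustrative choice, roughly quantum-dot scale.)

```python
# Particle-in-a-box energies: lambda_n = 2L/n, p_n = h/lambda_n,
# E_n = p_n^2 / (2m).  Illustrates quantization, the nonzero minimum
# energy, and the effect of shrinking the box.
h = 6.63e-34    # Planck's constant, J·s
m = 9.11e-31    # electron mass, kg

def energy(n, L):
    p = n * h / (2 * L)      # momentum of the n-th standing wave
    return p**2 / (2 * m)    # kinetic energy E = p^2/(2m)

L = 1.0e-9      # box length, m (illustrative)
for n in (1, 2, 3):
    print(n, energy(n, L))   # only certain energies occur (quantization)

print(energy(1, L) > 0)                  # True: nonzero minimum energy
print(energy(1, L / 2) / energy(1, L))   # 4.0: smaller box, higher energy
```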

 

Example 14: Spectra of thin gases

A fact that was inexplicable by classical physics was that thin gases absorb and emit light only at certain wavelengths. This was observed both in earthbound laboratories and in the spectra of stars. The figure on the left shows the example of the spectrum of the star Sirius, in which there are “gap teeth” at certain wavelengths. Taking this spectrum as an example, we can give a straightforward explanation using quantum physics.

g / The spectrum of the light from the star Sirius.

 

Energy is released in the dense interior of the star, but the outer layers of the star are thin, so the atoms are far apart and electrons are confined within individual atoms. Although their standing-wave patterns are not as simple as those of the particle in the box, their energies are quantized.

When a photon is on its way out through the outer layers, it can be absorbed by an electron in an atom, but only if the amount of energy it carries happens to be the right amount to kick the electron from one of the allowed energy levels to one of the higher levels. The photon energies that are missing from the spectrum are the ones that equal the difference in energy between two electron energy levels. (The most prominent of the absorption lines in Sirius's spectrum are absorption lines of the hydrogen atom.)

Example 15: The stability of atoms

In many Star Trek episodes the Enterprise, in orbit around a planet, suddenly lost engine power and began spiraling down toward the planet's surface. This was utter nonsense, of course, due to conservation of energy: the ship had no way of getting rid of energy, so it did not need the engines to replenish it.

Consider, however, the electron in an atom as it orbits the nucleus. The electron does have a way to release energy: it has an acceleration due to its continuously changing direction of motion, and according to classical physics, any accelerating charged particle emits electromagnetic waves. According to classical physics, atoms should collapse!

The solution lies in the observation that a bound state has a minimum energy. An electron in one of the higher-energy atomic states can and does emit photons and hop down step by step in energy. But once it is in the ground state, it cannot emit a photon because there is no lower-energy state for it to go to.

Example 16: Chemical bonds

I began this section with a classical argument that chemical bonds, as in an \(\text{H}_2\) molecule, should not exist. Quantum physics explains why this type of bonding does in fact occur. When the atoms are next to each other, the electrons are shared between them. The “box” is about twice as wide, and a larger box allows a smaller energy. Energy is required in order to separate the atoms. (A qualitatively different type of bonding is discussed on page 891. Example 23 on page 887 revisits the \(\text{H}_2\) bond in more detail.)

h / Two hydrogen atoms bond to form an \(\text{H}_2\) molecule. In the molecule, the two electrons' wave patterns overlap , and are about twice as wide.
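A crude numerical version of the bonding argument uses the particle-in-a-box ground-state energy \(E_1=h^2/8mL^2\) and simply doubles the box width when the atoms bond. The box width below is illustrative; this is only the scaling argument, not a realistic model of \(\text{H}_2\).

```python
# Sharing a double-width box lowers the electrons' energy, so pulling
# the atoms apart costs energy.  Box widths here are illustrative.
h = 6.63e-34    # Planck's constant, J·s
m = 9.11e-31    # electron mass, kg

def ground_state_energy(L):
    return h**2 / (8 * m * L**2)   # E_1 = h^2 / (8 m L^2)

L = 1.0e-10     # roughly one atomic diameter, m (illustrative)
E_separate = 2 * ground_state_energy(L)       # two electrons, one box each
E_shared = 2 * ground_state_energy(2 * L)     # two electrons, double-width box

print(E_shared < E_separate)   # True: sharing lowers the total energy
print(E_separate - E_shared)   # crude estimate of the energy to separate them
```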

Discussion Questions

◊ Neutrons attract each other via the strong nuclear force, so according to classical physics it should be possible to form nuclei out of clusters of two or more neutrons, with no protons at all. Experimental searches, however, have failed to turn up evidence of a stable two-neutron system (dineutron) or larger stable clusters. These systems are apparently not just unstable in the sense of being able to beta decay but unstable in the sense that they don't hold together at all. Explain based on quantum physics why a dineutron might spontaneously fly apart.

◊ The following table shows the energy gap between the ground state and the first excited state for four nuclei, in units of picojoules. (The nuclei were chosen to be ones that have similar structures, e.g., they are all spherical in shape.)

nucleus               energy gap (picojoules)
\(^{4}\text{He}\)         3.234
\(^{16}\text{O}\)         0.968
\(^{40}\text{Ca}\)        0.536
\(^{208}\text{Pb}\)       0.418

Explain the trend in the data.

13.3.4 The uncertainty principle and measurement

Eliminating randomness through measurement?

A common reaction to quantum physics, among both early-twentieth-century physicists and modern students, is that we should be able to get rid of randomness through accurate measurement. If I say, for example, that it is meaningless to discuss the path of a photon or an electron, one might suggest that we simply measure the particle's position and velocity many times in a row. This series of snapshots would amount to a description of its path.

A practical objection to this plan is that the process of measurement will have an effect on the thing we are trying to measure. This may not be of much concern, for example, when a traffic cop measures your car's motion with a radar gun, because the energy and momentum of the radar pulses are insufficient to change the car's motion significantly. But on the subatomic scale it is a very real problem. Making a videotape through a microscope of an electron orbiting a nucleus is not just difficult, it is theoretically impossible. The video camera makes pictures of things using light that has bounced off them and come into the camera. If even a single photon of visible light were to bounce off of the electron we were trying to study, the electron's recoil would be enough to change its behavior significantly.

The Heisenberg uncertainty principle

i / Werner Heisenberg (1901-1976). Heisenberg helped to develop the foundations of quantum mechanics, including the Heisenberg uncertainty principle. He was the scientific leader of the Nazi atomic-bomb program up until its cancellation in 1942, when the military decided that it was too ambitious a project to undertake in wartime, and too unlikely to produce results.

 

This insight, that measurement changes the thing being measured, is the kind of idea that clove-cigarette-smoking intellectuals outside of the physical sciences like to claim they knew all along. If only, they say, the physicists had made more of a habit of reading literary journals, they could have saved a lot of work. The anthropologist Margaret Mead has recently been accused of inadvertently encouraging her teenaged Samoan informants to exaggerate the freedom of youthful sexual experimentation in their society. If this is considered a damning critique of her work, it is because she could have done better: other anthropologists claim to have been able to eliminate the observer-as-participant problem and collect untainted data.

The German physicist Werner Heisenberg, however, showed that in quantum physics, any measuring technique runs into a brick wall when we try to improve its accuracy beyond a certain point. Heisenberg showed that the limitation is a question of what there is to be known, even in principle, about the system itself, not of the ability or inability of a specific measuring device to ferret out information that is knowable but not previously hidden.

Suppose, for example, that we have constructed an electron in a box (quantum dot) setup in our laboratory, and we are able to adjust the length \(L\) of the box as desired. All the standing wave patterns pretty much fill the box, so our knowledge of the electron's position is of limited accuracy. If we write \(\Delta x\) for the range of uncertainty in our knowledge of its position, then \(\Delta x\) is roughly the same as the length of the box:

\[\begin{equation*} \Delta x \approx L \end{equation*}\]

If we wish to know its position more accurately, we can certainly squeeze it into a smaller space by reducing \(L\), but this has an unintended side-effect. A standing wave is really a superposition of two traveling waves going in opposite directions. The equation \(p=h/\lambda \) really only gives the magnitude of the momentum vector, not its direction, so we should really interpret the wave as a 50/50 mixture of a right-going wave with momentum \(p=h/\lambda \) and a left-going one with momentum \(p=-h/\lambda \). The uncertainty in our knowledge of the electron's momentum is \(\Delta p=2h/\lambda\), covering the range between these two values. Even if we make sure the electron is in the ground state, whose wavelength \(\lambda =2L\) is the longest possible, we have an uncertainty in momentum of \(\Delta p=h/L\). In general, we find

\[\begin{equation*} \Delta p \gtrsim h/L , \end{equation*}\]

with equality for the ground state and inequality for the higher-energy states. Thus if we reduce \(L\) to improve our knowledge of the electron's position, we do so at the cost of knowing less about its momentum. This trade-off is neatly summarized by multiplying the two equations to give

\[\begin{equation*} \Delta p\Delta x \gtrsim h . \end{equation*}\]

Although we have derived this in the special case of a particle in a box, it is an example of a principle of more general validity:

The Heisenberg uncertainty principle

It is not possible, even in principle, to know the momentum and the position of a particle simultaneously and with perfect accuracy. The uncertainties in these two quantities are always such that \(\Delta p\Delta x \gtrsim h\).

(This approximation can be made into a strict inequality, \(\Delta p\Delta x>h/4\pi\), but only with more careful definitions, which we will not bother with.)
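As an arithmetic check on this trade-off (not part of the original text), here is a short Python sketch that evaluates \(\Delta p\,\Delta x\) for the standing-wave patterns in a box, using the relations derived above (\(\lambda=2L/n\), \(\Delta p=2h/\lambda\), \(\Delta x\approx L\)):

```python
h = 6.63e-34   # Planck's constant, J*s

def uncertainty_product(L, n):
    """Delta-p times Delta-x for the n-th standing-wave pattern in a box
    of length L.  The n-th pattern has wavelength 2L/n, so the two
    traveling-wave components have momenta +/- h/lambda, giving
    Delta-p = 2h/lambda = nh/L, while Delta-x is roughly L."""
    wavelength = 2 * L / n
    delta_p = 2 * h / wavelength
    delta_x = L
    return delta_p * delta_x

for n in range(1, 5):
    print(n, uncertainty_product(1e-9, n) / h)   # the ratio is just n
```

The product comes out to \(nh\), so it equals \(h\) for the ground state and exceeds it for every excited state, exactly as the inequality requires.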

Note that although I encouraged you to think of this derivation in terms of a specific real-world system, the quantum dot, no reference was ever made to any specific laboratory equipment or procedures. The argument is simply that we cannot know the particle's position very accurately unless it has a very well defined position, it cannot have a very well defined position unless its wave-pattern covers only a very small amount of space, and its wave-pattern cannot be thus compressed without giving it a short wavelength and a correspondingly uncertain momentum. The uncertainty principle is therefore a restriction on how much there is to know about a particle, not just on what we can know about it with a certain technique.

 

Example 17: An estimate for electrons in atoms

\(\triangleright\) A typical energy for an electron in an atom is on the order of \((\text{1 volt})\cdot e\), which corresponds to a speed of about 1% of the speed of light. If a typical atom has a size on the order of 0.1 nm, how close are the electrons to the limit imposed by the uncertainty principle?

\(\triangleright\) If we assume the electron moves in all directions with equal probability, the uncertainty in its momentum is roughly twice its typical momentum. This is only an order-of-magnitude estimate, so we take \(\Delta p\) to be the same as a typical momentum:

\[\begin{align*} \Delta p \Delta x &= p_{typical} \Delta x \\ &= (m_{electron}) (0.01c) (0.1\times10^{-9}\ \text{m}) \\ &= 3\times 10^{-34}\ \text{J}\!\cdot\!\text{s} \end{align*}\]

This is on the same order of magnitude as Planck's constant, so evidently the electron is “right up against the wall.” (The fact that it is somewhat less than \(h\) is of no concern since this was only an estimate, and we have not stated the uncertainty principle in its most exact form.)
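The numbers in this example are easy to verify with a few lines of Python; the constants and the 1%-of-\(c\) speed are the ones quoted above:

```python
m_e = 9.11e-31   # electron mass, kg
c = 3.0e8        # speed of light, m/s
h = 6.63e-34     # Planck's constant, J*s

p_typical = m_e * 0.01 * c    # momentum at 1% of the speed of light
delta_x = 0.1e-9              # atomic size, m
product = p_typical * delta_x

print(product)       # about 3e-34 J*s
print(product / h)   # a little less than 1: "right up against the wall"
```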

 

self-check:

If we were to apply the uncertainty principle to human-scale objects, what would be the significance of the small numerical value of Planck's constant?

(answer in the back of the PDF version of the book)

 

Measurement and Schrödinger's cat

On p. 847 I briefly mentioned an issue concerning measurement that we are now ready to address carefully. If you hang around a laboratory where quantum-physics experiments are being done and secretly record the physicists' conversations, you'll hear them say many things that assume the probability interpretation of quantum mechanics. Usually they will speak as though the randomness of quantum mechanics enters the picture when something is measured. In the digital camera experiments of section 13.2, for example, they would casually describe the detection of a photon at one of the pixels as if the moment of detection was when the photon was forced to “make up its mind.” Although this mental cartoon usually works fairly well as a description of things they experience in the lab, it cannot ultimately be correct, because it attributes a special role to measurement, which is really just a physical process like all other physical processes.4

If we are to find an interpretation that avoids giving any special role to measurement processes, then we must think of the entire laboratory, including the measuring devices and the physicists themselves, as one big quantum-mechanical system made out of protons, neutrons, electrons, and photons. In other words, we should take quantum physics seriously as a description not just of microscopic objects like atoms but of human-scale (“macroscopic”) things like the apparatus, the furniture, and the people.

The most celebrated example is called the Schrödinger's cat experiment. Luckily for the cat, there probably was no actual experiment --- it was simply a “thought experiment” that the Austrian theorist Schrödinger discussed with his colleagues. Schrödinger wrote:

One can even construct quite burlesque cases. A cat is shut up in a steel container, together with the following diabolical apparatus (which one must keep out of the direct clutches of the cat): In a Geiger tube [radiation detector] there is a tiny mass of radioactive substance, so little that in the course of an hour perhaps one atom of it disintegrates, but also with equal probability not even one; if it does happen, the counter [detector] responds and ... activates a hammer that shatters a little flask of prussic acid [filling the chamber with poison gas]. If one has left this entire system to itself for an hour, then one will say to himself that the cat is still living, if in that time no atom has disintegrated. The first atomic disintegration would have poisoned it.

 

Now comes the strange part. Quantum mechanics describes the particles the cat is made of as having wave properties, including the property of superposition. Schrödinger describes the wavefunction of the box's contents at the end of the hour:

The wavefunction of the entire system would express this situation by having the living and the dead cat mixed ... in equal parts [50/50 proportions]. The uncertainty originally restricted to the atomic domain has been transformed into a macroscopic uncertainty...

At first Schrödinger's description seems like nonsense. When you opened the box, would you see two ghostlike cats, as in a doubly exposed photograph, one dead and one alive? Obviously not. You would have a single, fully material cat, which would either be dead or very, very upset. But Schrödinger has an equally strange and logical answer for that objection. In the same way that the quantum randomness of the radioactive atom spread to the cat and made its wavefunction a random mixture of life and death, the randomness spreads wider once you open the box, and your own wavefunction becomes a mixture of a person who has just killed a cat and a person who hasn't.5

Discussion Questions

◊ Compare \(\Delta p\) and \(\Delta x\) for the two lowest energy levels of the one-dimensional particle in a box, and discuss how this relates to the uncertainty principle.

◊ On a graph of \(\Delta p\) versus \(\Delta x\), sketch the regions that are allowed and forbidden by the Heisenberg uncertainty principle. Interpret the graph: Where does an atom lie on it? An elephant? Can either \(p\) or \(x\) be measured with perfect accuracy if we don't care about the other?

13.3.5 Electrons in electric fields

 

So far the only electron wave patterns we've considered have been simple sine waves, but whenever an electron finds itself in an electric field, it must have a more complicated wave pattern. Let's consider the example of an electron being accelerated by the electron gun at the back of a TV tube. Newton's laws are not useful, because they implicitly assume that the path taken by the particle is a meaningful concept. Conservation of energy is still valid in quantum physics, however. In terms of energy, the electron is moving from a region of low voltage into a region of higher voltage. Since its charge is negative, its electrical energy goes down as it moves to a higher voltage, and its kinetic energy goes up by an equal amount, keeping the total energy constant. Increasing kinetic energy implies a growing momentum, and therefore a shortening wavelength, j.

j / An electron in a gentle electric field gradually shortens its wavelength as it gains energy.

 

The wavefunction as a whole does not have a single well-defined wavelength, but the wave changes so gradually that if you only look at a small part of it you can still pick out a wavelength and relate it to the momentum and energy. (The picture actually exaggerates by many orders of magnitude the rate at which the wavelength changes.)

But what if the electric field were stronger? The electric field in a TV is only \(\sim10^5\) N/C, but the electric field within an atom is more like \(10^{12}\) N/C. In figure l, the wavelength changes so rapidly that there is nothing that looks like a sine wave at all. We could get a rough idea of the wavelength in a given region by measuring the distance between two peaks, but that would only be an approximation. Suppose we want to know the wavelength at point \(P\). The trick is to construct a sine wave, like the one shown with the dashed line, which matches the curvature of the actual wavefunction as closely as possible near \(P\). The sine wave that matches as well as possible is called the “osculating” curve, from a Latin word meaning “to kiss.” The wavelength of the osculating curve is the wavelength that will relate correctly to conservation of energy.

l / A typical wavefunction of an electron in an atom (heavy curve) and the osculating sine wave (dashed curve) that matches its curvature at point P.
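The osculating wavelength can be computed from conservation of energy alone: \(\lambda=h/p\) with \(p=\sqrt{2m(E-U)}\). The sketch below does this for an electron; the particular energies are made-up values for illustration, not taken from the figures:

```python
import math

h = 6.63e-34      # Planck's constant, J*s
m_e = 9.11e-31    # electron mass, kg
eV = 1.602e-19    # joules per electron-volt

def local_wavelength(E, U):
    """Osculating wavelength h/p, with p from K = E - U = p^2/2m.
    Valid only in classically allowed regions, where E > U."""
    p = math.sqrt(2 * m_e * (E - U))
    return h / p

E = 10 * eV   # total energy (an assumed value)
for U in (0 * eV, 4 * eV, 8 * eV):
    print(local_wavelength(E, U))   # wavelength grows as K = E - U shrinks
```

Where the interaction energy rises, the kinetic energy falls and the local wavelength stretches out, which is exactly the behavior shown in figures j and l.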

Tunneling

k / The wavefunction's tails go where classical physics says they shouldn't.

 

We implicitly assumed that the particle-in-a-box wavefunction would cut off abruptly at the sides of the box, k/1, but that would be unphysical. A kink has infinite curvature, and curvature is related to energy, so it can't be infinite. A physically realistic wavefunction must always “tail off” gradually, k/2. In classical physics, a particle can never enter a region in which its interaction energy \(U\) would be greater than the amount of energy it has available. But in quantum physics the wavefunction will always have a tail that reaches into the classically forbidden region. If it were not for this effect, called tunneling, the fusion reactions that power the sun would not occur, because of the high electrical energy nuclei need in order to get close together! Tunneling is discussed in more detail in the following subsection.

13.3.6 The Schrödinger equation

 

In subsection 13.3.5 we were able to apply conservation of energy to an electron's wavefunction, but only by using the clumsy graphical technique of osculating sine waves as a measure of the wave's curvature. You have learned a more convenient measure of curvature in calculus: the second derivative. To relate the two approaches, we take the second derivative of a sine wave:

\[\begin{align*} \frac{d^2}{dx^2}\sin(2\pi x/\lambda) &= \frac{d}{dx}\left(\frac{2\pi}{\lambda}\cos\frac{2\pi x}{\lambda}\right) \\ &= -\left(\frac{2\pi}{\lambda}\right)^2 \sin\frac{2\pi x}{\lambda} \end{align*}\]

 

Taking the second derivative gives us back the same function, but with a minus sign and a constant out in front that is related to the wavelength. We can thus relate the second derivative to the osculating wavelength:

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^2\Psi \tag{1}\end{equation*}\]

 

This could be solved for \(\lambda \) in terms of \(\Psi \), but it will turn out below to be more convenient to leave it in this form.

Applying this to conservation of energy, we have

\[\begin{align*} \begin{split} E &= K + U \\ &= \frac{p^2}{2m} + U \\ &= \frac{(h/\lambda)^2}{2m} + U \end{split} \tag{2} \end{align*}\]

 

Note that both equation (1) and equation (2) have \(\lambda^2\) in the denominator. We can simplify our algebra by multiplying both sides of equation (2) by \(\Psi \) to make it look more like equation (1):

\[\begin{align*} E \cdot \Psi &= \frac{(h/\lambda)^2}{2m}\Psi + U \cdot \Psi \\ &= \frac{1}{2m}\left(\frac{h}{2\pi}\right)^2\left(\frac{2\pi}{\lambda}\right)^2\Psi + U \cdot \Psi \\ &= -\frac{1}{2m}\left(\frac{h}{2\pi}\right)^2 \frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{align*}\]

 

Further simplification is achieved by using the symbol \(\hbar\) (\(h\) with a slash through it, read “h-bar”) as an abbreviation for \(h/2\pi \). We then have the important result known as the Schrödinger equation:

 

\[\begin{equation*} E \cdot \Psi = -\frac{\hbar^2}{2m}\frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{equation*}\]

 

(Actually this is a simplified version of the Schrödinger equation, applying only to standing waves in one dimension.) Physically it is a statement of conservation of energy. The total energy \(E\) must be constant, so the equation tells us that a change in interaction energy \(U\) must be accompanied by a change in the curvature of the wavefunction. This change in curvature relates to a change in wavelength, which corresponds to a change in momentum and kinetic energy.
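Since the Schrödinger equation is a differential equation, this statement can be checked numerically. The sketch below uses a “shooting method,” in units where \(\hbar=m=1\) and the box has length 1 with \(U=0\) inside: it integrates \(\Psi''=-2E\Psi\) from one wall and adjusts \(E\) until \(\Psi\) vanishes at the other wall. The method and the choice of units are mine, not the book's; the result recovers the particle-in-a-box ground state, which in these units is \(E_1=\pi^2/2\):

```python
import math

def psi_at_wall(E, n_steps=2000):
    """Integrate psi'' = -2 E psi (hbar = m = 1, U = 0 inside the box)
    from x = 0 to x = 1 with psi(0) = 0, psi'(0) = 1; return psi(1)."""
    dx = 1.0 / n_steps
    psi, dpsi = 0.0, 1.0
    for _ in range(n_steps):
        # one midpoint (second-order Runge-Kutta) step
        k1p, k1d = dpsi, -2 * E * psi
        k2p = dpsi + 0.5 * dx * k1d
        k2d = -2 * E * (psi + 0.5 * dx * k1p)
        psi += dx * k2p
        dpsi += dx * k2d
    return psi

def ground_state_energy(lo=1.0, hi=8.0):
    """Bisect on E until psi(1) = 0, the standing-wave boundary condition."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psi_at_wall(lo) * psi_at_wall(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E1 = ground_state_energy()
print(E1, math.pi**2 / 2)   # both approximately 4.935
```

The boundary condition \(\Psi=0\) at the walls is what quantizes the energy: only special values of \(E\) produce a wavefunction whose curvature brings it back to zero at the far wall.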

 

self-check:

Considering the assumptions that were made in deriving the Schrödinger equation, would it be correct to apply it to a photon? To an electron moving at relativistic speeds?

(answer in the back of the PDF version of the book)

 

Usually we know right off the bat how \(U\) depends on \(x\), so the basic mathematical problem of quantum physics is to find a function \(\Psi(x)\) that satisfies the Schrödinger equation for a given interaction-energy function \(U(x)\). An equation, such as the Schrödinger equation, that specifies a relationship between a function and its derivatives is known as a differential equation.

The detailed study of the solution of the Schrödinger equation is beyond the scope of this book, but we can gain some important insights by considering the easiest version of the Schrödinger equation, in which the interaction energy \(U\) is constant. We can then rearrange the Schrödinger equation as follows:

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = \frac{2m(U-E)}{\hbar^2}\, \Psi , \end{equation*}\]

which boils down to

\[\begin{equation*} \frac{d^2\Psi}{dx^2} = a\Psi , \end{equation*}\]

where, according to our assumptions, \(a\) is independent of \(x\). We need to find a function whose second derivative is the same as the original function except for a multiplicative constant. The only functions with this property are sine waves and exponentials:

\[\begin{align*} \frac{d^2}{dx^2}\left[\:q\sin(rx+s)\:\right] &= -qr^2\sin(rx+s) \\ \frac{d^2}{dx^2}\left[qe^{rx+s}\right] &= qr^2e^{rx+s} \end{align*}\]

 

The sine wave gives negative values of \(a\), \(a=-r^2\), and the exponential gives positive ones, \(a=r^2\). The former applies to the classically allowed region with \(U\lt E\), the latter to the classically forbidden region with \(U>E\).
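A finite-difference check makes the sign of \(a\) concrete. The values of \(r\), \(s\), and the sample point below are arbitrary choices, and \(q\) cancels out of the ratio:

```python
import math

def second_derivative(f, x, h=1e-5):
    """Centered finite-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

r, s = 1.5, 0.3
sine = lambda x: math.sin(r * x + s)
expo = lambda x: math.exp(r * x + s)

x = 0.7
ratio_sine = second_derivative(sine, x) / sine(x)   # a = -r^2
ratio_expo = second_derivative(expo, x) / expo(x)   # a = +r^2
print(ratio_sine, ratio_expo)   # approximately -2.25 and +2.25
```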

 

m / Tunneling through a barrier.

 

This leads us to a quantitative calculation of the tunneling effect discussed briefly in the preceding subsection. The wavefunction evidently tails off exponentially in the classically forbidden region. Suppose, as shown in figure m, a wave-particle traveling to the right encounters a barrier that it is classically forbidden to enter. Although the form of the Schrödinger equation we're using technically does not apply to traveling waves (because it makes no reference to time), it turns out that we can still use it to make a reasonable calculation of the probability that the particle will make it through the barrier. Inside the barrier, the physically relevant solution is the decaying exponential \(\Psi=qe^{-rx+s}\), with \(r=\sqrt{2m(U-E)}/\hbar>0\). If we let the barrier's width be \(w\), then the ratio of the wavefunction on the right side of the barrier to the wavefunction on the left is

\[\begin{equation*} \frac{qe^{-r(x+w)+s}}{qe^{-rx+s}} = e^{-rw} . \end{equation*}\]

Probabilities are proportional to the squares of wavefunctions, so the probability of making it through the barrier is

\[\begin{align*} P &= e^{-2rw} \\ &= \exp\left(-\frac{2w}{\hbar}\sqrt{2m(U-E)}\right) \end{align*}\]
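Plugging illustrative numbers into this formula shows how dramatically it behaves at different scales. The barrier heights and widths below are made-up values chosen for illustration, not taken from the text:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
m_e = 9.11e-31     # electron mass, kg
eV = 1.602e-19     # joules per electron-volt

def tunneling_probability(m, U_minus_E, w):
    """P = exp(-(2w/hbar) sqrt(2m(U-E))) for a rectangular barrier."""
    return math.exp(-2 * w / hbar * math.sqrt(2 * m * U_minus_E))

# An electron facing a barrier 1 eV too high and 0.1 nm wide (made-up,
# atom-scale numbers): tunneling is quite likely.
print(tunneling_probability(m_e, 1 * eV, 0.1e-9))   # roughly 0.36

# A 50 kg person 1000 J short of clearing a 0.1 m wall (made-up numbers):
# the exponent is so enormous that the probability underflows to zero.
print(tunneling_probability(50.0, 1000.0, 0.1))     # 0.0
```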

 

n / The electrical, nuclear, and total interaction energies for an alpha particle escaping from a nucleus.

 

 

self-check:

If we were to apply this equation to find the probability that a person can walk through a wall, what would the small value of Planck's constant imply?

(answer in the back of the PDF version of the book)

 

 

Example 18: Tunneling in alpha decay

Naively, we would expect alpha decay to be a very fast process. The typical speeds of neutrons and protons inside a nucleus are extremely high (see problem 20). If we imagine an alpha particle coalescing out of neutrons and protons inside the nucleus, then at the typical speeds we're talking about, it takes a ridiculously small amount of time for them to reach the surface and try to escape. We could imagine the alpha clattering back and forth inside the nucleus, making a vast number of these “escape attempts” every second.

Consider figure n, however, which shows the interaction energy for an alpha particle escaping from a nucleus. The electrical energy is \(kq_1q_2/r\) when the alpha is outside the nucleus, while its variation inside the nucleus has the shape of a parabola, as a consequence of the shell theorem. The nuclear energy is constant when the alpha is inside the nucleus, because the forces from all the neighboring neutrons and protons cancel out; it rises sharply near the surface, and flattens out to zero over a distance of \(\sim 1\) fm, which is the maximum distance scale at which the strong force can operate. There is a classically forbidden region immediately outside the nucleus, so the alpha particle can only escape by quantum mechanical tunneling. (It's true, but somewhat counterintuitive, that a repulsive electrical force can make it more difficult for the alpha to get out.)

In reality, alpha-decay half-lives are often extremely long --- sometimes billions of years --- because the tunneling probability is so small. Although the shape of the barrier is not a rectangle, the equation for the tunneling probability on page 870 can still be used as a rough guide to our thinking. Essentially the tunneling probability is so small because \(U-E\) is fairly big, typically about 30 MeV at the peak of the barrier.
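A rough numerical version of this argument shows how exquisitely sensitive the half-life is to the barrier. All of the specific values below except the 30 MeV figure are my own assumptions, and the curved barrier of figure n is crudely replaced by a rectangle:

```python
import math

hbar = 1.055e-34       # J*s
MeV = 1.602e-13        # joules per MeV
fm = 1e-15             # meters per femtometer
m_alpha = 6.64e-27     # alpha particle mass, kg

U_minus_E = 30 * MeV   # barrier height above the alpha's energy (from the text)
r = math.sqrt(2 * m_alpha * U_minus_E) / hbar   # decay constant of the tail

attempts_per_second = 1e21   # assumed "escape attempt" rate

def lifetime(w):
    """Mean life for a rectangular barrier of width w (a crude stand-in
    for the curved barrier of figure n)."""
    P = math.exp(-2 * r * w)               # tunneling probability per attempt
    return 1 / (attempts_per_second * P)   # seconds

print(lifetime(10 * fm))   # under a second
print(lifetime(20 * fm))   # ~1e20 s, far longer than the age of the universe
```

Because the width appears in an exponent, merely doubling the assumed barrier width changes the predicted lifetime by some twenty orders of magnitude, which is why observed alpha-decay half-lives span such an enormous range.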

Example 19: The correspondence principle for \(E>U\)

The correspondence principle demands that in the classical limit \(h\rightarrow0\), we recover the correct result for a particle encountering a barrier \(U\), for both \(E\lt U\) and \(E>U\). The \(E\lt U\) case was analyzed in self-check H on p. 870. In the remainder of this example, we analyze \(E>U\), which turns out to be a little trickier.

The particle has enough energy to get over the barrier, and the classical result is that it continues forward at a different speed (a reduced speed if \(U>0\), or an increased one if \(U\lt0\)), then regains its original speed as it emerges from the other side. What happens quantum-mechanically in this case? We would like to get a “tunneling” probability of 1 in the classical limit. The expression derived on p. 870, however, doesn't apply here, because it was derived under the assumption that the wavefunction inside the barrier was an exponential; in the classically allowed case, the barrier isn't classically forbidden, and the wavefunction inside it is a sine wave.

o / A particle encounters a step of height \(U\lt E\) in the interaction energy. Both sides are classically allowed. A reflected wave exists, but is not shown in the figure.

 

We can simplify things a little by letting the width \(w\) of the barrier go to infinity. Classically, after all, there is no possibility that the particle will turn around, no matter how wide the barrier. We then have the situation shown in figure o. The analysis is the same as for any other wave being partially reflected at the boundary between two regions where its velocity differs, and the result is the same as the one found on p. 367. The ratio of the amplitude of the reflected wave to that of the incident wave is \(R = (v_2-v_1)/(v_2+v_1)\). The probability of reflection is \(R^2\). (Counterintuitively, \(R^2\) is nonzero even if \(U\lt0\), i.e., \(v_2>v_1\).)
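The reflection formula is simple enough to evaluate directly; the speeds below are hypothetical:

```python
def reflection_probability(v1, v2):
    """R^2, where R = (v2 - v1)/(v2 + v1) is the amplitude ratio for a
    wave crossing a boundary where its speed changes from v1 to v2."""
    R = (v2 - v1) / (v2 + v1)
    return R**2

print(reflection_probability(1.0, 2.0))   # 1/9: reflected even though v2 > v1
print(reflection_probability(1.0, 1.0))   # 0.0: no step, no reflection
```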

This seems to violate the correspondence principle. There is no \(m\) or \(h\) anywhere in the result, so we seem to have the result that, even classically, the marble in figure p can be reflected!

p / The marble has zero probability of being reflected from the edge of the table. (This example has \(U\lt0\), not \(U>0\) as in figures o and q).

 

The solution to this paradox is that the step in figure o was taken to be completely abrupt --- an idealized mathematical discontinuity. Suppose we make the transition a little more gradual, as in figure q. As shown in problem 17 on p. 380, this reduces the amplitude with which a wave is reflected. By smoothing out the step more and more, we continue to reduce the probability of reflection, until finally we arrive at a barrier shaped like a smooth ramp. More detailed calculations show that this results in zero reflection in the limit where the width of the ramp is large compared to the wavelength.

q / Making the step more gradual reduces the probability of reflection.

Three dimensions

For simplicity, we've been considering the Schrödinger equation in one dimension, so that \(\Psi\) is only a function of \(x\), and has units of \(\text{m}^{-1/2}\) rather than \(\text{m}^{-3/2}\). Since the Schrödinger equation is a statement of conservation of energy, and energy is a scalar, the generalization to three dimensions isn't particularly complicated. The total energy term \(E\cdot\Psi\) and the interaction energy term \(U\cdot\Psi\) involve nothing but scalars, and don't need to be changed at all. In the kinetic energy term, however, we're essentially basing our computation of the kinetic energy on the squared magnitude of the momentum, \(p_x^2\), and in three dimensions this would clearly have to be generalized to \(p_x^2+p_y^2+p_z^2\). The obvious way to achieve this is to replace the second derivative \(d^2\Psi/dx^2\) with the sum \(\partial^2\Psi/\partial x^2+ \partial^2\Psi/\partial y^2+ \partial^2\Psi/\partial z^2\). Here the partial derivative symbol \(\partial\), introduced on page 216, indicates that when differentiating with respect to a particular variable, the other variables are to be considered as constants. This operation on the function \(\Psi\) is notated \(\nabla^2\Psi\), and the derivative-like operator \(\nabla^2=\partial^2/\partial x^2+ \partial^2/\partial y^2+ \partial^2/\partial z^2\) is called the Laplacian. It occurs elsewhere in physics. For example, in classical electrostatics, the voltage in a region of vacuum must be a solution of the equation \(\nabla^2V=0\). Like the second derivative, the Laplacian is essentially a measure of curvature.

 

Example 20: Examples of the Laplacian in two dimensions

\(\triangleright\) Compute the Laplacians of the following functions in two dimensions, and interpret them: \(A=x^2+y^2\), \(B=-x^2-y^2\), \(C=x^2-y^2\).

\(\triangleright\) The first derivative of function \(A\) with respect to \(x\) is \(\partial A/\partial x=2x\). Since \(y\) is treated as a constant in the computation of the partial derivative \(\partial/\partial x\), the second term goes away. The second derivative of \(A\) with respect to \(x\) is \(\partial^2 A/\partial x^2=2\). Similarly we have \(\partial^2 A/\partial y^2=2\), so \(\nabla^2 A=4\).

All derivative operators, including \(\nabla^2\), have the linear property that multiplying the input function by a constant just multiplies the output function by the same constant. Since \(B=-A\), we have \(\nabla^2 B=-4\).

For function \(C\), the \(x\) term contributes a second derivative of 2, but the \(y\) term contributes \(-2\), so \(\nabla^2 C=0\).

The interpretation of the positive sign in \(\nabla^2 A=4\) is that \(A\)'s graph is shaped like a trophy cup, and the cup is concave up. The negative sign in the result for \(\nabla^2 B\) is because \(B\) is concave down. Function \(C\) is shaped like a saddle: since its curvature along one axis is concave up, while the curvature along the other is concave down and equal in magnitude, the function has zero concavity overall.
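These Laplacians are easy to verify with centered finite differences; the sample point below is arbitrary, since all three answers are independent of position:

```python
def laplacian_2d(f, x, y, h=1e-4):
    """Centered finite-difference estimate of d2f/dx2 + d2f/dy2."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

A = lambda x, y: x**2 + y**2     # cup, concave up
B = lambda x, y: -x**2 - y**2    # dome, concave down
C = lambda x, y: x**2 - y**2     # saddle

pt = (0.3, -0.8)                 # an arbitrary sample point
print(laplacian_2d(A, *pt))      # approximately 4
print(laplacian_2d(B, *pt))      # approximately -4
print(laplacian_2d(C, *pt))      # approximately 0
```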

Example 21: A classically allowed region with constant \(U\)

In a classically allowed region with constant \(U\), we expect the solutions to the Schrödinger equation to be sine waves. A sine wave in three dimensions has the form

\[\begin{equation*} \Psi = \sin\left( k_x x + k_y y + k_z z \right) . \end{equation*}\]

When we compute \(\partial^2\Psi/\partial x^2\), double differentiation of \(\sin\) gives \(-\sin\), and the chain rule brings out a factor of \(k_x^2\). Applying all three second derivative operators, we get

\[\begin{align*} \nabla^2\Psi &= \left(-k_x^2-k_y^2-k_z^2\right)\sin\left( k_x x + k_y y + k_z z \right) \\ &= -\left(k_x^2+k_y^2+k_z^2\right)\Psi . \end{align*}\]

The Schrödinger equation gives

\[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= -\frac{\hbar^2}{2m}\cdot -\left(k_x^2+k_y^2+k_z^2\right)\Psi + U\cdot\Psi \\ E-U &= \frac{\hbar^2}{2m}\left(k_x^2+k_y^2+k_z^2\right) , \end{align*}\]

which can be satisfied since we're in a classically allowed region with \(E-U>0\), and the right-hand side is manifestly positive.

Use of complex numbers

In a classically forbidden region, a particle's total energy, \(U+K\), is less than its \(U\), so its \(K\) must be negative. If we want to keep believing in the equation \(K=p^2/2m\), then apparently the momentum of the particle is the square root of a negative number. This is a symptom of the fact that the Schrödinger equation fails to describe all of nature unless the wavefunction and various other quantities are allowed to be complex numbers. In particular it is not possible to describe traveling waves correctly without using complex wavefunctions. Complex numbers were reviewed in subsection 10.5.5, p. 603.

This may seem like nonsense, since real numbers are the only ones that are, well, real! Quantum mechanics can always be related to the real world, however, because its structure is such that the results of measurements always come out to be real numbers. For example, we may describe an electron as having non-real momentum in classically forbidden regions, but its average momentum will always come out to be real (the imaginary parts average out to zero), and it can never transfer a non-real quantity of momentum to another particle.

r / 1. Oscillations can go back and forth, but it's also possible for them to move along a path that bites its own tail, like a circle. Photons act like one, electrons like the other.


2. Back-and-forth oscillations can naturally be described by a segment taken from the real number line, and we visualize the corresponding type of wave as a sine wave. Oscillations around a closed path relate more naturally to the complex number system. The complex number system has rotation built into its structure, e.g., the sequence 1, \(i\), \(i^2\), \(i^3\), ... rotates around the unit circle in 90-degree increments.
3. The double slit experiment embodies the one and only mystery of quantum physics. Either type of wave can undergo double-slit interference.

A complete investigation of these issues is beyond the scope of this book, and this is why we have normally limited ourselves to standing waves, which can be described with real-valued wavefunctions. Figure r gives a visual depiction of the difference between real and complex wavefunctions. The following remarks may also be helpful.

Neither of the graphs in r/2 should be interpreted as a path traveled by something. This isn't anything mystical about quantum physics. It's just an ordinary fact about waves, which we first encountered in subsection 6.1.1, p. 340, where we saw the distinction between the motion of a wave and the motion of a wave pattern. In both examples in r/2, the wave pattern is moving in a straight line to the right.

The helical graph in r/2 shows a complex wavefunction whose value rotates around a circle in the complex plane with a frequency \(f\) related to its energy by \(E=hf\). As it does so, its squared magnitude \(|\Psi|^2\) stays the same, so the corresponding probability stays constant. Which direction does it rotate? This direction is purely a matter of convention, since the distinction between the symbols \(i\) and \(-i\) is arbitrary --- both are equally valid as square roots of \(-1\). We can, for example, arbitrarily say that electrons with positive energies have wavefunctions whose phases rotate counterclockwise, and as long as we follow that rule consistently within a given calculation, everything will work. Note that it is not possible to define anything like a right-hand rule here, because the complex plane shown in the right-hand side of r/2 doesn't represent two dimensions of physical space; unlike a screw going into a piece of wood, an electron doesn't have a direction of rotation that depends on its direction of travel.

 

Example 22: Superposition of complex wavefunctions

\(\triangleright\) The right side of figure r/3 is a cartoonish representation of double-slit interference; it depicts the situation at the center, where symmetry guarantees that the interference is constructive. Suppose that at some off-center point, the two wavefunctions being superposed are \(\Psi_1=b\) and \(\Psi_2=bi\), where \(b\) is a real number with units. Compare the probability of finding the electron at this position with what it would have been if the superposition had been purely constructive, \(b+b=2b\).

\(\triangleright\) The probability per unit volume is proportional to the square of the magnitude of the total wavefunction, so we have

\[\begin{equation*} \frac{P_{\text{off center}}}{P_{\text{center}}} = \frac{|b+bi|^2}{|b+b|^2} = \frac{1^2+1^2}{2^2+0^2} = \frac{1}{2} . \end{equation*}\]
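Python's built-in complex numbers reproduce this ratio directly:

```python
b = 1.0   # any real amplitude; it cancels out of the ratio

psi_off_center = b + b * 1j   # the two waves arrive 90 degrees out of phase
psi_center = b + b            # fully constructive superposition

ratio = abs(psi_off_center)**2 / abs(psi_center)**2
print(ratio)   # approximately 0.5
```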

 

Discussion Questions

◊ The zero level of interaction energy \(U\) is arbitrary, e.g., it's equally valid to pick the zero of gravitational energy to be on the floor of your lab or at the ceiling. Suppose we're doing the double-slit experiment, r/3, with electrons. We define the zero-level of \(U\) so that the total energy \(E=U+K\) of each electron is positive, and we observe a certain interference pattern like the one in figure i on p. 844. What happens if we then redefine the zero-level of \(U\) so that the electrons have \(E\lt0\)?

◊ 

The figure shows a series of snapshots in the motion of two pulses on a coil spring, one negative and one positive, as they move toward one another and superpose. The final image is very close to the moment at which the two pulses cancel completely. The following discussion is simpler if we consider infinite sine waves rather than pulses. How can the cancellation of two such mechanical waves be reconciled with conservation of energy? What about the case of colliding electromagnetic waves?

Quantum-mechanically, the issue isn't conservation of energy, it's conservation of probability, i.e., if there's initially a 100% probability that a particle exists somewhere, we don't want the probability to be more than or less than 100% at some later time. What happens when the colliding waves have real-valued wavefunctions \(\Psi\)? Complex ones? What happens with standing waves?

The figure shows a skateboarder tipping over into a swimming pool with zero initial kinetic energy. There is no friction, the corners are smooth enough to allow the skater to pass over them smoothly, and the vertical distances are small enough so that negligible time is required for the vertical parts of the motion. The pool is divided into a deep end and a shallow end. Their widths are equal. The deep end is four times as deep as the shallow end. (1) Classically, compare the skater's velocity in the left and right regions, and infer the probability of finding the skater in either of the two halves if an observer peeks at a random moment. (2) Quantum-mechanically, this could be a one-dimensional model of an electron shared between two atoms in a diatomic molecule. Compare the electron's kinetic energies, momenta, and wavelengths in the two sides. For simplicity, let's assume that there is no tunneling into the classically forbidden regions. What is the simplest standing-wave pattern that you can draw, and what are the probabilities of finding the electron in one side or the other? Does this obey the correspondence principle?
