# 6.5: The Quantum Harmonic Oscillator

## Basic Features

As we did with the particle-in-a-box, we'll start with a review of the basic features of the quantum harmonic oscillator. Unlike the particle-in-a-box, the first treatment of this potential didn't include the position-space wave functions (other than their general features), so this review will be quite brief. Let's start with the stationary-state Schrödinger equation in position-space:

\[ -\dfrac{\hbar^2}{2m} \dfrac{d^2}{dx^2} \psi_n\left(x\right) + \frac{1}{2}\kappa x^2 \; \psi_n\left(x\right) = E_n\; \psi_n\left(x\right) \]

Alert

*Note that the spring constant for this potential is represented by the Greek letter kappa (\(\kappa\)), to distinguish it from the ubiquitous variable \(k\) that we use to represent the wave number.*

We can employ many of the properties of wave functions and their energy spectra to get some sense of what the wave functions for the energy eigenstates look like:

- **potential is infinite** – Like the infinite square well, this potential will have an infinite number of energy levels.
- **energy levels will be quantized and the ground state is non-zero** – Something we see for all bound states. As with the other wells we have seen, this comes about because we have to fit the interior wave function perfectly between the barriers while matching boundary conditions. This is why we introduced the "\(n\)" as a subscript to the wave function and the energy eigenvalues.
- **parity flips every time we go up another energy level** – The ground state should be an even function, the first excited state an odd function, and so on.
- **potential grows to infinity, but for any given energy level the "wall" is finite** – The boundary conditions do not require that the wave function vanish at the classical stopping points, as they did for the box, because the walls are not infinitely high at the points where the classically-forbidden region begins. The wave function should therefore "leak" into the walls, giving the particle a non-zero probability of being found in the classically-forbidden region.

There are some ways that these wave functions should differ from those for the infinite square well:

- **gap between the walls grows as the energy level grows** – As usual, an antinode is added every time we go up an energy level, in order to alternate between even and odd functions. For the infinite square well, this was easy to account for, since the distance between the walls never changed: the wavelength in going from level \(n\) to level \(n+1\) was reduced by a factor of \(\frac{n}{n+1}\). But for the harmonic oscillator potential, the classical turning points get farther apart as the energy grows. So while each energy level requires an additional half-wavelength, those wavelengths don't need to shrink as fast as the levels rise in order to fit between the turning points. This means that the jumps between energy levels will not be as great for the harmonic oscillator as they were for the infinite square well (whose levels were proportional to \(n^2\)).
- **classical limit (very high energy levels) is different from the infinite square well** – As the energy levels in the box get higher, the number of antinodes increases, and at very high energies there are antinodes virtually everywhere within the box. Every antinode corresponds to an equal probability amplitude, so at very high energies the probability distribution is uniform. This high-energy limit is called the *classical limit*, and indeed for the box we get the correct classical result. We don't yet know what the wave functions will look like for the harmonic oscillator, but classically we do not expect the probability distribution to be uniform for a mass on a spring, as the mass spends significantly more time near the turning points than near the center. So the stationary-state wave functions for very high energies will not converge to a uniform distribution, and in fact should peak at the classical turning points.
- **potential is not constant within the well** – For the infinite square well, the potential within the well is constant (zero), making the solution in that region a combination of two opposite-moving plane waves. In this case, the potential changes continually, so we expect that we'll need to sum an infinite number of plane waves. It isn't clear what the spectral content of the full stationary-state wave function will be, and we won't solve this problem from scratch, but we will examine the solution nonetheless, as it has some illuminating features.
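The classical-limit claims above can be tested numerically. The following sketch is an illustration only, in assumed units \(\hbar=m=\kappa=1\) (so that \(\psi_n(x)\propto H_n(x)e^{-x^2/2}\) and \(E_n = n+\tfrac{1}{2}\), forms that are derived later in this section); it checks that even at a fairly high level (\(n=20\)) the probability distribution is not uniform and peaks near the classical turning points.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

# Sketch: test the classical-limit claims for a fairly high level, n = 20.
# Units are an illustrative assumption: hbar = m = kappa = 1, so that
# psi_n(x) ~ H_n(x) exp(-x^2/2) and E_n = n + 1/2.
n = 20
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]

coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
Hn = hermval(x, coeffs)                      # physicists' Hermite H_20(x)
norm = np.sqrt(1.0 / (2**n * factorial(n) * np.sqrt(np.pi)))
prob = (norm * Hn * np.exp(-x**2 / 2))**2    # probability density |psi_20|^2

x_turn = np.sqrt(2 * (n + 0.5))   # classical turning point: E_n = (1/2) x_t^2

# A classical oscillator spends 1/3 of its time in |x| < x_t/2; a uniform
# distribution would put 1/2 of the probability there.
central = np.sum(prob[np.abs(x) < x_turn / 2]) * dx
print(central)   # close to 1/3 -- not uniform, even at high n

# The density peaks near the turning points, not at the center.
peak_outer = prob[np.abs(x - x_turn) < 0.8].max()
peak_center = prob[np.abs(x) < 0.8].max()
print(peak_outer > peak_center)
```

The integrated probability in the central region comes out near the classical value of one third, and the tallest antinode sits just inside the turning point, exactly as argued above.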

## Wave Functions

A solution of Equation 6.5.1 with proper boundary conditions yields stationary-state wave functions and an energy spectrum consistent with the above observations. Solving this differential equation "from scratch" gets too far into the weeds mathematically, but we can make some educated guesses and work our way "backwards" to the rest. As a start, we note that we need two derivatives to give back the wave function itself, multiplied by a constant (the energy eigenvalue) plus a function of \(x^2\) (the potential energy term). Whenever derivatives must return the original function, we think of an exponential. The problem is getting an \(x^2\) factor from two derivatives. Note that if the wave function has an \(x^2\) *in the exponent*, then a single derivative will bring down a \(2x\) from the chain rule. Another such derivative will bring down a second factor of \(2x\), giving us the \(x^2\) we need, and the product rule resulting from the second derivative will also give a constant term. So let's try this wave function:

\[\psi\left(x\right)=Ae^{-\alpha x^2}\]

This is just the general form that we are trying. To extract more information, we need to plug it into Schrödinger's equation and see what comes out. We will also need to eventually normalize it, so it can be used to compute probabilities, expectation values, etc. So putting this into Equation 6.5.1 gives:

\[-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \left[Ae^{-\alpha x^2}\right] + \frac{1}{2}\kappa x^2 \left[Ae^{-\alpha x^2}\right] = E_n \left[Ae^{-\alpha x^2}\right]\]

Taking the derivatives and canceling the \(e^{-\alpha x^2}\) functions that appear in every term gives:

\[-\frac{\hbar^2}{2m}\left(-2\alpha+4\alpha^2x^2\right)+\frac{1}{2}\kappa x^2 = E_n\]

For \(\psi\left(x\right)\) to be a solution to the Schrödinger equation, this equation has to hold for all values of \(x\). This means that the coefficients of \(x^2\) must cancel:

\[0=-\frac{2\hbar^2}{m}\alpha^2+\frac{1}{2}\kappa~~~\Rightarrow~~~\alpha = \frac{\sqrt{\kappa m}}{2\hbar}\]

With the \(x^2\) terms canceling, the constant terms are left behind, giving:

\[E_n=\frac{\hbar^2\alpha}{m}\]

Putting together these two results gives the energy in terms of given values:

\[E_n=\frac{\hbar^2 \frac{\sqrt{\kappa m}}{2\hbar}}{m}=\frac{1}{2}\hbar\sqrt{\frac{\kappa}{m}}\]
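This result is easy to verify numerically: applying the Hamiltonian to the gaussian trial function on a grid, the ratio \(\hat H\psi/\psi\) should be constant and equal to \(\frac{1}{2}\hbar\sqrt{\kappa/m}\). The sketch below is an illustration in assumed units \(\hbar=m=\kappa=1\), which make \(\alpha=\frac{1}{2}\) and \(E_0=\frac{1}{2}\).

```python
import numpy as np

# Sketch: check that the gaussian trial function solves the Schrodinger
# equation with eigenvalue E_0 = (1/2) hbar sqrt(kappa/m). Illustrative
# units: hbar = m = kappa = 1, so alpha = 0.5 and E_0 should equal 0.5.
hbar = m = kappa = 1.0
alpha = np.sqrt(kappa * m) / (2 * hbar)

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
psi = np.exp(-alpha * x**2)          # unnormalized ground-state guess

# Second derivative by central differences (interior points only)
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2

# Apply the Hamiltonian, then divide by psi to read off the eigenvalue
H_psi = -hbar**2 / (2 * m) * d2psi + 0.5 * kappa * x[1:-1]**2 * psi[1:-1]
E_estimate = H_psi / psi[1:-1]

print(E_estimate.min(), E_estimate.max())   # both ~0.5, constant across x
```

That the ratio is the *same* at every grid point is the numerical signature of an eigenstate: \(\hat H\psi\) is proportional to \(\psi\) everywhere.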

Wait a second. Where is the \(n\) on the right side of the equation? The answer is that this choice of \(\psi\left(x\right)\) solves the differential equation, so it is a wave function for an eigenstate of energy, but only one – unlike the particle-in-a-box, we are not able to use the "nodes must exist at both ends" criterion to get all of the eigenstates at once. Okay, so which eigenstate is this? We can answer this using the knowledge that each eigenstate has a unique number of antinodes, starting with a single antinode for the ground state. Well, the function we have chosen (known as a *gaussian*) has only a single antinode, located at \(x=0\), so it must be the ground state!

**Figure 6.5.1 – A Gaussian Wave Function in a Spring Potential Energy Well**

There are a few things to note from this result and the figure above. First, we see that in fact the ground state energy is above the bottom of the well, as we expected it to be. Second, the wave function has a built-in exponential decay in the classically forbidden region – there's no need to stitch together two different functions as we did for the finite square well with the sinusoidal and exponential functions. Third, setting the total energy of this state equal to the potential energy and solving for \(x\) gives us the classical turning points in terms of the particle mass and spring constant. And finally, it should be noted that unlike the previous cases, it is conventional to designate the ground state of the quantum harmonic oscillator with a zero subscript, rather than a one, for reasons that will become clear when we discuss the energy spectrum shortly.

There is still a bit of unfinished work to be done on this particular eigenstate. We found the value of the parameter \(\alpha\), but we do not yet have the value of \(A\) – we have not yet normalized the wave function. This is a definite integral we can just look up:

\[1=\int\limits_{-\infty}^{+\infty}\left|\psi\left(x\right)\right|^2dx=A^2\int\limits_{-\infty}^{+\infty}e^{-2\alpha x^2}dx=A^2\left(\sqrt{\frac{\pi}{2\alpha}}\right)~~~\Rightarrow~~~A=\left(\frac{2\alpha}{\pi}\right)^{\frac{1}{4}}\]
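As a quick numerical sanity check (with \(\alpha=\frac{1}{2}\) chosen purely for illustration), the normalized gaussian should integrate to unit total probability:

```python
import numpy as np

# Sketch: verify that A = (2*alpha/pi)**(1/4) normalizes the gaussian.
# alpha = 0.5 is an arbitrary illustrative value.
alpha = 0.5
A = (2 * alpha / np.pi) ** 0.25

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = A * np.exp(-alpha * x**2)
total_prob = np.sum(psi**2) * dx    # Riemann sum of |psi|^2
print(total_prob)                   # ~1.0
```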

What about the other energy eigenstates? The ground state must have even symmetry about the origin, and indeed the gaussian wave function given above has this property. All the odd-numbered excited states must have odd symmetry, while all the even-numbered excited states have even symmetry (remember, the ground state is \(n=0\)). It turns out that all of the excited states only differ from the ground state by multiplying the gaussian by a polynomial, known as a *Hermite polynomial*. We won't worry about how to generate these polynomials, but it is possible to understand a few of their features just using what we know about wave functions of bound particles.

First, the polynomials for odd-numbered states must include powers of \(x\) that are odd only. That way, when they multiply the symmetric gaussian, they will have the proper odd symmetry. Similarly, even-numbered states must involve only even powers of \(x\) (the ground state polynomial includes \(x^0\), which is an even power).

Second, the number of nodes must go up by one with every increase in level. Nodes are crossings of the \(x\)-axis, and this tracks the order of the polynomial. Therefore, the Hermite polynomials must look like:

\[ \begin{array}{l} n=0: & H_0\left(x\right)=Ax^0 \\ n=1: & H_1\left(x\right)=Bx^1 \\ n=2: & H_2\left(x\right)=Cx^0+Dx^2 \\ n=3: & H_3\left(x\right)=Ex^1+Fx^3 \\ \vdots & \vdots \end{array} \]

One can derive the unknown constants in these polynomials in a brute-force manner by multiplying them by the gaussian, and plugging the result into the differential equation, just as we essentially did for the ground state above. We should probably be a bit more precise about what we mean by the Hermite polynomial "multiplying the gaussian," so here is the actual normalized wave function of the \(n^{th}\) energy eigenstate in position space, in terms of \(H_n\left(x\right)\):

\[ \psi_n\left(x\right)=\left(\dfrac{\beta}{2^n n! \sqrt{\pi}}\right)^\frac{1}{2} H_n\left(\beta x\right) e^{-\dfrac{\left(\beta x\right)^2}{2}} = \dfrac{1}{\sqrt{2^n n!}} H_n\left(\beta x\right) \psi_0\left(x\right)\]

For reasons of simplicity in some other constants, we have replaced the constant \(\alpha\) with \(\frac{1}{2}\beta^2\), so:

\[\beta\equiv\sqrt{2\alpha} = \left(\frac{\kappa m}{\hbar^2}\right)^{\frac{1}{4}}\]

And for the sake of having a few of the lower energy eigenfunctions available to work with, here are a few of the Hermite polynomials:

\[\begin{array}{ll} H_0\left(\beta x\right) & = 1 \\ H_1\left(\beta x\right) & = 2\beta x \\ H_2\left(\beta x\right) & = 4\beta^2x^2-2 \\ H_3\left(\beta x\right) & = 8\beta^3x^3 - 12\beta x\end{array}\]
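With these polynomials in hand, the first four eigenfunctions can be built explicitly. The sketch below (illustrative units with \(\beta=1\), i.e. \(\hbar=m=\kappa=1\)) constructs them from the formulas above and confirms that \(\psi_n\) has exactly \(n\) nodes and unit norm.

```python
import numpy as np
from math import factorial

# Sketch: construct psi_0 .. psi_3 from the Hermite polynomials listed
# above. beta = 1 is an illustrative choice (hbar = m = kappa = 1).
beta = 1.0
hermites = {0: lambda u: np.ones_like(u),
            1: lambda u: 2 * u,
            2: lambda u: 4 * u**2 - 2,
            3: lambda u: 8 * u**3 - 12 * u}

def psi_n(n, x):
    """Normalized harmonic-oscillator eigenfunction for n = 0..3."""
    norm = np.sqrt(beta / (2**n * factorial(n) * np.sqrt(np.pi)))
    return norm * hermites[n](beta * x) * np.exp(-(beta * x)**2 / 2)

x = np.linspace(-6.0, 6.0, 10000)   # grid chosen so no node lands exactly on it
dx = x[1] - x[0]
for n in range(4):
    y = psi_n(n, x)
    nodes = int(np.sum(y[:-1] * y[1:] < 0))   # sign changes = axis crossings
    norm = np.sum(y**2) * dx
    print(n, nodes, round(norm, 6))           # n nodes, norm ~ 1
```

Counting sign changes of \(\psi_n\) on the grid reproduces the node-counting argument above: the order of the Hermite polynomial sets the number of axis crossings.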

It's pretty obvious that \(\psi_{n_1}\left(x\right)\) is orthogonal to \(\psi_{n_2}\left(x\right)\) when \(n_1\) is odd and \(n_2\) is even, or vice-versa, since the overlap integral will be between an odd and even function and will therefore vanish. But what is truly amazing about these polynomials is that the property of *all* eigenstates being orthogonal holds, which means the integral is zero if \(n_1\ne n_2\), even when they are both odd or both even.
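The non-obvious case – two states of the *same* parity – is easy to check numerically. This sketch (again in illustrative units where \(\beta=1\)) shows that the overlap of \(\psi_0\) and \(\psi_2\), both even functions, still vanishes.

```python
import numpy as np
from math import factorial

# Sketch: verify that psi_0 and psi_2 -- both even functions -- are still
# orthogonal. Illustrative units: hbar = m = kappa = 1, so beta = 1.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2)

psi0 = (1.0 / np.sqrt(np.pi))**0.5 * gauss
psi2 = (1.0 / (2**2 * factorial(2) * np.sqrt(np.pi)))**0.5 * (4 * x**2 - 2) * gauss

overlap = np.sum(psi0 * psi2) * dx
print(overlap)   # ~0, even though evenness alone does not force this
```

The cancellation here is exact: \(\int(4x^2-2)e^{-x^2}dx = 4\cdot\frac{\sqrt{\pi}}{2} - 2\sqrt{\pi} = 0\), a delicate balance built into the Hermite coefficients.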

## Energy Spectrum

If we plug \(\psi_n\left(x\right)\) into Schrödinger's equation, the eigenvalues \(E_n\) come out. As complicated as the eigenfunctions and the operators in the Schrödinger equation are, the energy spectrum comes out remarkably simple:

\[E_n = \left(n+\frac{1}{2}\right)\hbar\;\omega_c, ~~~~~ n=0,1,2,\dots,~~~~~ \omega_c \equiv \sqrt\frac{\kappa}{m} = \text{angular frequency of classical oscillator} \]
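To make the contrast with the \(n^2\) spectrum of the infinite square well concrete, here is a minimal sketch in illustrative units (\(\hbar = \kappa = m = 1\), so \(\omega_c = 1\)):

```python
# Sketch: the harmonic-oscillator levels are evenly spaced. Illustrative
# units: hbar = 1 and kappa = m = 1, so omega_c = sqrt(kappa/m) = 1.
hbar, kappa, m = 1.0, 1.0, 1.0
omega_c = (kappa / m) ** 0.5

E = [(n + 0.5) * hbar * omega_c for n in range(5)]
gaps = [E[i + 1] - E[i] for i in range(len(E) - 1)]
print(E)     # [0.5, 1.5, 2.5, 3.5, 4.5]
print(gaps)  # every gap equals hbar * omega_c
```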

Alert

*The symbol \(\omega_c\) should not be confused with the other Greek letter omega that we use in the quantum phase time-dependence: \(\omega_n=\frac{E_n}{\hbar}\).*

## Uncertainties

We can compute uncertainties for the usual suspects (position, momentum, and energy) in the energy eigenstates in the standard way – by performing the expectation integrals and plugging them into the formula for uncertainty. But we can be more clever than that. We start by noting that the kinetic energy operator in the Schrödinger equation is quadratic in \(\widehat p\), and the potential energy operator is quadratic in \(\widehat x\). Given the reciprocal relationship we know exists between these two quantities (think of the Fourier transform and its inverse!), it is not a stretch (though we will not show it mathematically here) to claim that the average potential energy equals the average kinetic energy. This is in fact also true over a full oscillation of a mass on a spring in classical physics. Given this, we can take some shortcuts.

Using the fact that the expectation value of the total energy for a given energy eigenstate is simply the energy eigenvalue, we can deduce that the average kinetic and potential energies are half the energy eigenvalue:

\[ E_n = \left<E\right>_n = \left<KE+PE\right>_n = \left<KE\right>_n + \left<PE\right>_n \;\;\; \Rightarrow \;\;\; \left<KE\right>_n = \left<PE\right>_n =\frac{1}{2} E_n = \frac{1}{2}\left(n+\frac{1}{2}\right)\hbar\;\omega_c \]

We can carry this result into finding the uncertainty in position and momentum as well. Start by noting that symmetry demands that the expectation values of position and momentum are both zero, since the probability density (in both position and momentum space) is symmetric about the origin. This means that the uncertainty in these values depends only upon the expectation value of their squares. But these squares are proportional to the potential and kinetic energies, so we get answers without ever performing a gaussian integral:

\[ \begin{array}{l} \left<x^2\right>_n = \dfrac{2}{\kappa} \left<PE\right>_n = \left(n+\frac{1}{2}\right)\hbar\left(\dfrac{\omega_c}{\kappa}\right) = \left(n+\frac{1}{2}\right)\hbar\dfrac{1}{\sqrt{\kappa m}} \\ \left<p^2\right>_n = 2m \left<KE\right>_n = \left(n+\frac{1}{2}\right)\hbar\left(m{\omega_c} \right)= \left(n+\frac{1}{2}\right)\hbar\sqrt{\kappa m} \end{array} \]

Plugging into the uncertainty equations:

\[ \begin{array}{l} \Delta x = \sqrt{\left<x^2\right>} = \sqrt{\left(n+\frac{1}{2}\right)\hbar}\;\left(\kappa m\right)^{-\frac{1}{4}} \\ \Delta p = \sqrt{\left<p^2\right>} = \sqrt{\left(n+\frac{1}{2}\right)\hbar}\;\left(\kappa m\right)^\frac{1}{4} \end{array} \]

Whenever we have the uncertainties for position and momentum, it is natural to want to test the uncertainty principle. So multiplying these together gives:

\[ \Delta x \Delta p = \left(n+\frac{1}{2}\right)\hbar \]
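This product can also be confirmed directly from the wave functions, without using the energy shortcut above. The following sketch (illustrative units \(\hbar=m=\kappa=1\), so \(\beta=1\)) computes \(\left<x^2\right>\) and \(\left<p^2\right>\) by numerical integration for \(n=0\) and \(n=2\).

```python
import numpy as np
from math import factorial

# Sketch: numerically check Delta-x * Delta-p = (n + 1/2) hbar for n = 0, 2.
# Illustrative units: hbar = m = kappa = 1, so beta = 1. Since <x> = <p> = 0
# by symmetry, (Delta x)^2 = <x^2> and (Delta p)^2 = <p^2> = -<psi|d^2/dx^2|psi>.
x = np.linspace(-8, 8, 8001)
dx = x[1] - x[0]

def psi(n):
    """Normalized eigenfunction for n = 0 or 2 (Hermite polynomials above)."""
    H = {0: np.ones_like(x), 2: 4 * x**2 - 2}[n]
    return (np.sqrt(1.0 / (2**n * factorial(n) * np.sqrt(np.pi)))
            * H * np.exp(-x**2 / 2))

for n in (0, 2):
    p = psi(n)
    exp_x2 = np.sum(x**2 * p**2) * dx            # <x^2>
    d2p = np.gradient(np.gradient(p, dx), dx)    # second derivative of psi
    exp_p2 = -np.sum(p * d2p) * dx               # <p^2> with hbar = 1
    print(n, np.sqrt(exp_x2 * exp_p2))           # ~ n + 1/2
```

The brute-force integrals land on the same \(\left(n+\frac{1}{2}\right)\hbar\) that the energy argument produced, which is a good consistency check on both routes.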

The uncertainties both get bigger as the energy level goes up, so the ground state represents the smallest value of this product, and it turns out that the ground state of the harmonic oscillator (\(n=0\)) provides the very limit of the uncertainty principle!

## Why This Potential?

It is natural to ask why we are studying this potential at all. After all, quantum particles are not attached to each other by tiny springs. Is this just an exercise to solve a problem with no practical application? Not at all! In fact this is probably the *most* applicable of the models we look at in introductory quantum theory. The reason is that when particles are bound to each other, the potential energy curve forms a well that is quite similar to that of a spring potential. We actually covered this fact already in Physics 9HA, when we discussed modeling particle bonds as springs. We can use this process to estimate the energy spectrum for bonds between particles for which we have a good idea of the potential energy function. We simply find the equivalent spring constant for the bond in question, call that value "\(\kappa\)", and use the results that we derived here.