
3.3: Harmonic Oscillator

    Basic Features

    As we did with the particle-in-a-box, we'll start with a review of the basic features of the quantum harmonic oscillator. Unlike the particle-in-a-box, the first treatment of this potential didn't include the position-space wave functions (other than their general features), so this review will be quite brief. Let's start with the stationary-state Schrödinger equation in position-space:

    \[ -\dfrac{\hbar^2}{2m} \dfrac{d^2}{dx^2} \psi_n\left(x\right) + \frac{1}{2}\kappa x^2 \; \psi_n\left(x\right) = E_n\; \psi_n\left(x\right) \]


    Note that the spring constant for this potential is represented by the Greek letter kappa (\(\kappa\)), to distinguish it from the ubiquitous variable \(k\) that we use to represent the wave number.

    We can employ many of the properties of wave functions and their energy spectra to get some sense of what the position-space wave functions for the energy eigenstates look like:

    1. potential is infinite – Like the infinite square well, this potential will have an infinite number of energy levels.
    2. energy levels will be quantized and ground state is non-zero – Something we see for all bound states.  As with the other wells we have seen, this comes about because we have to fit the interior wave function perfectly between the barriers while matching boundary conditions.
    3. symmetry flips every time we go up another energy level – The ground state should be an even function, the first excited state an odd function, etc.
    4. potential grows to infinity, but for any given energy level the “wall” is finite – The boundary conditions for the wave function do not require that it vanish at the classical stopping points, as they did for the box, because the walls are not infinitely steep. The wave function should therefore “leak” into the walls, giving the particle a non-zero probability of being found in the classically-forbidden region.

    There are some ways that these wave functions should differ from those for the infinite square well:

    1. gap between the walls grows as the energy level grows – As usual, an antinode is added every time we go up an energy level, in order to alternate between even and odd functions. For the infinite square well, this was easy to account for, since the distance between the walls never changed. The wavelength change in going from level \(n\) to level \(n+1\) was a reduction by a factor of \(\frac{n}{n+1}\). But for the harmonic oscillator potential, the classical turning points get farther apart as the energy grows. So while each energy level requires an additional half-wavelength, those wavelengths don't need to shrink as fast as levels rise to fit between the turning points. This means that the jumps between energy levels will not be as great for the harmonic oscillator as they were for the infinite square well (which was proportional to \(n^2\)).
    2. classical limit (very high energy levels) is different from infinite square well – As the energy levels in the box get higher, the number of antinodes increase, and at very high energies, there are antinodes virtually everywhere within the box. Every antinode corresponds to an equal probability amplitude, so at very high energies the probability distribution is uniform. The high-energy limit is called the classical limit, and indeed for the box we get the correct result. We don't yet know what the wave functions will look like for the harmonic oscillator, but classically we do not expect the probability distribution to be uniform for a mass on a spring, as the mass spends significantly more time near the turning points than near the center. So the stationary-state wave functions for very high energies will not converge to a uniform distribution, and in fact should peak at the classical turning points.
    3. potential is not constant within the well – For the infinite square well, within the well the potential is constant (zero), making the solution in that region a combination of two opposite-moving plane waves.  In this case, the potential changes continually, so we expect that we’ll need to sum an infinite number of plane waves.  It isn’t clear what the spectral content of the full stationary-state wave function will be, and we won’t solve this problem from scratch, but we will examine the solution nonetheless, as it has some illuminating features.
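The classical expectation in point 2 above is easy to check numerically. The sketch below (a hypothetical illustration, with an arbitrary amplitude and frequency) samples a classical simple-harmonic trajectory at uniform times and histograms the positions; the resulting distribution is the time spent per unit length, which peaks near the turning points and is smallest at the center:

```python
import numpy as np

# Sample the classical trajectory x(t) = A sin(w t) at uniform times over one
# period; the fraction of samples in each position bin approximates the
# classical probability distribution (time spent per unit length).
A, w = 1.0, 1.0                      # arbitrary amplitude and frequency (assumed)
t = np.linspace(0.0, 2 * np.pi / w, 200001)
x = A * np.sin(w * t)

counts, edges = np.histogram(x, bins=20, range=(-A, A), density=True)

# The distribution peaks at the turning points (|x| near A) and is smallest at
# the center, matching the analytic result 1 / (pi * sqrt(A^2 - x^2)).
print(counts[0], counts[10], counts[-1])
```

The center-bin value lands near \(1/\pi \approx 0.318\), the value of the analytic classical distribution at \(x=0\), while the edge bins are several times larger.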

    A solution of Equation 3.3.1 with proper boundary conditions yields stationary-state wave functions and an energy spectrum consistent with the above observations. We'll return to the wave functions shortly, but here's a reminder of the energy spectrum – the energy levels turn out to be equally-spaced, with (of course) a non-zero ground state:

    \[E_n = \left(n+\frac{1}{2}\right)\hbar\;\omega_c, \;\;\;\;\; n=0,1,2,\dots, \;\;\;\;\; \omega_c \equiv \sqrt{\frac{\kappa}{m}} = \text{angular frequency of classical oscillator} \]


    It is standard practice for the energy spectrum of the harmonic oscillator to denote the ground state as \(n=0\), rather than \(n=1\), as we did with the particle-in-a-box. Also, the symbol \(\omega_c\) should not be confused with the omega that we use in the quantum phase time-dependence: \(\omega_n=\frac{E_n}{\hbar}\).
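A quick numerical sketch makes the contrast with the infinite square well concrete (natural units \(\hbar=\kappa=m=1\) are an assumption here, purely for illustration): the oscillator gaps are constant, while the box gaps grow linearly with \(n\).

```python
import numpy as np

hbar, kappa, m = 1.0, 1.0, 1.0       # natural units, assumed for illustration
w_c = np.sqrt(kappa / m)             # classical angular frequency

n = np.arange(6)
E_ho = (n + 0.5) * hbar * w_c        # harmonic oscillator: equally spaced levels
E_box = (n + 1) ** 2                 # infinite square well, in units of its ground-state energy

print(np.diff(E_ho))   # constant gaps of hbar * w_c
print(np.diff(E_box))  # gaps grow linearly with n
```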

    Wave Functions

    We will discuss a clever way of deriving the stationary-state wave functions below, but we will start here by simply stating the ground state wave function in position space. The functional form is that of a gaussian (\(f\left(x\right)=e^{-\alpha x^2}\)), which when normalized looks like:

    \[\psi _o\left(x\right) = \left(\frac{\beta}{\sqrt\pi} \right)^{\frac{1}{2}} e^{-\dfrac{\left(\beta x\right)^2}{2}},\;\;\;\;\;\beta \equiv \left( \frac{m\kappa}{\hbar ^2} \right)^{\frac{1}{4}} \]
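We can verify numerically that this gaussian is properly normalized, and also quantify the “leakage” into the classically-forbidden region mentioned earlier. In the sketch below (natural units \(\hbar=\kappa=m=1\) are an assumption, giving \(\beta=1\)), the ground-state turning point follows from \(\frac{1}{2}\kappa x_t^2 = E_0\), which gives \(x_t = 1/\beta\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

hbar, kappa, m = 1.0, 1.0, 1.0                  # natural units (assumption)
beta = (m * kappa / hbar**2) ** 0.25            # beta as defined in the text

def psi0(x):
    return np.sqrt(beta / np.sqrt(np.pi)) * np.exp(-(beta * x) ** 2 / 2)

# Check normalization: integral of |psi|^2 over all space should be 1
norm, _ = quad(lambda x: psi0(x) ** 2, -np.inf, np.inf)

# Classical turning point: (1/2) kappa x_t^2 = E_0 = hbar w_c / 2  =>  x_t = 1/beta
x_t = 1 / beta
p_forbidden, _ = quad(lambda x: psi0(x) ** 2, x_t, np.inf)
p_forbidden *= 2                                # count both tails

print(norm, p_forbidden)
```

The forbidden-region probability comes out to \(\mathrm{erfc}(1)\approx 0.157\): the ground-state particle is found beyond the classical turning points almost 16% of the time.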

    We might now ask what the ground state momentum-space wave function looks like. The thought of taking the Fourier transform of the wave function above may be a daunting one, but suppress your angst for just a moment, and consider what the Schrödinger equation looks like in momentum space. In this case, the position and momentum operators swap roles – the position operator becomes a derivative, and the momentum operator a function:

    \[ \widehat x = i\dfrac{d}{dk}, \;\; \widehat p = \hbar k \;\;\; \Rightarrow \;\;\; \widehat H = \dfrac{\widehat p^2}{2m} + \frac{1}{2}\kappa \widehat x^2  = \dfrac{\hbar^2 k^2}{2m} - \frac{1}{2}\kappa \dfrac{d^2}{dk^2} \]

    Plugging this into Schrödinger's equation for stationary states and rearranging things a little gives:

    \[ - \frac{\kappa}{2} \dfrac{d^2}{dk^2} \phi_n\left(k\right) + \frac{1}{2}\left(\dfrac{\hbar^2}{m}\right) k^2\; \phi_n\left(k\right) = E_n\; \phi_n\left(k\right) \]

    This differential equation bears a striking resemblance to the position-space version, Equation 3.3.1. There is a negative second derivative of the wave function, and a term where the wave function is multiplied by the square of the independent variable. In short, Schrödinger's equation – up to a few constants – is the same differential equation in position space and momentum space. Indeed, it turns out that a Fourier transform of a gaussian in one space results in a gaussian in the other space.
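That last claim is easy to check numerically without any hand integration. A sketch (again assuming natural units so \(\beta=1\)): since \(\psi_0\) is even, only the cosine part of the Fourier kernel survives, and the transform should satisfy \(\phi(k)/\phi(0) = e^{-k^2/2\beta^2}\), a gaussian in \(k\):

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0                                        # natural units (assumption)
psi0 = lambda x: (beta / np.sqrt(np.pi)) ** 0.5 * np.exp(-(beta * x) ** 2 / 2)

# Fourier transform: phi(k) = (1/sqrt(2 pi)) * integral of psi(x) e^{-ikx} dx.
# psi0 is even, so only the cosine part of the kernel survives.
def phi(k):
    val, _ = quad(lambda x: psi0(x) * np.cos(k * x), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

# A gaussian transforms to a gaussian: phi(k)/phi(0) should equal exp(-k^2/(2 beta^2))
ks = np.array([0.5, 1.0, 2.0])
ratios = np.array([phi(k) / phi(0) for k in ks])
print(ratios, np.exp(-ks ** 2 / (2 * beta ** 2)))
```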

    What about the other energy eigenstates? The ground state must have even symmetry about the origin, and indeed the gaussian wave function given above has this property. All the odd-numbered excited states must have odd symmetry, while all the even-numbered excited states have even symmetry (remember, the ground state is \(n=0\)). It turns out that the excited states differ from the ground state only by multiplying the gaussian by a polynomial, known as a Hermite polynomial. We won't worry yet about how to generate these polynomials, but it is possible to understand a few of their features just using what we know about wave functions of bound particles.

    First, the polynomials for odd-numbered states must include powers of \(x\) that are odd only. That way, when they multiply the symmetric gaussian, they will have the proper odd symmetry. Similarly, even-numbered states must involve only even powers of \(x\) (the ground state polynomial includes \(x^0\), which is an even power).

    Second, the number of nodes must go up by one with every increase in level. Nodes are crossings of the \(x\)-axis, and their number tracks the order of the polynomial. Therefore, the Hermite polynomials must look like:

    \[ \begin{array}{l} n=0: & H_0\left(x\right)=Ax^0 \\ n=1: & H_1\left(x\right)=Bx^1 \\ n=2: & H_2\left(x\right)=Cx^0+Dx^2 \\ n=3: & H_3\left(x\right)=Ex^1+Fx^3 \\ \vdots & \vdots \end{array} \]

    One can derive the unknown constants in these polynomials in a brute-force manner by multiplying them by the gaussian, and plugging the result into the differential equation, along with their corresponding energy eigenvalues. We should probably be a bit more precise about what we mean by the Hermite polynomial "multiplying the gaussian," so here is the actual formula for the wave function of the \(n^{th}\) energy eigenstate in position space, in terms of \(H_n\left(x\right)\):

    \[ \psi_n\left(x\right)=\left(\dfrac{\beta}{2^n n! \sqrt{\pi}}\right)^\frac{1}{2} H_n\left(x\right) e^{-\dfrac{\left(\beta x\right)^2}{2}} = \dfrac{1}{\sqrt{2^n n!}} H_n\left(x\right) \psi_o\left(x\right), \]

    where \(\beta\) is the same constant that is defined above.
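These wave functions are readily available numerically, so we can confirm two of the features argued above: the eigenstates are orthonormal, and \(\psi_n\) has exactly \(n\) nodes. A sketch in natural units (\(\hbar=\kappa=m=1\), so \(\beta=1\); the Hermite polynomial is evaluated at \(\beta x\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

beta = 1.0   # natural units kappa = m = hbar = 1 (assumption)

def psi(n, x):
    # nth energy eigenstate; the Hermite polynomial is evaluated at beta*x
    norm = sqrt(beta / (2 ** n * factorial(n) * sqrt(pi)))
    return norm * eval_hermite(n, beta * x) * np.exp(-(beta * x) ** 2 / 2)

# Orthonormality: <psi_m | psi_n> = delta_mn
overlaps = np.array([[quad(lambda x: psi(m, x) * psi(n, x), -np.inf, np.inf)[0]
                      for n in range(4)] for m in range(4)])

# Node count: the number of sign changes of psi_n should equal n
xg = np.linspace(-5, 5, 2000)
nodes = [int(np.sum(np.diff(np.sign(psi(n, xg))) != 0)) for n in range(4)]
print(np.round(overlaps, 6), nodes)
```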


    We can compute uncertainties for the usual suspects (position, momentum, and energy) in the energy eigenstates in the standard way – by performing the expectation integrals and plugging into the formula for uncertainty. But we are more clever than that. We start by recalling that the position and momentum energy eigenstate wave functions are essentially the same, the potential energy in position space looks like the kinetic energy in momentum space, and so on. It is therefore not a stretch (though we will not show it mathematically here) to claim that the average potential energy equals the average kinetic energy (this is even true classically!).

    With that being the case, we use the fact that the expectation value of the total energy for a given energy eigenstate is simply the energy eigenvalue (not much to average when only one result is ever measured) to claim that the average kinetic and potential energies are half the energy eigenvalue:

    \[ E_n = \left<E\right>_n = \left<KE+PE\right>_n = \left<KE\right>_n + \left<PE\right>_n \;\;\; \Rightarrow \;\;\;  \left<KE\right>_n =  \left<PE\right>_n =\frac{1}{2} E_n = \frac{1}{2}\left(n+\frac{1}{2}\right)\hbar\;\omega_c \]

    We carry this result into finding the uncertainty in position and momentum as well. Start by noting that symmetry demands that the expectation values of position and momentum are both zero, since the probability density (in both position and momentum space) is symmetric about the origin. This means that the uncertainty in these values depends only upon the expectation value of their squares. But these squares are proportional to the potential and kinetic energies, so we get answers without ever performing a gaussian integral:

    \[ \begin{array}{l} \left<x^2\right>_n = \dfrac{2}{\kappa} \left<PE\right>_n = \left(n+\frac{1}{2}\right)\hbar\left(\dfrac{\omega_c}{\kappa}\right) = \left(n+\frac{1}{2}\right)\hbar\dfrac{1}{\sqrt{\kappa m}} \\ \left<p^2\right>_n = 2m \left<KE\right>_n = \left(n+\frac{1}{2}\right)\hbar\left(m{\omega_c} \right)= \left(n+\frac{1}{2}\right)\hbar\sqrt{\kappa m} \end{array} \]

    Plugging into the uncertainty equations:

    \[ \begin{array}{l} \Delta x = \sqrt{\left<x^2\right>} = \sqrt{\left(n+\frac{1}{2}\right)\hbar}\;\left(\kappa m\right)^{-\frac{1}{4}} \\ \Delta p = \sqrt{\left<p^2\right>} = \sqrt{\left(n+\frac{1}{2}\right)\hbar}\;\left(\kappa m\right)^\frac{1}{4} \end{array} \]

    Whenever we have the uncertainties for position and momentum, it is natural to want to test the uncertainty principle:

    \[ \Delta x  \Delta p = \left(n+\frac{1}{2}\right)\hbar \]

    The uncertainties both get bigger as the energy level goes up, so the ground state represents the smallest value of this product, and it turns out that the ground state of the harmonic oscillator (\(n=0\)) provides the very limit of the uncertainty principle!
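Although the whole point above was to avoid gaussian integrals, it is reassuring to grind them out once numerically and confirm \(\Delta x\,\Delta p = \left(n+\frac{1}{2}\right)\hbar\). A sketch in natural units (\(\hbar=\kappa=m=1\), an assumption), computing \(\left<p^2\right>\) as \(\hbar^2\int (\psi_n')^2\,dx\) (integration by parts) with a finite-difference derivative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

hbar = kappa = m = 1.0   # natural units (assumption); then beta = 1 and omega_c = 1

def psi(n, x):
    norm = sqrt(1.0 / (2 ** n * factorial(n) * sqrt(pi)))
    return norm * eval_hermite(n, x) * np.exp(-x ** 2 / 2)

products = []
for n in range(3):
    # <x^2>_n by direct integration (<x>_n = 0 by symmetry)
    x2 = quad(lambda x: x ** 2 * psi(n, x) ** 2, -np.inf, np.inf)[0]
    # <p^2>_n = hbar^2 * integral of (dpsi/dx)^2, with a central-difference derivative
    h = 1e-5
    p2 = quad(lambda x: ((psi(n, x + h) - psi(n, x - h)) / (2 * h)) ** 2, -10, 10)[0]
    products.append(x2 * p2)        # (Delta x)^2 (Delta p)^2

print(products)   # expect (n + 1/2)^2 in these units: 0.25, 2.25, 6.25
```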

    It's also interesting to note that the probability density (square of the wave function) of the ground state is the classic "normal" distribution from statistics, with the uncertainty in \(x\) being the standard deviation. Furthermore, if one performs the calculation in classical mechanics of the average distance that an object in simple harmonic motion is from the equilibrium point, one finds that it is \(\frac{1}{\sqrt 2}\) times the maximum distance (amplitude of the motion). The quantum-mechanical uncertainty in \(x\) for the ground state is that same fraction of the distance from the center to the classical turning point.
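That last fraction follows from the formulas above in a few lines of arithmetic; here is a minimal sketch in natural units (\(\hbar=\kappa=m=1\), an assumption), comparing the ground-state \(\Delta x\) with the classical amplitude \(A\) of an oscillator carrying the energy \(E_0\):

```python
import math

hbar = kappa = m = 1.0                  # natural units (assumption)
w_c = math.sqrt(kappa / m)

E0 = 0.5 * hbar * w_c                   # ground-state energy
A = math.sqrt(2 * E0 / kappa)           # classical amplitude at that energy: (1/2) kappa A^2 = E0
dx = math.sqrt(0.5 * hbar) * (kappa * m) ** -0.25   # Delta x for n = 0 from the formula above

print(dx / A)   # 1/sqrt(2)
```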

    Energy Basis

    There exists a clever approach to describing the quantum harmonic oscillator that can save a great deal of work compared to the alternative of dealing with ugly gaussian integrals in either position or momentum space. This method has broad applications in theoretical physics, since so many advanced topics are modeled after the harmonic oscillator. This approach starts with the Hilbert space operators for position and momentum. We found (Equation 2.5.16) that these two operators satisfy the commutation relation:

    \[ \left[\;X\;,\;P\;\right] = XP-PX = i\hbar\]

    Consider the following dimensionless operator built from the position and momentum operators:

    \[ a \equiv \dfrac{1}{\sqrt{2\hbar}}\left[\left(\kappa m\right)^\frac{1}{4} X + i\; \left(\kappa m\right)^{-\frac{1}{4}} P\right] \]

    A quick glance at Equation 3.3.3 should convince the reader that this constructed operator is indeed dimensionless. The position and momentum operators give real eigenvalues, so they are self-adjoint (i.e. \(X^\dagger = X\) and \(P^\dagger = P\)), which means that the adjoint of our constructed operator is:

    \[ a^\dagger = \dfrac{1}{\sqrt{2\hbar}}\left[\left(\kappa m\right)^\frac{1}{4} X - i\; \left(\kappa m\right)^{-\frac{1}{4}} P\right] \]

    Now let's construct another operator \(N\) from these, by taking their product:

    \[ N \equiv a^\dagger a = \dfrac{1}{\sqrt{2\hbar}}\left[\left(\kappa m\right)^\frac{1}{4} X - i\; \left(\kappa m\right)^{-\frac{1}{4}} P\right] \cdot \dfrac{1}{\sqrt{2\hbar}}\left[\left(\kappa m\right)^\frac{1}{4} X + i\; \left(\kappa m\right)^{-\frac{1}{4}} P\right] \]

    Multiplying this out, and remembering that the \(X\) and \(P\) operators don't commute, we get:

    \[ N = \dfrac{1}{2\hbar}\left[ \sqrt{\kappa m}\; X^2 + \dfrac{1}{\sqrt{\kappa m}}\; P^2 + i\left(XP-PX\right) \right] \]

    Plugging Equation 3.3.12 in gives:

    \[ N = \dfrac{1}{2\hbar}\left[ \sqrt{\kappa m}\; X^2 + \dfrac{1}{\sqrt{\kappa m}}\; P^2 \right] - \frac{1}{2} \]

    So where is this going, you ask? Multiply both sides by \(\hbar \omega_c\), and use the definition of \(\omega_c\) given in Equation 3.3.1:

    \[ N \;\hbar\omega_c = \frac{1}{2} \omega_c\left[ \sqrt{\kappa m}\; X^2 + \dfrac{1}{\sqrt{\kappa m}}\; P^2 \right] - \frac{1}{2}\hbar \omega_c \;\;\; \Rightarrow \;\;\; \left(N+\frac{1}{2}\right)\hbar \omega_c =  \frac{1}{2}\kappa\; X^2 + \frac{1}{2m} P^2 \]

    We have constructed the hamiltonian operator for the harmonic oscillator potential in terms of a single operator \(N\) known as the number operator. We know that when operated on the \(n^{th}\) energy eigenstate, the hamiltonian operator creates an eigenvalue equation, meaning:

    \[ H\left|\;E_n\;\right> = E_n\left|\;E_n\;\right> = \left(n+\frac{1}{2}\right)\hbar \omega_c\left|\;E_n\;\right> \;\;\; \Rightarrow \;\;\; \left( \frac{1}{2}\kappa\; X^2 + \frac{1}{2m} P^2\right)\left|\;E_n\;\right> = \left(N+\frac{1}{2}\right)\hbar \omega_c \left|\;E_n\;\right> \]

    This shows that the energy eigenstate is also an eigenstate of the number operator, with the eigenvalue being the energy level:

    \[ N\left|\;E_n\;\right> =n\left|\;E_n\;\right>\]

    As cute as this is, its true value comes from reversing the process: We can invert Equations 3.3.13 and 3.3.14 to write the operators \(X\) and \(P\) in terms of \(a\) and \(a^\dagger\). This allows us to work entirely within the energy basis, even when computing expectation values of functions of position and momentum – no more gaussian integrals! To see how this works, we need a bit more information about how these dimensionless operators act on the energy eigenvector.

    We start with the commutation relation between \(a\) and \(a^\dagger\). We have already computed the product in one order (\(a^\dagger a\)): it gives \(N\), as shown in Equation 3.3.16. In the opposite order (\(a a^\dagger\)), the \(X^2\) and \(P^2\) terms are unchanged, but the constant \(-\frac{1}{2}\) term changes sign. Forming the commutator then gives a very simple result:

    \[ \left[\;a\;,\;a^\dagger\;\right] = 1\]

    Using this commutator and the effect of the number operator on the energy eigenstates, we can show (sparing you the tedious algebra details) that the following operations hold:

    \[ \begin{array}{l} a\;\left|\;E_n\;\right> = \sqrt{n}\;\left|\;E_{n-1}\;\right> \\ a^\dagger\;\left|\;E_n\;\right> = \sqrt{n+1}\;\left|\;E_{n+1}\;\right> \end{array} \]

    The operator \(a\) is called the lowering operator, as it has the effect of lowering the energy eigenstate to the next lowest one, while \(a^\dagger\) is called the raising operator.
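All of these relations can be checked concretely by representing \(a\) as a matrix in the energy basis, with \(\sqrt{n}\) entries just above the diagonal. A sketch (the basis must be truncated at some finite dimension, an assumption of the illustration; truncation spoils only the last diagonal entry of the commutator):

```python
import numpy as np

N_DIM = 8                                   # truncated energy basis (assumption)
a = np.diag(np.sqrt(np.arange(1, N_DIM)), k=1)   # lowering operator: a[m, m+1] = sqrt(m+1)
adag = a.T                                  # raising operator (real matrix, so adjoint = transpose)

# a |E_3> = sqrt(3) |E_2>
e3 = np.zeros(N_DIM); e3[3] = 1.0
print(a @ e3)                               # sqrt(3) appears in slot 2

# N = a† a is diagonal with eigenvalues 0, 1, 2, ...
Nop = adag @ a
print(np.diag(Nop))

# [a, a†] = 1, exact except the last diagonal entry (an artifact of truncation)
comm = a @ adag - adag @ a
print(np.diag(comm))
```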

    Example \(\PageIndex{1}\)

    Confirm that the actions of the raising and lowering operators given in Equation 3.3.22 are consistent with Equations 3.3.20 and 3.3.21.


    Solution

    Operating on an eigenstate first with \(a\) and then with \(a^\dagger\) gives:

    \[ N\;\left|\;E_n\;\right> = a^\dagger\left[ \;a\;\left|\;E_n\;\right>\right] = a^\dagger \left[ \sqrt{n}\;\left|\;E_{n-1}\;\right>\right] = \sqrt{n}\left[\;a^\dagger\;\left|\;E_{n-1}\;\right>\right] = \sqrt{n}\left[\sqrt{\left(n-1\right)+1}\;\left|\;E_{\left(n-1\right)+1}\;\right>\right] = n\left|\;E_n\;\right> \nonumber \]

    Repeating in the opposite order gives:

    \[ a\left[ \;a^\dagger\;\left|\;E_n\;\right>\right] = a \left[ \sqrt{n+1}\;\left|\;E_{n+1}\;\right>\right] = \sqrt{n+1}\left[\;a\;\left|\;E_{n+1}\;\right>\right] = \sqrt{n+1}\left[\sqrt{\left(n+1\right)}\;\left|\;E_{\left(n+1\right)-1}\;\right>\right] = \left(n+1\right)\left|\;E_n\;\right> \nonumber \]

    Subtracting the first operation from the second gives:

    \[ \left(a a^\dagger - a^\dagger a\right)\left|\;E_n\;\right> = \left(n+1 - n\right)\left|\;E_n\;\right> = 1 \; \left|\;E_n\;\right> \nonumber \]

    With \(a\) and \(a^\dagger\) acting so simply in this basis, the advantage of writing the energy, momentum, and position operators in terms of them should be clear – this basis requires no integrating. For the sake of completeness, here are the position and momentum operators in terms of the raising and lowering operators:

    \[ \begin{array}{l} X = \sqrt{\dfrac{\hbar}{2}}\left(\kappa m\right)^{-\frac{1}{4}} \left( a + a^\dagger \right) \\ P = -i\;\sqrt{\dfrac{\hbar}{2}}\left(\kappa m\right)^\frac{1}{4} \left( a - a^\dagger \right) \end{array} \]
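To see the payoff, we can build \(X\) from the matrix form of \(a\) and read off \(\left<x^2\right>_n\) as diagonal elements of \(X^2\), reproducing Equation 3.3.18 with no integrals at all. A sketch in natural units (\(\hbar=\kappa=m=1\), an assumption), again with a truncated basis:

```python
import numpy as np

hbar = kappa = m = 1.0                       # natural units (assumption)
N_DIM = 10                                   # truncated basis (assumption)
a = np.diag(np.sqrt(np.arange(1, N_DIM)), k=1)
adag = a.T

X = np.sqrt(hbar / 2) * (kappa * m) ** -0.25 * (a + adag)
P = -1j * np.sqrt(hbar / 2) * (kappa * m) ** 0.25 * (a - adag)

# Expectation values <x^2>_n are just diagonal elements of X @ X -- no integrals.
# (Only the entries well below the truncation edge are trustworthy.)
x2 = np.real(np.diag(X @ X))
print(x2[:5])    # should be (n + 1/2) * hbar / sqrt(kappa m) = 0.5, 1.5, 2.5, ...
```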

    Using the Energy Basis

    After a short time playing around with the energy basis, the question naturally arises, "What happens if I keep applying the lowering operator? Specifically, what happens if I apply the lowering operator to the ground state?" Well, the answer is to just plug in (remember, the ground state is \(n=0\)):

    \[ a \;\left|\;E_o\;\right> = \sqrt{0}\;\left|\;E_{-1}\;\right> = 0\]

    Naturally there is no state of \(n=-1\), but that's okay, because we get zero anyway. Knowing that the lowering operator acting on the ground state gives zero makes solving for the ground state wave function in the position basis easier than trying to solve Schrödinger's equation, because it reduces the second-order differential equation to one that is first order:

    \[ \begin{array}{rcl} 0 & = & \left<\;x\;\right| \; a \; \left|\;E_o\;\right> = \widehat a_x\; \psi_o\left(x\right) = \dfrac{1}{\sqrt{2\hbar}}\left[\left(\kappa m\right)^\frac{1}{4} \widehat x + i\; \left(\kappa m\right)^{-\frac{1}{4}} \widehat p\right]\psi_o\left(x\right) \\ 0 & = & \sqrt{\kappa m}\; x \;\psi_o\left(x\right) +\hbar \dfrac{d}{dx} \psi_o\left(x\right)  \end{array}\]

    This differential equation is not hard to solve, as the wave function can be brought to one side of the equation, and the variable \(x\) to the other, and both sides integrated:

    \[ \dfrac {d\psi_o}{\psi_o} = -\dfrac{\sqrt{\kappa m}}{\hbar}\;x\;dx \;\;\; \Rightarrow \;\;\; \ln \psi_o\left(x\right) = -\frac{1}{2} \dfrac{\sqrt{\kappa m}}{\hbar} x^2 + const\]

    Using both sides of this equation as an exponent for \(e\) and normalizing the resulting wave function (which takes care of the integration constant) results in Equation 3.3.3.
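The first-order equation can also be integrated numerically as a sanity check. A sketch (natural units \(\hbar=\kappa=m=1\), an assumption, so \(\beta=1\)), starting from the normalized value \(\psi_o(0)=\pi^{-1/4}\) and comparing the result to the gaussian of Equation 3.3.3:

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = kappa = m = 1.0                       # natural units (assumption); beta = 1

# First-order equation from a|E_0> = 0:  hbar * dpsi/dx = -sqrt(kappa m) * x * psi
sol = solve_ivp(lambda x, y: [-np.sqrt(kappa * m) * x * y[0] / hbar],
                (0.0, 3.0), [np.pi ** -0.25],    # psi_0(0) from the normalized gaussian
                dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(0, 3, 7)
numeric = sol.sol(xs)[0]
analytic = np.pi ** -0.25 * np.exp(-xs ** 2 / 2)
print(np.max(np.abs(numeric - analytic)))    # agreement with the gaussian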

    We can also use the energy basis to derive the wave functions of the higher energy eigenstates, in a manner much easier than guessing the form of the polynomials and solving for constants, as described earlier. All that is required is to operate on the ground state wave function with the raising operator, which again includes the position-basis operators \(\widehat x\) and \(\widehat p\).
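For the first excited state this is a one-line computation, which we can carry out symbolically. A sketch in natural units (\(\hbar=\kappa=m=1\), an assumption), where the raising operator in the position basis reduces to \(\widehat a^\dagger = \left(x - \frac{d}{dx}\right)/\sqrt{2}\); applying it to \(\psi_o\) should reproduce the Hermite-polynomial formula for \(\psi_1\):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# Natural units hbar = kappa = m = 1 (assumption), so beta = 1
psi0 = sp.pi ** sp.Rational(-1, 4) * sp.exp(-x ** 2 / 2)

# Raising operator in the position basis: a† = (x - d/dx) / sqrt(2)
psi1_ladder = (x * psi0 - sp.diff(psi0, x)) / sp.sqrt(2)

# Compare with the Hermite-polynomial formula: psi_1 = H_1(x) psi_0 / sqrt(2 * 1!)
psi1_formula = sp.hermite(1, x) * psi0 / sp.sqrt(2)

print(sp.simplify(psi1_ladder - psi1_formula))   # 0
```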