Physics LibreTexts

The Simple Harmonic Oscillator

The Classical Simple Harmonic Oscillator 

The classical equation of motion for a one-dimensional simple harmonic oscillator with a particle of mass m attached to a spring having spring constant k is


\[ m\frac{d^2x}{dt^2} = -kx \]


The solution is


\[ x = x_{o} \sin( \omega t  + \delta)  ,  \quad \omega = \sqrt{\frac{k}{m}} \]


and the momentum p = mv has time dependence


\[ p = mx_{o}\omega\cos(\omega t + \delta) \]



The total energy


\[E = \frac{1}{2m}(p^2 + m^2\omega^2x^2) \]


is clearly constant in time. 


It is often useful to picture the time-development of a system in phase space, in this case a two-dimensional plot with position on the x-axis, momentum on the y-axis.  Actually, to have \((x,y)\) coordinates with the same dimensions, we use \((m \omega x, p) \).


It is evident from the above expression for the total energy that in these variables the point representing the system in phase space moves clockwise around a circle of radius \( \sqrt{2mE} \) centered at the origin.


Note that in the classical problem we could choose any point \((m \omega x, p) \), place the system there, and it would then move in a circle about the origin.  In the quantum problem, on the other hand, we cannot specify the initial coordinates \((m \omega x, p) \) precisely, because of the uncertainty principle. The best we can do is to place the system initially in a small cell in phase space, of size \(\Delta x \cdot \Delta p = \frac{\hbar}{2}\) .  In fact, we shall find that in quantum mechanics phase space is always divided into cells of essentially this size for each pair of conjugate variables.
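These phase-space statements are easy to check numerically. The sketch below (Python; the mass, spring constant, amplitude, and phase are all illustrative choices) steps the classical solution through time and confirms that the representative point \((m\omega x, p)\) stays on the circle of radius \(\sqrt{2mE}\):

```python
import math

m, k = 2.0, 8.0              # illustrative mass and spring constant
w = math.sqrt(k / m)         # angular frequency omega = sqrt(k/m)
x0, d = 0.5, 0.3             # illustrative amplitude and phase
E = 0.5 * m * w**2 * x0**2   # total energy

radii = []
for i in range(100):
    t = 0.1 * i
    x = x0 * math.sin(w * t + d)            # x(t)
    p = m * x0 * w * math.cos(w * t + d)    # p(t)
    radii.append(math.hypot(m * w * x, p))  # distance from origin in (m*w*x, p) plane

# every point sits on the circle of radius sqrt(2mE)
assert all(abs(r - math.sqrt(2 * m * E)) < 1e-9 for r in radii)
```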

Einstein’s Solution of the Specific Heat Puzzle

The simple harmonic oscillator, a nonrelativistic particle in a potential \(\frac{1}{2}kx^2\), is an excellent model for a wide range of systems in nature.  In fact, not long after Planck’s discovery that the black body radiation spectrum could be explained by assuming energy to be exchanged in quanta, Einstein applied the same principle to the simple harmonic oscillator, thereby solving a long-standing puzzle in solid state physics—the mysterious drop in specific heat of all solids at low temperatures. Classical thermodynamics, a very successful theory in many ways, predicted no such drop—with the standard equipartition of energy, \(kT\) in each mode (potential plus kinetic), the specific heat should remain more or less constant as the temperature was lowered (assuming no phase change). 


To explain the anomalous low temperature behavior, Einstein assumed each atom to be an independent (quantum) simple harmonic oscillator, and, just as for black body radiation, he assumed the oscillators could only absorb or emit energy in quanta.  Consequently, at low enough temperatures there is rarely sufficient energy in the ambient thermal excitations to excite the oscillators, and they freeze out, just as blue oscillators do in low temperature black body radiation.  Einstein’s picture was later somewhat refined—the basic set of oscillators was taken to be standing sound wave oscillations in the solid rather than individual atoms (making the picture even more like black body radiation in a cavity) but the main conclusion—the drop off in specific heat at low temperatures—was not affected.
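Einstein's freezing-out argument can be made quantitative with the specific heat per one-dimensional oscillator, \(C = k_B\,x^2 e^x/(e^x - 1)^2\) where \(x = \hbar\omega/k_BT\). A minimal sketch (with \(k_B\) and the Einstein temperature \(\hbar\omega/k_B\) set to 1, purely illustrative choices):

```python
import math

def einstein_heat_capacity(T):
    # Specific heat per 1-D oscillator in units of k_B, with the Einstein
    # temperature hbar*omega/k_B set to 1 (an illustrative choice)
    x = 1.0 / T
    return x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

# Equipartition would give C = k_B at every temperature; the quantum result
# approaches that at high T but freezes out at low T:
assert abs(einstein_heat_capacity(100.0) - 1.0) < 1e-3
assert einstein_heat_capacity(0.04) < 1e-6
```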

Schrödinger’s Equation and the Ground State Wave Function

From the classical expression for total energy given above, the Schrödinger equation for the quantum oscillator follows in standard fashion:

\[ -\frac{\hbar^2}{2m} \frac{d^2\psi(x) }{d x^2} + \frac{1}{2}m\omega^2x^2\psi(x) = E \psi(x) \]


What will the solutions to this Schrödinger equation look like?  Since the potential \(\frac{1}{2} m \omega^2x^2\) increases without limit on going away from x = 0, it follows that no matter how much kinetic energy the particle has, for sufficiently large x the potential energy dominates, and the (bound state) wavefunction decays with increasing rapidity for further increase in x.  (Obviously, for a real physical oscillator there is a limit on the height of the potential—we will assume that limit is much greater than the energies of interest in our problem.) 


We know that when a particle penetrates a barrier of constant height \(V_0\) (greater than the particle’s kinetic energy) the wave function decreases exponentially into the barrier, as \(e^{-\alpha x}\) , where \(\alpha = \sqrt{\frac{2m(V_0 - E)}{\hbar^2}}\) .  But, in contrast to this constant height barrier, the “height” of the simple harmonic oscillator potential continues to increase as the particle penetrates to larger x.  Obviously, in this situation the decay will be faster than exponential.  If we (rather naïvely) assume it is more or less locally exponential, but with a local \(\alpha\)  varying with \(V_0\), neglecting E relative to \(V_0\) in the expression for \(\alpha\)  suggests that \(\alpha\)  itself is proportional to x (since \(V \propto x^2\) , and \(\alpha \propto \sqrt{V}\) ), so maybe the wavefunction decays as \(e^{-(\mathrm{constant}) x^2}\)? 


To check this idea, we insert \(\psi(x) = e^{-x^2/2b^2}\) in the Schrödinger equation, using


\[\frac{d^2 \psi(x)}{dx^2} = -\frac{1}{b^2} \psi(x) + \frac{x^2}{b^4} \psi(x)\]


to find


\[-\frac{\hbar^2}{2m} \left (-\frac{1}{b^2} + \frac{x^2}{b^4} \right ) \psi(x) + \frac{1}{2}m\omega^2x^2\psi(x) = E\psi(x)\].


The \(\psi(x)\) is just a factor here, and it is never zero, so it can be cancelled out.  This leaves a quadratic expression which must have the same coefficients of \(x^0\), \(x^2\) on the two sides, that is, the coefficient of \(x^2\) on the left hand side must be zero:


\[\frac{\hbar^2}{2mb^4}=\frac{1}{2}m\omega^2 \; \Rightarrow \; b=\sqrt{\frac{\hbar}{m\omega}}\].


This fixes the wave function.  Equating the constant terms fixes the energy:


\[E=\frac{\hbar^2}{2mb^2} = \frac{1}{2}\hbar\omega\].


So the conjectured form for the wave function is in fact the exact solution for the lowest energy state!  (It’s the lowest state because it has no nodes.)


Also note that even in this ground state the energy is nonzero, just as it was for the square well.  The central part of the wave function must have some curvature to join together the decreasing wave function on the left to that on the right.  This “zero point energy” is sufficient in one physical case to melt the lattice—helium is liquid even down to absolute zero temperature (checked down to microkelvins!) because the wave function spread destabilizes the solid lattice that will form with sufficient external pressure.  
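The ground-state result above is easy to verify numerically: apply the Hamiltonian to the Gaussian with a finite-difference second derivative and read off the energy pointwise. A minimal sketch, in units chosen so \(\hbar = m = \omega = 1\) (illustrative):

```python
import math

hbar = m = w = 1.0                   # illustrative units
b = math.sqrt(hbar / (m * w))        # natural length scale

def psi(x):                          # conjectured (unnormalized) ground state
    return math.exp(-x**2 / (2 * b**2))

h = 1e-4                             # finite-difference step
for x in [0.0, 0.3, 0.7, 1.2]:
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    E = (-hbar**2 / (2 * m) * d2 + 0.5 * m * w**2 * x**2 * psi(x)) / psi(x)
    assert abs(E - 0.5 * hbar * w) < 1e-6   # E = hbar*omega/2 at every sample point
```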

Higher Energy States

It is clear from the above discussion of the ground state that \(b=\sqrt{\frac{\hbar}{m\omega}}\) is the natural unit of length in this problem, and \(\hbar \omega\) that of energy, so to investigate higher energy states we reformulate in dimensionless variables,

\[\xi = \frac{x}{b} = x\sqrt{\frac{m\omega}{\hbar}} \quad \text{and} \quad  \epsilon = \frac{E}{\hbar \omega}\].

Schrödinger’s equation becomes

\[\frac{d^2\psi(\xi)}{d\xi^2} = (\xi^2 - 2\epsilon)\psi(\xi)\].

Deep in the barrier, the \(\epsilon\) term will become negligible, and just as for the ground state wave function, higher bound state wave functions will have \(e^{-\xi^2/2}\)  behavior, multiplied by some more slowly varying factor (it turns out to be a polynomial).


Exercise: find the relative contributions to the second derivative from the two terms in \(x^n e^{-\frac{x^2}{2}}\).   For given \(n\), when do the contributions involving the first term become small?  Define “small”.

The standard approach to solving the general problem is to factor out the \(e^{-\frac{\xi^2}{2}}\) term,

\[\psi(\xi) = h(\xi)e^{-\frac{\xi^2}{2}}\]                                                                      

giving a differential equation for \(h(\xi)\) :

\[\frac{d^2 h}{d \xi ^2} - 2 \xi \frac{dh}{d\xi} + (2\epsilon -1)h = 0\]

We try solving this with a power series in \(\xi\)

\(h(\xi) = h_0 + h_1\xi + h_2 \xi^2 + \dots\)


Inserting this in the differential equation, and requiring that the coefficient of each power \(\xi^n\)  vanish identically, leads to a recurrence formula for the coefficients \(h_n\):

\[h_{n+2} = \frac{(2n+1-2\epsilon)}{(n+1)(n+2)}h_n\]



Evidently, the series of odd powers and that of even powers are independent solutions to Schrödinger’s equation.  (Actually this isn’t surprising: the potential is even in x, so the parity operator P commutes with the Hamiltonian. Therefore, unless states are degenerate in energy, the wave functions will be even or odd in x.)  For large n, the recurrence relation simplifies to

\[h_{n+2} \approx \frac{2}{n}h_n \quad \text{for} \quad n \gg \epsilon\].

The series therefore tends to

\[\sum\frac{2^n\xi^{2n}}{(2n-2)(2n-4)\cdots 2}=2\xi^2\sum\frac{\xi^{2(n-1)}}{(n-1)!} = 2\xi^2 e^{\xi^2}\].

Multiplying this by the \(e^{-\xi^2/2}\)  factor to recover the full wavefunction, we find that \(\psi\)  diverges for large \(\xi\) as \(e^{+\xi^2/2}\).


Actually we should have expected this—for a general value of the energy, the Schrödinger equation has the solution \(\approx Ae^\frac{+\xi^2}{2} + Be^\frac{-\xi^2}{2}\)  at large distances, and only at certain energies does the coefficient A vanish to give a normalizable bound state wavefunction.


So how do we find the nondiverging solutions?  It is clear that the infinite power series must be stopped!  The key is in the recurrence relation.


If the energy satisfies


\[2\epsilon = 2n + 1, \quad \text{where } n \text{ is an integer,} \]



then \(h_{n+2}\) and all higher coefficients in that series vanish.  (The series of the opposite parity does not terminate, so its overall coefficient must be set to zero.)


This requirement in fact completely determines the polynomial (except for an overall constant) because with \(2\epsilon = 2n + 1\)  the coefficients \(h_m\) for \(m < n\) are determined by


\[h_{m+2} = \frac{(2m + 1 - 2\epsilon)}{(m+1)(m+2)}h_m = \frac{(2m+1-(2n+1))}{(m+1)(m+2)}h_m\]
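The termination can be watched directly by iterating the recurrence. A short sketch in exact rational arithmetic (the starting coefficient and the cutoff are illustrative choices):

```python
from fractions import Fraction

def series_coeffs(two_eps, n_max=12, parity=0):
    # h_{m+2} = (2m + 1 - 2*eps)/((m+1)(m+2)) * h_m, starting from h = 1
    coeffs = {parity: Fraction(1)}
    for m in range(parity, n_max, 2):
        coeffs[m + 2] = Fraction(2 * m + 1 - two_eps, (m + 1) * (m + 2)) * coeffs[m]
    return coeffs

# With 2*epsilon = 2n + 1 (here n = 4) the even series terminates after xi^4:
c = series_coeffs(two_eps=9)
assert c[4] != 0 and c[6] == 0 and c[8] == 0

# With a generic energy (here 2*epsilon = 8) the coefficients never vanish,
# and the series builds the diverging exp(xi^2) behavior found above:
c_bad = series_coeffs(two_eps=8)
assert all(v != 0 for v in c_bad.values())
```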




This \(n\)th order polynomial is called a Hermite polynomial and written \(H_n(\xi)\).  The standard normalization of the Hermite polynomials is to take the coefficient of the highest power \(\xi^n\) to be \(2^n\).  The other coefficients then follow using the recurrence relation above, giving:

\[H_0 = 1, \quad H_1 = 2\xi, \quad H_2 = 4\xi^2 - 2, \quad H_3 = 8\xi^3 - 12\xi, \quad H_4 = 16\xi^4 - 48\xi^2 + 12.\]
So the bottom line is that the wavefunction for the \(n\)th excited state, having energy \(E_n = \left(n + \frac{1}{2}\right)\hbar\omega\), is \(\psi_n(\xi) = C_n H_n(\xi)e^{-\xi^2/2}\), where \(C_n\) is a normalization constant to be determined in the next section.
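Running the recurrence with \(2\epsilon = 2n + 1\) and rescaling the leading coefficient to \(2^n\) reproduces the tabulated Hermite polynomials. A quick sketch (exact rational arithmetic):

```python
from fractions import Fraction

def hermite_coeffs(n):
    # Coefficients of H_n from h_{m+2} = (2m - 2n)/((m+1)(m+2)) * h_m
    # (the recurrence with 2*eps = 2n + 1), rescaled so the xi^n term is 2**n
    parity = n % 2
    coeffs = {parity: Fraction(1)}
    for m in range(parity, n, 2):
        coeffs[m + 2] = Fraction(2 * m - 2 * n, (m + 1) * (m + 2)) * coeffs[m]
    scale = Fraction(2**n) / coeffs[n]
    return {m: c * scale for m, c in coeffs.items()}

# Matches the tabulated Hermite polynomials:
assert hermite_coeffs(2) == {0: -2, 2: 4}           # H_2 = 4 xi^2 - 2
assert hermite_coeffs(3) == {1: -12, 3: 8}          # H_3 = 8 xi^3 - 12 xi
assert hermite_coeffs(4) == {0: 12, 2: -48, 4: 16}  # H_4 = 16 xi^4 - 48 xi^2 + 12
```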


It can be shown (see exercises at the end of this lecture) that \(H_n(\xi)\) has \(n\) real zeros.  Using this, beginning with the ground state, one can easily convince oneself that the successive energy eigenstates each have one more node—the nth state has n nodes.  This is also evident from numerical solution using the spreadsheet, watching how the wave function behaves at large x as the energy is cranked up.


The spreadsheet can also be used to plot the wave function for large n, say n = 200.  It is instructive to compare the probability distribution with that for a classical pendulum, one oscillating with fixed amplitude and observed many times at random intervals. For the pendulum, the probability peaks at the end of the swing, where the pendulum is slowest and therefore spends most time. The n = 200 distribution amplitude follows this pattern, but of course oscillates.  However, in the large n limit these oscillations take place over undetectably small intervals.
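Even at moderate \(n\) this correspondence is quantitative. The sketch below builds \(\psi_n\) from the standard stable recurrence for normalized oscillator eigenfunctions (dimensionless units; \(n = 20\) and the grid are illustrative choices) and checks that \(\langle x^2\rangle\) equals the classical time average \(A^2/2\) for a swing of amplitude \(A = \sqrt{2n+1}\):

```python
import math

def phi_n(n, x):
    # Normalized oscillator eigenfunctions (dimensionless units), via the stable
    # recurrence phi_{k+1} = x*sqrt(2/(k+1))*phi_k - sqrt(k/(k+1))*phi_{k-1}
    p0 = math.pi ** -0.25 * math.exp(-x * x / 2)
    if n == 0:
        return p0
    p1 = math.sqrt(2.0) * x * p0
    for k in range(1, n):
        p0, p1 = p1, x * math.sqrt(2.0 / (k + 1)) * p1 - math.sqrt(k / (k + 1.0)) * p0
    return p1

n = 20                               # illustrative quantum number
A = math.sqrt(2 * n + 1)             # classical turning point for energy n + 1/2
dx = 0.01
vals = [(i * dx, phi_n(n, i * dx)) for i in range(-1200, 1201)]
norm = sum(p * p for _, p in vals) * dx
mean_x2 = sum(x * x * p * p for x, p in vals) * dx

assert abs(norm - 1.0) < 1e-6            # the eigenfunction is normalized
assert abs(mean_x2 - A**2 / 2) < 1e-4    # matches the classical average A^2/2
```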


The classical pendulum when not at rest clearly has a time-dependent probability distribution—it swings backwards and forwards.  This means it cannot be in an eigenstate of the energy.  In fact, the quantum state most like the classical is a coherent state built up of neighboring energy eigenstates.  We shall discuss coherent states later in the course.

Operator Approach to the Simple Harmonic Oscillator

Having scaled the position coordinate x to the dimensionless \(\xi = x\sqrt{m\omega/\hbar}\), let us also scale the momentum from p to \(\pi = p/\sqrt{m\omega\hbar}\)  (so \(\pi = -i\,d/d\xi\)). 


The Hamiltonian is

\[H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2x^2 = \frac{\hbar\omega}{2}\left(\pi^2 + \xi^2\right).\]
Dirac had the brilliant idea of factorizing this expression: the obvious thought \(\left(\pi^2 + \xi^2\right) = \left(\xi + i\pi\right)\left(\xi - i\pi\right)\)  isn’t quite right, because it fails to take account of the noncommutativity of the operators, but the symmetrical version

\[H = \frac{\hbar\omega}{4}\left[\left(\xi + i\pi\right)\left(\xi - i\pi\right) + \left(\xi - i\pi\right)\left(\xi + i\pi\right)\right]\]
is fine, and we shall soon see that it leads to a very easy way of finding the eigenvalues and operator matrix elements for the oscillator, far simpler than using the wave functions we found above.  Interestingly, Dirac’s factorization here of a second-order differential operator into a product of first-order operators is close to the idea that led to his most famous achievement, the Dirac equation, the basis of the relativistic theory of electrons, protons, etc.


To continue, we define new operators \(a\), \(a^\dagger\)  by

\[a = \frac{\xi + i\pi}{\sqrt{2}} = \frac{m\omega x + ip}{\sqrt{2m\omega\hbar}}, \qquad a^\dagger = \frac{\xi - i\pi}{\sqrt{2}} = \frac{m\omega x - ip}{\sqrt{2m\omega\hbar}}.\]
(We’ve expressed a in terms of the original variables x, p for later use.)


From the commutation relation \([x, p] = i\hbar\), equivalently \([\xi, \pi] = i\), it follows that

\[\left[a, a^\dagger\right] = aa^\dagger - a^\dagger a = 1.\]

Therefore the Hamiltonian can be written:

\[H = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right) = \hbar\omega\left(N + \frac{1}{2}\right), \quad \text{where} \quad N = a^\dagger a.\]

Note that the operator N can only have non-negative eigenvalues, since

\[\langle\psi|N|\psi\rangle = \langle\psi|a^\dagger a|\psi\rangle = \langle a\psi|a\psi\rangle \geq 0.\]

Suppose N has an eigenfunction \(|\nu\rangle\)  with eigenvalue \(\nu\),

\[N|\nu\rangle = \nu|\nu\rangle.\]
From the two equations above

\[Na^\dagger|\nu\rangle = a^\dagger aa^\dagger|\nu\rangle = a^\dagger\left(a^\dagger a + 1\right)|\nu\rangle = (\nu + 1)a^\dagger|\nu\rangle,\]

so \(a^\dagger|\nu\rangle\)  is an eigenfunction of N with eigenvalue \(\nu + 1\).   Operating with \(a^\dagger\)  again and again, we climb an infinite ladder of eigenstates equally spaced in energy.


\(a^\dagger\)  is often termed a creation operator, since the quantum of energy \(\hbar\omega\)  added each time it operates is equivalent to an added photon in black body radiation (electromagnetic oscillations in a cavity).


It is easy to check that the state \(a|\nu\rangle\)  is an eigenstate with eigenvalue \(\nu - 1\), provided it is nonzero, so the operator a takes us down the ladder. However, this cannot go on indefinitely—we have established that N cannot have negative eigenvalues. We must eventually reach a state \(|\nu_0\rangle\)  for which the operator a  annihilates the state: \(a|\nu_0\rangle = 0\). (At each step down, a annihilates one quantum of energy—so a is often called an annihilation or destruction operator.)


Since the norm squared of \(a|\nu\rangle\)  is \(\langle a\nu|a\nu\rangle = \langle\nu|a^\dagger a|\nu\rangle = \nu\langle\nu|\nu\rangle\),  and since \(\langle\nu|\nu\rangle > 0\) for any nonvanishing state, it must be that the lowest eigenstate (the one with \(a|\nu\rangle = 0\)) has \(\nu = 0\).   It follows that the \(\nu\) ’s on the ladder are the non-negative integers, so from this point on we relabel the eigenstates with n in place of \(\nu\).


That is to say, we have proved that the only possible eigenvalues of N are zero and the positive integers: 0, 1, 2, 3… .  N is called the number operator: it measures the number of quanta of energy in the oscillator above the irreducible ground state energy (that is, above the “zero-point energy” arising from the wave-like nature of the particle).


Since from above the Hamiltonian

\[H = \hbar\omega\left(N + \frac{1}{2}\right),\]

the energy eigenvalues are

\[E_n = \left(n + \frac{1}{2}\right)\hbar\omega.\]
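Everything in this section can be verified with small matrices. The sketch below builds \(a^\dagger\) in a truncated basis (an 8-state truncation and units \(\hbar = \omega = 1\), both illustrative choices) and confirms the spectrum \((n + \frac{1}{2})\hbar\omega\); the commutator comes out as the identity except in the last slot, an artifact of the truncation:

```python
import numpy as np

dim = 8                              # illustrative truncation of the basis
# <n+1| a_dag |n> = sqrt(n+1): entries sqrt(1)...sqrt(7) below the diagonal
a_dag = np.diag(np.sqrt(np.arange(1.0, dim)), k=-1)
a = a_dag.T                          # the annihilation operator is the adjoint

hbar = w = 1.0                       # illustrative units
H = hbar * w * (a_dag @ a + 0.5 * np.eye(dim))

evals = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(evals, (np.arange(dim) + 0.5) * hbar * w)  # E_n = (n + 1/2) hbar w

# [a, a_dag] = 1, apart from the last diagonal entry (a truncation artifact):
comm = a @ a_dag - a_dag @ a
assert np.allclose(comm[:-1, :-1], np.eye(dim - 1))
```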
It is important to appreciate that Dirac’s factorization trick, with very little effort, has given us all the eigenvalues of the Hamiltonian

\[H = \frac{\hbar\omega}{2}\left(\pi^2 + \xi^2\right).\]
Contrast the work needed in this section with that in the standard Schrödinger approach. We have also established that the lowest energy state \(|0\rangle\), having energy \(\frac{1}{2}\hbar\omega\),  must satisfy the first-order differential equation \(a|0\rangle = 0\),  that is,

\[\left(\xi + i\pi\right)\psi_0(\xi) = \left(\xi + \frac{d}{d\xi}\right)\psi_0(\xi) = 0.\]

The solution, unnormalized, is

\[\psi_0(\xi) = Ce^{-\xi^2/2}.\]

(In fact, we’ve seen this equation and its solution before:  this was the condition for the “least uncertain” wave function in the discussion of the Generalized Uncertainty Principle.)


We denote the normalized set of eigenstates \(|0\rangle, |1\rangle, |2\rangle, \dots, |n\rangle, \dots\), with \(\langle n|n\rangle = 1\).  Now \(a^\dagger|n\rangle = C_n|n+1\rangle\), and \(C_n\) is easily found:

\[|C_n|^2 = \langle n|aa^\dagger|n\rangle = \langle n|\left(N + 1\right)|n\rangle = n + 1,\]

so

\[a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle.\]
Therefore, if we take the set of orthonormal states \(|0\rangle, |1\rangle, \dots, |n\rangle, \dots\)  as the basis in the Hilbert space, the only nonzero matrix elements of \(a^\dagger\)  are \(\langle n+1|a^\dagger|n\rangle = \sqrt{n+1}\).   That is to say,

\[a^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 & \dots \\ \sqrt{1} & 0 & 0 & 0 & \dots \\ 0 & \sqrt{2} & 0 & 0 & \dots \\ 0 & 0 & \sqrt{3} & 0 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.\]
(The column vectors in the space this matrix operates on have an infinite number of elements: the lowest energy, the ground state component, is the entry at the top of the infinite vector—so up the energy ladder is down the vector!)


The adjoint, which takes us down the ladder, is

\[a = \begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & \dots \\ 0 & 0 & \sqrt{2} & 0 & \dots \\ 0 & 0 & 0 & \sqrt{3} & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad a|n\rangle = \sqrt{n}\,|n-1\rangle.\]
For practical computations, we need to find the matrix elements of the position and momentum variables between the normalized eigenstates.  Now

\[x = \sqrt{\frac{\hbar}{2m\omega}}\left(a + a^\dagger\right), \qquad p = -i\sqrt{\frac{m\omega\hbar}{2}}\left(a - a^\dagger\right),\]

so the only nonzero matrix elements are

\[\langle n+1|x|n\rangle = \langle n|x|n+1\rangle = \sqrt{\frac{(n+1)\hbar}{2m\omega}}, \qquad \langle n+1|p|n\rangle = -\langle n|p|n+1\rangle = i\sqrt{\frac{(n+1)m\omega\hbar}{2}}.\]
These matrices are, of course, Hermitian (not forgetting the i factor in p). 


To find the matrix elements between eigenstates of any product of x’s and p’s, express all the x’s and p’s in terms of a’s and \(a^\dagger\)’s, to give a sum of products of a’s and \(a^\dagger\)’s. Each product in this sum can be evaluated sequentially from the right, because each a or \(a^\dagger\)  has only one nonzero matrix element when the product operates on one eigenstate.
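As an illustration of this procedure, the sketch below forms \(x\) from \(a\) and \(a^\dagger\) as truncated matrices (units \(\hbar = m = \omega = 1\) and a 10-state truncation, both illustrative) and reads off the matrix elements of \(x^2\):

```python
import numpy as np

dim = 10                                       # illustrative truncation
a_dag = np.diag(np.sqrt(np.arange(1.0, dim)), k=-1)
a = a_dag.T
hbar = m = w = 1.0                             # illustrative units
x = np.sqrt(hbar / (2 * m * w)) * (a + a_dag)

x2 = x @ x
x2_diag = np.diag(x2)
# <n| x^2 |n> = (n + 1/2) hbar/(m w); the last entry is spoiled by truncation
assert np.allclose(x2_diag[:-1], (np.arange(dim - 1) + 0.5) * hbar / (m * w))
# the only other nonzero elements connect n and n+2, e.g. <2| x^2 |0> = sqrt(2)/2
assert np.isclose(x2[2, 0], np.sqrt(2) / 2)
```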

Normalizing the Eigenstates in x-space

The normalized ground state wave function is

\[\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-m\omega x^2/2\hbar},\]

where we have gone back to the x variable, and normalized using \(\int_{-\infty}^{\infty}e^{-ax^2}dx = \sqrt{\pi/a}\).


To find the normalized wave functions for the higher states, they are first constructed formally by applying the creation operator \(a^\dagger\)  repeatedly on the ground state \(|0\rangle\).   Next, the result is translated into x-space (actually \(\xi = x/b\)) by writing \(a^\dagger\)  as a differential operator, acting on \(\psi_0(\xi)\):

\[|n\rangle = \frac{\left(a^\dagger\right)^n}{\sqrt{n!}}|0\rangle,\]

so

\[\psi_n(\xi) = \frac{1}{\sqrt{n!}}\left(\frac{\xi - \frac{d}{d\xi}}{\sqrt{2}}\right)^n\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-\xi^2/2}.\]
We need to check that this expression is indeed the same as the Hermite polynomial wave function derived earlier, and to do that we need some further properties of the Hermite polynomials.

Some Properties of Hermite Polynomials

The mathematicians define the Hermite polynomials by:

\[H_n(\xi) = (-1)^n e^{\xi^2}\frac{d^n}{d\xi^n}e^{-\xi^2}.\]
It follows immediately from the definition that the coefficient of the leading power is \(2^n\) (each differentiation of \(e^{-\xi^2}\) brings down a factor \(-2\xi\)).


It is a straightforward exercise to check that \(H_n\) is a solution of the differential equation

\[\frac{d^2H_n}{d\xi^2} - 2\xi\frac{dH_n}{d\xi} + 2nH_n = 0,\]

so these are indeed the same polynomials we found by the series solution of Schrödinger’s equation earlier (recall the equation for the polynomial component of the wave function was

\[\frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (2\epsilon - 1)h = 0,\]

 with \(2\epsilon = 2n + 1\).)


We have found \(\psi_n\)  in the form

\[\psi_n(\xi) = \frac{1}{\sqrt{n!}}\left(\frac{\xi - \frac{d}{d\xi}}{\sqrt{2}}\right)^n\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-\xi^2/2}.\]
We shall now prove that the polynomial component is exactly equivalent to the Hermite polynomial as defined at the beginning of this section.


We begin with the operator identity:

\[\xi - \frac{d}{d\xi} = -e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2}.\]

Both sides of this expression are to be regarded as operators, that is, it is assumed that both are operating on some function \(f(\xi)\).


Now take the nth power of both sides: on the right, we find, for example,

\[\left(-e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2}\right)^3 = -e^{\xi^2/2}\frac{d^3}{d\xi^3}e^{-\xi^2/2},\]
since the intermediate exponential terms cancel against each other.




Therefore

\[\left(\xi - \frac{d}{d\xi}\right)^n e^{-\xi^2/2} = (-1)^n e^{\xi^2/2}\frac{d^n}{d\xi^n}e^{-\xi^2} = e^{-\xi^2/2}H_n(\xi),\]

and substituting this into the expression for \(\psi_n\)  above,

\[\psi_n(\xi) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}H_n(\xi)e^{-\xi^2/2}.\]

This establishes the equivalence of the two approaches to Schrödinger’s equation for the simple harmonic oscillator, and provides us with the overall normalization constants without doing integrals.  (The expression for \(\psi_n\)  above satisfies \(\int_{-\infty}^{\infty}|\psi_n|^2dx = 1\).)



Exercises: use \(H_n(\xi) = (-1)^n e^{\xi^2}\frac{d^n}{d\xi^n}e^{-\xi^2}\)  to prove:

(a) the coefficient of \(\xi^n\) in \(H_n(\xi)\) is \(2^n\);

(b) \(\int_{-\infty}^{\infty}H_n^2(\xi)e^{-\xi^2}d\xi = 2^n n!\sqrt{\pi}\).

(Hint: rewrite the integrand as \((-1)^n H_n(\xi)\frac{d^n}{d\xi^n}e^{-\xi^2}\), then integrate by parts n times, and use (a).)



It’s worth doing these exercises to become more familiar with the Hermite polynomials, but in evaluating matrix elements (and indeed in establishing some of these results) it is almost always far simpler to work with the creation and annihilation operators.


Exercise: use the creation and annihilation operators to find \(\langle n|x^4|n\rangle\).  This matrix element is useful in estimating the energy change arising on adding a small nonharmonic potential energy term to a harmonic oscillator.

Time-Dependent Wave Functions

The set of normalized eigenstates \(|n\rangle\)  discussed above are of course solutions to the time-independent Schrödinger equation, or in ket notation eigenstates of the Hamiltonian, \(H|n\rangle = E_n|n\rangle\).   Putting in the time-dependence explicitly, \(|n, t\rangle = e^{-iE_nt/\hbar}|n\rangle\).  It is necessary to include the time dependence when dealing with a state which is a superposition of states of different energies, such as \(\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)\),  which then becomes \(\frac{1}{\sqrt{2}}\left(e^{-iE_0t/\hbar}|0\rangle + e^{-iE_1t/\hbar}|1\rangle\right)\).  Expectation values of combinations of position and/or momentum operators in such states are best evaluated by expressing everything in terms of annihilation and creation operators.
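As a concrete check, the sketch below evolves the superposition \(\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\) in a truncated matrix representation (units \(\hbar = m = \omega = 1\), illustrative) and confirms that \(\langle x\rangle(t)\) swings back and forth at the oscillator frequency:

```python
import numpy as np

dim = 6                                        # illustrative truncation
a_dag = np.diag(np.sqrt(np.arange(1.0, dim)), k=-1)
a = a_dag.T
hbar = m = w = 1.0                             # illustrative units
x = np.sqrt(hbar / (2 * m * w)) * (a + a_dag)
E = (np.arange(dim) + 0.5) * hbar * w          # energy eigenvalues

c0 = np.zeros(dim, dtype=complex)
c0[0] = c0[1] = 1 / np.sqrt(2)                 # (|0> + |1>)/sqrt(2) at t = 0

for t in [0.0, 0.4, 1.1, 2.5]:
    c_t = c0 * np.exp(-1j * E * t / hbar)      # each component picks up its own phase
    x_mean = np.real(np.conj(c_t) @ x @ c_t)
    # <x>(t) oscillates at the oscillator frequency, like the classical motion
    assert np.isclose(x_mean, np.sqrt(hbar / (2 * m * w)) * np.cos(w * t))
```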

Solving Schrödinger’s Equation in Momentum Space

In the lecture on Function Spaces,  we established that the basis of \(|x\rangle\)  states (eigenstates of the position operator) and that of \(|p\rangle\)  states (eigenstates of the momentum operator) were both complete bases in Hilbert space (physicist’s definition) so we could work equally well with either from a formal point of view.  Why then do we almost always work in x-space?  Well, probably because we live in x-space, but there’s another reason. The momentum operator in the x-space representation is \(p = -i\hbar\frac{d}{dx}\), so Schrödinger’s equation, written \(\left(\frac{p^2}{2m} + V(x)\right)\psi(x) = E\psi(x)\), with p in operator form, is a second-order differential equation.  Now consider what happens to Schrödinger’s equation if we work in p-space.  Since the operator identity \([x, p] = i\hbar\)  is true regardless of representation, we must have \(x = i\hbar\frac{d}{dp}\)  in the p-space representation.  So for a particle in a potential \(V(x)\), writing Schrödinger’s equation in p-space we are confronted with the nasty looking operator \(V\left(i\hbar\frac{d}{dp}\right)\)!  This will produce a differential equation in general a lot harder to solve than the standard x-space equation—so we stay in x-space. 


But there are two potentials that can be handled in momentum space: first, for a linear potential \(V(x) = -Fx\), the momentum space analysis is actually easier—it’s just a first-order equation.  Second, for a particle in a quadratic potential—a simple harmonic oscillator—the two approaches yield the same differential equation. That means that the eigenfunctions in momentum space (scaled appropriately) must be identical to those in position space—the simple harmonic eigenfunctions are their own Fourier transforms!
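This self-transform property is easy to test numerically: in dimensionless units the Fourier transform of \(\psi_n\) equals \((-i)^n\psi_n\). A sketch checking the first two eigenfunctions by direct quadrature (the grid spacing and cutoff are illustrative choices):

```python
import cmath
import math

def phi(n, x):
    # first two normalized oscillator eigenfunctions (dimensionless units)
    g = math.pi ** -0.25 * math.exp(-x * x / 2)
    return g if n == 0 else math.sqrt(2.0) * x * g

def fourier(n, k, dx=0.01, L=10.0):
    # direct quadrature for (1/sqrt(2 pi)) * integral of phi_n(x) e^{-ikx} dx
    pts = int(L / dx)
    return sum(phi(n, i * dx) * cmath.exp(-1j * k * i * dx)
               for i in range(-pts, pts + 1)) * dx / math.sqrt(2 * math.pi)

# Each eigenfunction is its own Fourier transform, up to the phase (-i)^n:
for n in (0, 1):
    for k in (0.0, 0.5, 1.3):
        assert abs(fourier(n, k) - (-1j) ** n * phi(n, k)) < 1e-6
```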