# 2.4: Stationary States

#### Separation of Variables

When expressed in the position-space basis, Schrödinger's equation is a daunting partial differential equation in two variables. There is, however, a clever trick we can use to solve for a certain class of wave functions. Before we see this trick, let's make sure we know what we mean by a "class of wave functions." Schrödinger's equation written in its basis-free form provides a description of how the circumstances surrounding the particle (encapsulated by the hamiltonian operator) cause its quantum state to change. If we watch this play out in the position basis, we see how the hamiltonian operator expressed in the position basis governs the evolution of the components \(\psi\left(x,t\right)\). It turns out that some quantum states available to particles have a special property, which is manifested in a specific mathematical way with the wave function. By focusing only on wave functions with this property, we limit ourselves to a specific class of the broader selection of all possible wave functions. This mathematical property is called *separability*, and the technique is known as *separation of variables*, which goes as follows...

Let's consider a special class of functions \(\psi_E\left(x,t\right)\) that are separable into a product of two functions, one that is purely a function of position, and one that is purely a function of time. We'll represent the function of position as \(\psi_E\left(x\right)\), and the time-dependent function we will denote with \(T\left(t\right)\):

\[\psi_E\left(x,t\right) = \psi_E\left(x\right)T\left(t\right) \]

Plugging this into Schrödinger's equation in position-space (Equation 2.3.13), we have:

\[ -\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}\left[\psi_E\left(x\right)T\left(t\right)\right] + V\left(x\right) \left[\psi_E\left(x\right)T\left(t\right)\right] = i\hbar \dfrac{\partial}{\partial t} \left[\psi_E\left(x\right)T\left(t\right)\right] \]

Dividing both sides of the equation by \( \psi_E\left(x\right)T\left(t\right)\), we get:

\[ -\dfrac{\hbar^2}{2m} \dfrac{ \dfrac{\partial^2}{\partial x^2}\psi_E\left(x\right)} {\psi_E\left(x\right)} + V\left(x\right) = i\hbar \dfrac{ \dfrac{\partial}{\partial t} T\left(t\right)} {T\left(t\right)} \]

Now comes the all-important part of this method: Notice that the left-hand side of this equation *only depends upon position*. Whatever it equals, it doesn't matter what the time is. On the other hand, the right-hand side of the equation *depends only upon time* – it doesn't matter what the location is. And yet the quantities on the two sides of this equation, whatever they are, must be the same. The only kind of quantity that is independent of both position and time is a constant. We will state without proof (other than how well things work out!) that this constant is a real number, and equals the total energy of the particle. Setting both sides of this equation separately equal to the energy gives:

\[ -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}\psi_E\left(x\right) + V\left(x\right)\psi_E\left(x\right) = E\;\psi_E\left(x\right) \]

\[ i\hbar \dfrac{d}{dt}T\left(t\right) = E\;T\left(t\right) \]

Notice that we are able to replace the partial derivatives with ordinary derivatives because the functions are now of one variable only, converting our single partial differential equation into two ordinary differential equations.
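The separation argument can be checked symbolically for a concrete case. The sketch below (assuming an infinite square well of width \(L\) as the illustrative potential, with \(V=0\) inside the well – a choice not made in the text) confirms that a product of a spatial function and the exponential time factor satisfies the full position-space Schrödinger equation:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, L = sp.symbols('hbar m L', positive=True)

# Illustrative choice (not from the text): the ground state of an infinite
# square well of width L, where V(x) = 0 inside the well.
E = sp.pi**2 * hbar**2 / (2*m*L**2)
psi_x = sp.sin(sp.pi*x/L)             # spatial factor psi_E(x) (unnormalized)
T = sp.exp(-sp.I*E*t/hbar)            # temporal factor T(t)
Psi = psi_x * T                       # separable product psi_E(x,t)

lhs = -hbar**2/(2*m) * sp.diff(Psi, x, 2)   # kinetic term (V = 0 inside)
rhs = sp.I*hbar * sp.diff(Psi, t)           # i*hbar times the time derivative
assert sp.simplify(lhs - rhs) == 0          # the full equation is satisfied
```

Each factor separately satisfies its own ordinary differential equation with the same constant \(E\), which is exactly what the separation argument asserts.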

#### Stationary States

We can immediately solve the differential Equation 2.4.5 by our usual guess-first-and-confirm-later method. A single derivative of the function gives back a constant multiplied by the same function, so it looks like an exponential function:

\[ T\left(t\right) = A e^{-i\frac{E}{\hbar} t} \]

We cannot solve Equation 2.4.4 immediately without the function \(V\left(x\right)\) (and even then we can only solve it exactly for very few \(V\left(x\right)\) functions). But what we can say is that this part of the wave function is unchanging with time, which has an interesting consequence, which we can see by reconstructing the full wave function:

\[ \psi_E\left(x,t\right) = \psi_E\left(x\right)T\left(t\right) = A\;\psi_E\left(x\right) e^{-i\frac{E}{\hbar} t} \;\;\; \Rightarrow \;\;\; \left|\psi_E\left(x,t\right)\right|^2 = \left|A\right|^2\left|\psi_E\left(x\right)\right|^2 \]

The probability density of this particle does not change with time! For this reason, we call such a condition for a particle a *stationary state*, and Equation 2.4.4 we call the *stationary state Schrödinger equation*. Generally we choose the stationary-state wave function to itself be normalized, which means that we choose the constant \(A\) in Equation 2.4.7 to satisfy \(\left|A\right|^2=1\), which we can most easily realize by choosing \(A=1\).
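A quick numerical check makes the time independence concrete. The sketch below (again assuming the illustrative infinite-square-well ground state in natural units \(\hbar = m = L = 1\), a choice not made in the text) evaluates the probability density at two very different times and finds them identical:

```python
import numpy as np

# Illustrative stationary state (not from the text): the ground state of an
# infinite square well of width L, in natural units hbar = m = L = 1.
hbar = m = L = 1.0
x = np.linspace(0.0, L, 201)
psi_x = np.sqrt(2.0/L) * np.sin(np.pi * x / L)    # normalized psi_E(x)
E = np.pi**2 * hbar**2 / (2.0 * m * L**2)         # its energy

def Psi(t):
    """Full separable wave function: psi_E(x) * exp(-i E t / hbar)."""
    return psi_x * np.exp(-1j * E * t / hbar)

# The probability density is identical at any two times:
rho_early = np.abs(Psi(0.0))**2
rho_late = np.abs(Psi(7.3))**2
assert np.allclose(rho_early, rho_late)
```

The complex exponential has unit magnitude at every time, so it drops out of \(\left|\psi_E\left(x,t\right)\right|^2\) entirely.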

When we embarked on this journey, we said that this was a special case of a quantum state, and now we can see how true that is. The Hilbert space vector that represents the state of a particle can have many attributes, and the subset of these vectors that allow for a separable wave function are those that admit stationary states. What is more, each of these stationary states allows for a single, well-defined energy.

Previously we viewed the wave function as a function of \(x\) that changes with time:

**Figure 2.4.1 – Evolving Wave Function (General)**

That is, the Hilbert space vector \(\left|\;\Psi\left(t\right)\;\right>\) has an infinite continuum of components \(\left<\;x\;|\;\Psi\left(t\right)\;\right>=\psi\left(x,t\right)\), and all those components are evolving with time, changing the vector in the process. But for a quantum state that is stationary, the function changes in a very specific way. A useful way to picture the time evolution of these states is to imagine an unchanging function in space (\(\psi_E\left(x\right)\)), and for every point on the function visualize a tiny, rotating phasor (see the discussion of Argand diagrams in Section 1.1).

**Figure 2.4.2 – Evolving Wave Function (Stationary State)**

The phasor at every point represents the time evolution of the state there. They all have unit length, and all rotate with the same angular velocity \(\omega = \frac{E}{\hbar}\):

\[ \text{phasor:}\;\; z\left(t\right) = e^{-i\omega t} = e^{-i\frac{E}{\hbar} t} \]

This brings out the "stationary" nature of the state nicely. Importantly, the magnitude-squared aspect of the probability density washes away the effects of the rotating phasors at every point on the curve, which means that any remnant of time dependence is lost for these states when we start making measurements.

Another thing to consider is how different stationary states compare with each other. For every allowed value of \(E\) there is a different solution to the stationary-state Schrödinger equation, so two different stationary states will have different curves. Furthermore, the phasors located at every point on the function rotate at different speeds on the two graphs.

#### The Energy Basis

The quantum states that allow for separability (and which have well-defined energies) form a subset of all the possible quantum states. The same was true for the quantum states that represent well-defined positions (\(\left|\;x\;\right>\)), and those that represent well-defined momentum (\(\left|\;k\;\right>\)). We stated without proof that these latter two sets of quantum states can be used as unit vectors to expand more general state vectors into, and now we will say the same about the separable quantum states, which we will represent with the ket \(\left|\;\Psi_E\left(t\right)\;\right>\). Mathematically, we express this as:

\[ \psi_E\left(x,t\right) = \left<\;x\;|\;\Psi_E\left(t\right)\;\right> \]

Notice that these energy-space unit vectors vary with time, but they do so in a well-defined manner – their time dependence looks like the exponential in Equation 2.4.8. It becomes useful to define some energy basis vectors that are time-independent, so we separate out the time dependence by defining \(\left|\;E\;\right>\) such that:

\[ \left|\;\Psi_E\left(t\right)\;\right> = \left|\;E\;\right> e^{-i\frac{E}{\hbar} t} \]

Also notice that every one of these unit vectors has its own, unique time dependence, since the energy with which they are each associated appears in the exponential.

The effect of the hamiltonian operator on these unit vectors can be determined from the original Schrödinger equation (Equation 2.2.8):

\[ H \left|\;\Psi_E\left(t\right)\;\right> = H \left[\left|\;E \;\right>e^{-i\frac{E}{\hbar}t}\right] = i\hbar \dfrac{\partial}{\partial t} \left[\left|\;E\;\right>e^{-i\frac{E}{\hbar}t}\right] = E\left[\left|\;E\;\right>e^{-i\frac{E}{\hbar}t}\right] \;\;\; \Rightarrow \;\;\; H\left|\;E\;\right> = E\;\left|\;E\;\right>\]

Note that here \(H\) is an operator, and \(E\) is a scalar. It may help to again picture these as matrices – \(H\) is a square matrix, the kets are column matrices, and \(E\) is a number. We will further discuss this operator-on-state-equals-number-times-state situation in a future section.
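The matrix picture can be made explicit with a small numerical stand-in. In the sketch below, a random Hermitian matrix plays the role of \(H\) (the numbers are purely illustrative and not tied to any physical system); its eigenvectors are the kets \(\left|\;E\;\right>\) and its eigenvalues are the energies:

```python
import numpy as np

rng = np.random.default_rng(42)

# A random 4x4 Hermitian matrix standing in for the hamiltonian H
# (purely illustrative -- any Hermitian matrix will do).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# eigh returns the real eigenvalues (the energies E) and a matrix whose
# columns are the corresponding eigenvectors (the kets |E>).
energies, kets = np.linalg.eigh(H)

for E_n, ket in zip(energies, kets.T):
    assert np.allclose(H @ ket, E_n * ket)   # H|E> = E|E>
```

Note that `eigh` guarantees real eigenvalues for a Hermitian matrix, mirroring the claim that the energies are real numbers.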

#### The Energy Spectrum

From previous studies in quantum mechanics, we got a glimpse into where the word "quantum" comes from. We found that in particular for *bound states* (states where the particle is confined to a region due to potential energy barriers that are higher than the particle's total energy), the *energy spectrum* (the allowed energies for the particle) is *quantized*, meaning that only specific energy values can be measured for the particle – none of the energies between those values can ever be measured. The proof of this came from considering boundary conditions at the classical turning points of the potential. Only specific energies allow for the value and first derivative of the wave function to match inside and outside those boundaries. Without the presence of these boundaries (for an unbound particle), this restriction no longer applies, and the energy spectrum is a continuum.

This makes for an interesting comparison of the set of unit vectors for bound and unbound states. For bound states, we can expand a general state into the energy-space unit vectors with a sum:

\[ \left|\;\Psi_{bound}\;\right> = \sum \limits_n \left|\;E_n\;\right>\left<\;E_n\;|\;\Psi_{bound}\; \right> \]

For unbound states, we have to include all of the energies, which means the sum becomes an integral:

\[ \left|\;\Psi_{unbound}\;\right> = \int \limits_{all\;E} \left|\;E\;\right>\left<\;E\;|\;\Psi_{unbound}\; \right>dE \]

We will focus here on the bound state cases, so from this point on, we'll drop the "bound" subscript. The brackets \(\left<\;E_n\;|\;\Psi\;\right>\) in Equation 2.4.12 are often written in terms of constants \(C_n\)'s (along with the time dependence), which are, in general, complex numbers. The expansion then looks like:

\[ \left|\;\Psi\;\right> = \sum \limits_n C_n e^{-i\frac{E_n}{\hbar} t} \left|\;E_n\;\right> \]

This *completeness relation* ensures that we can write a general state as a linear combination of stationary states. In essence, the stationary states are the "unit vectors," and the state vector is expanded into that basis, with the \(C_n\)'s being the components (which can be complex numbers in general).

Let us now interpret what this means. The bracket is an inner product of the full state with the state representing the energy \(E_n\). This inner product we have interpreted as the "overlap" of these states, which is the probability amplitude of getting a result of \(E_n\) when the energy of this state is measured. Therefore the probability (not probability density – the distribution is not a continuum!) of measuring energy \(E_n\) is:

\[ P\left(E_n\right) = \left[C_ne^{-i\frac{E_n}{\hbar}t}\right]^* \left[C_ne^{-i\frac{E_n}{\hbar}t}\right] = C_n^* C_n = \left|C_n\right|^2 \]

Naturally, the sum of all the probabilities must equal one, which is what we find when we insist that the state is normalized:

\[ 1 = \left<\;\Psi\;|\;\Psi\;\right> = \sum \limits_n \left<\;\Psi\;|\;E_n\;\right>\left<\;E_n\;|\;\Psi\; \right> = \sum \limits_n C_n^*C_n = \sum \limits_n \left|C_n\right|^2 \]
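These two facts – that the phases wash out of each probability and that the probabilities sum to one – are easy to verify numerically. The coefficient and energy values in the sketch below are illustrative, not taken from the text:

```python
import numpy as np

# Sample expansion coefficients C_n for a state built from four bound
# stationary states (illustrative values).
C = np.array([0.5+0.5j, 0.3-0.2j, -0.4j, 0.1])
C = C / np.linalg.norm(C)                    # normalize the state

P = np.abs(C)**2                             # P(E_n) = |C_n|^2
assert np.isclose(P.sum(), 1.0)              # probabilities sum to one

# The time-dependent phase washes out of each probability:
E_n = np.array([1.0, 4.0, 9.0, 16.0])        # sample energy spectrum
hbar, t = 1.0, 3.7
C_t = C * np.exp(-1j * E_n * t / hbar)
assert np.allclose(np.abs(C_t)**2, P)
```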

The expectation value of the energy for this state is computed using the usual method. Multiply the probabilities of each energy by the corresponding energy, and sum:

\[ \left<E\right> = \sum \limits_n P\left(E_n\right) E_n = \sum \limits_n \left|C_n\right|^2 E_n \]
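A short numerical example of this weighted sum (with an illustrative energy spectrum and real coefficients chosen for simplicity, not taken from the text):

```python
import numpy as np

# Sample spectrum and normalized coefficients (illustrative values).
E_n = np.array([1.0, 4.0, 9.0, 16.0])
C = np.array([0.8, 0.4, 0.2, 0.4])            # real for simplicity
assert np.isclose(np.sum(np.abs(C)**2), 1.0)  # already normalized

P = np.abs(C)**2                              # probabilities |C_n|^2
E_avg = np.sum(P * E_n)                       # <E> = sum_n P(E_n) E_n
# 0.64*1 + 0.16*4 + 0.04*9 + 0.16*16 = 4.2
assert np.isclose(E_avg, 4.2)
```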