# 5.4: Eigenstates and Eigenvalues

## Observable Values

Suppose we start with a quantum state that provides for a broad spectrum of measurements of some quantity, such as energy. What happens to that quantum state after we observe it? Well, the cat is out of the bag, in that now we know precisely the state's energy – the state doesn't go back to a probabilistic one unless we prepare it that way. A classical analog to this is the roll of a die. We have a die in a cup, shake the cup, and turn it over onto a tabletop. We don't know the roll of the die until we "measure" it by lifting the cup, but once we do, the die remains in that state unless we shake it in the cup again.

One thing we can say about the state of the die, even before we measure it, is that *when* we measure it, we are guaranteed to get one of the six possible outcomes. We compute the average roll of the die to be 3.5, and even though we call this the expectation value, we certainly don't ever expect to see 3.5 dots on the die staring up at us when we lift the cup.
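The 3.5 figure is just a probability-weighted average. A minimal sketch (my own illustration) weights each face by its probability of 1/6:

```python
# Expected value of a fair die: each face 1-6 occurs with probability 1/6.
faces = [1, 2, 3, 4, 5, 6]
expectation = sum(f * (1 / 6) for f in faces)
print(expectation)  # 3.5 -- never an actual outcome of any single roll
```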

The same applies to physical observables. It may be that the possible measurements lie on a continuum, making every outcome possible, but it may also be that only certain outcomes are possible (in a double slit experiment, positions at the dark fringes are not among the possible outcomes, for example). Once the physical quantity is measured, then the quantum state changes from a probabilistic description to a specific one, and that can only be one of the ones that was "allowed" by the physical situation.

There is one other thing we should say about observable values in quantum theory before moving on. One thing we have had to accept in our mathematical treatment of quantum theory is the presence of complex numbers. These unavoidably come into play whenever the phase of a wave function is important (namely, when there is interference). But measurements of real, physical quantities can never result in a number with an imaginary part. We should never see that a particle's energy is something like "\(\left(3.0+2.2i\right)eV\)"! Furthermore, we should never see an expectation value with an imaginary portion either. After all, besides calculating expectation values, we can get them by making lots of observations of the same state and averaging the numbers. If none of the numbers being averaged can have an imaginary part, then neither can their average.

## Special Quantum States

These quantum states that exist after we make a measurement of a physical observable have the property that future measurements of that observable give the same result every time. This means that the expectation value of that observable is exactly that value, and the uncertainty is zero. Let's see what this means mathematically. Let's define the wave function \(\psi_1\left(x\right)\) to be the state that always produces the same observable value \(\omega_1\), and we'll call the operator for that observable \(\Omega\). Then, using our "expectation machine" from the previous section, we have:

\[\left<\omega\right>=\omega_1=\int\limits_{-\infty}^{+\infty}\psi_1^*\left(x\right)~\left[\widehat \Omega~\psi_1\left(x\right)\right]~dx\]

The question is, in what way does the operator \(\widehat\Omega\) alter the wave function \(\psi_1\left(x\right)\)? We will state without proof (for now) that it simply multiplies the original wave function by a constant real number. Thinking of the quantum state as a vector, this means that the vector is rescaled, but not rotated. We can see that this constant real number is simply \(\omega_1\):

\[\int\limits_{-\infty}^{+\infty}\psi_1^*\left(x\right)~\left[\widehat \Omega~\psi_1\left(x\right)\right]~dx=\int\limits_{-\infty}^{+\infty}\psi_1^*\left(x\right)~\left[\omega_1~\psi_1\left(x\right)\right]~dx=\omega_1\int\limits_{-\infty}^{+\infty}\psi_1^*\left(x\right)~\psi_1\left(x\right)~dx=\omega_1\]

The last equality comes about because the wave function is normalized.
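As a concrete check, here is a minimal numerical sketch (my own illustration, in units where \(\hbar=m=L=1\)) using the infinite-well ground state \(\psi_1(x)=\sqrt{2}\sin(\pi x)\): feeding it through the expectation machine with the energy operator returns its eigenvalue \(E_1=\pi^2/2\), with zero spread.

```python
import numpy as np

# Assumed setup: infinite-well ground state on 0 <= x <= 1, hbar = m = L = 1.
N = 2000
x = np.linspace(0, 1, N)
dx = x[1] - x[0]
psi = np.sqrt(2) * np.sin(np.pi * x)          # normalized eigenfunction psi_1

# Apply H = -(1/2) d^2/dx^2 via a second-order finite difference.
Hpsi = np.zeros_like(psi)
Hpsi[1:-1] = -0.5 * (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2

expectation = np.sum(psi * Hpsi) * dx         # psi is real here, so psi* = psi
print(expectation, np.pi**2 / 2)              # the two agree closely
```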

In Section 5.2 we introduced the labels *eigenstate* and *eigenvalue* in the context of states of definite energy and those energy values. We see here that this notion generalizes to any observable. It is common practice to distinguish *eigenfunctions* (the wave functions associated with eigenstates) from more general wave functions with a label indicating the eigenvalue each is linked to, so in general we would write:

\[\widehat\Omega~\psi_i\left(x\right) = \omega_i~\psi_i\left(x\right)\]

## Eigenstates are "Complete"

Given that there is an eigenstate associated with every possible value of an observable, it should come as no surprise that quantum states that are not eigenstates (i.e. those that produce many possible outcomes with different probabilities) can be written as a linear combination of eigenstates. We have already seen this idea of "completeness" in the context of building more general states from energy eigenstates, which were found using separation of variables, but now we can state that no matter what *basis* we use, these eigenstates behave like "unit vectors" that allow us to build any quantum state vector.

If the possible observable values are *quantized* (i.e. only come in discrete units), then a general wave function is constructed from a linear combination of the eigenfunctions:

\[\psi\left(x\right)=C_1~\psi_1\left(x\right)+C_2~\psi_2\left(x\right)+\dots = \sum \limits_{\text{all}~i}~C_i\psi_i\left(x\right)\]

If, on the other hand, the possible observable values lie on a continuum, then the linear combination requires an integral. We have actually seen this already! We know that a plane wave solution to the free particle Schrödinger equation (\(e^{ikx}\)) has a definite momentum (\(p=\hbar k\)). Each eigenstate of momentum has its own value of \(k\), and these lie on a continuum for the free particle. So a general wave function is a linear combination (integral) summed over all of these eigenfunctions, with the coefficients of each eigenfunction expressed as "\(A\left(k\right)\)", giving us Equation 4.5.6.
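The continuum case can be sketched numerically as well. Assuming a Gaussian amplitude \(A(k)\) (a hypothetical choice, with \(\hbar=1\)), summing the plane waves \(e^{ikx}\) weighted by \(A(k)\) over a fine grid of \(k\) values produces a localized wave packet, the continuum analog of the discrete sum above:

```python
import numpy as np

# Assumed Gaussian amplitude A(k) peaked at k0; the k-integral is done
# as a Riemann sum over a fine grid.
x = np.linspace(-20, 20, 1024)
k = np.linspace(-5, 5, 2001)
dk = k[1] - k[0]
k0, sigma = 1.5, 0.5
A = np.exp(-(k - k0) ** 2 / (4 * sigma ** 2))

# psi(x) = integral of A(k) e^{ikx} dk
psi = (A[None, :] * np.exp(1j * np.outer(x, k))).sum(axis=1) * dk

prob = np.abs(psi) ** 2
print(x[np.argmax(prob)])   # the packet is localized near x = 0
```

The broader \(A(k)\) is in momentum space, the narrower the resulting packet in position space, in keeping with the uncertainty principle.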

## Eigenstates are "Orthogonal"

Our description of eigenstates as the "unit vectors" of quantum states does not end with being able to construct general vectors. They also satisfy an orthogonality condition, like the one we discussed in Section 1.6:

\[\int\limits_{-\infty}^{+\infty}\psi_i^*\left(x\right)\psi_j\left(x\right)dx=\left\{\begin{array}{ll} 1 & i=j \\ 0 & i\ne j \end{array}\right.\]

The value of 1 comes about when \(i=j\) because the wave function is normalized.
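This orthonormality is easy to verify numerically. The sketch below (my own illustration, using the infinite-well eigenfunctions \(\psi_n(x)=\sqrt{2}\sin(n\pi x)\) on \(0\le x\le 1\)) evaluates the integral for both the \(i=j\) and \(i\ne j\) cases:

```python
import numpy as np

# Assumed infinite-well eigenfunctions psi_n = sqrt(2) sin(n pi x), 0 <= x <= 1.
x = np.linspace(0, 1, 5000)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2) * np.sin(n * np.pi * x)

same = np.sum(psi(2) * psi(2)) * dx   # i = j  -> 1 (normalization)
diff = np.sum(psi(1) * psi(2)) * dx   # i != j -> 0 (orthogonality)
print(round(same, 4), round(diff, 4))
```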

Using this fact, we can show that the coefficients in the linear combination are in fact probability amplitudes associated with measuring each of the respective eigenvalues when an observation is made on a general quantum state:

\[1=\int\limits_{-\infty}^{+\infty}\psi^*\left(x\right)\psi\left(x\right)dx=\int\limits_{-\infty}^{+\infty}\left[C_1~\psi_1\left(x\right)+C_2~\psi_2\left(x\right)+\dots\right]^*\left[C_1~\psi_1\left(x\right)+C_2~\psi_2\left(x\right)+\dots\right]dx\]

The integrals of the cross-terms all vanish thanks to the orthogonality condition, leaving:

\[1=C_1^*C_1+C_2^*C_2+\dots=\left|C_1\right|^2+\left|C_2\right|^2+\dots\]

The quantity \(\left|C_i\right|^2\) is the probability that a measurement of the observable on the general state results in the eigenvalue of the \(i^{th}\) eigenstate.
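The following sketch (my own illustration, with assumed coefficients) builds a two-state superposition, recovers each \(C_i\) by projecting onto the corresponding eigenfunction via the orthogonality integral, and confirms that the \(\left|C_i\right|^2\) sum to 1:

```python
import numpy as np

# Assumed two-state superposition of infinite-well eigenfunctions.
x = np.linspace(0, 1, 5000)
dx = x[1] - x[0]
psi1 = np.sqrt(2) * np.sin(np.pi * x)
psi2 = np.sqrt(2) * np.sin(2 * np.pi * x)

C1, C2 = 0.6, 0.8                       # chosen so |C1|^2 + |C2|^2 = 1
psi = C1 * psi1 + C2 * psi2             # normalized general state

# Project back onto each eigenfunction to recover the amplitudes.
c1 = np.sum(psi1 * psi) * dx
c2 = np.sum(psi2 * psi) * dx
print(c1**2, c2**2, c1**2 + c2**2)      # the probabilities sum to 1
```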

Now we can also show why our "expectation machine" works:

\[\begin{array}{ll}\left<\omega\right> & = \int\limits_{-\infty}^{+\infty}\psi^*\left(x\right)\left[\widehat\Omega\psi\left(x\right)\right]dx \\ & =\int\limits_{-\infty}^{+\infty}\left[C_1^*~\psi_1^*\left(x\right)+C_2^*~\psi_2^*\left(x\right)+\dots\right]\left[C_1~\widehat\Omega\psi_1\left(x\right)+C_2~\widehat\Omega\psi_2\left(x\right)+\dots\right]dx \\ & = \int\limits_{-\infty}^{+\infty}\left[C_1^*~\psi_1^*\left(x\right)+C_2^*~\psi_2^*\left(x\right)+\dots\right]\left[C_1~\omega_1~\psi_1\left(x\right)+C_2~\omega_2~\psi_2\left(x\right)+\dots\right]dx \\ & =\left|C_1\right|^2\omega_1 + \left|C_2\right|^2\omega_2 +\dots \\ &= P_1\omega_1 + P_2\omega_2+\dots\end{array}\]

The "altered" state \(\Psi_{new}=\widehat\Omega\Psi\) used in the expectation machine is just what comes from weighting every eigenstate in the original state's "recipe" by the amount of its associated eigenvalue.
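Putting the pieces together, a numerical sketch (my own illustration: an assumed two-state infinite-well superposition, units \(\hbar=m=L=1\)) shows the integral form of \(\left<\omega\right>\) agreeing with the weighted average \(P_1\omega_1+P_2\omega_2\):

```python
import numpy as np

# Assumed superposition of infinite-well states with E_n = (n pi)^2 / 2.
x = np.linspace(0, 1, 5000)
dx = x[1] - x[0]
psi1 = np.sqrt(2) * np.sin(np.pi * x)
psi2 = np.sqrt(2) * np.sin(2 * np.pi * x)
C1, C2 = 0.6, 0.8
psi = C1 * psi1 + C2 * psi2

# H = -(1/2) d^2/dx^2 applied via a finite difference.
Hpsi = np.zeros_like(psi)
Hpsi[1:-1] = -0.5 * (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2

machine = np.sum(psi * Hpsi) * dx                           # expectation machine
weighted = C1**2 * np.pi**2 / 2 + C2**2 * 2 * np.pi**2      # P1*E1 + P2*E2
print(machine, weighted)                                    # agree closely
```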

## Simple Examples

In the case of a free particle plane wave moving in the \(+x\)-direction, the full wave function is:

\[\Psi\left(x,t\right)=Ae^{i\left(kx-\omega t\right)}\]

We fully expect this to be an eigenstate of momentum, kinetic energy, and total energy, and it is:

\[\begin{array}{l} \widehat p\Psi\left(x,t\right)=-i\hbar\frac{\partial}{\partial x}Ae^{i\left(kx-\omega t\right)}=\hbar k \Psi\left(x,t\right) \\ \widehat {KE}\Psi\left(x,t\right)=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}Ae^{i\left(kx-\omega t\right)}=\frac{\hbar^2 k^2}{2m}\Psi\left(x,t\right) \\ \widehat E\Psi\left(x,t\right)=i\hbar\frac{\partial}{\partial t}Ae^{i\left(kx-\omega t\right)}=\hbar \omega \Psi\left(x,t\right) \end{array}\]
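These three eigenvalue equations can be verified symbolically. The sketch below (my own illustration using the sympy library) applies each operator to the plane wave and divides out \(\Psi\), recovering the three eigenvalues:

```python
import sympy as sp

# Plane wave Psi = A e^{i(kx - wt)}; A, k, omega, hbar, m are symbols.
x, t, A, k, w, hbar, m = sp.symbols('x t A k omega hbar m', positive=True)
Psi = A * sp.exp(sp.I * (k * x - w * t))

p_Psi  = -sp.I * hbar * sp.diff(Psi, x)            # momentum operator
KE_Psi = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2)   # kinetic-energy operator
E_Psi  = sp.I * hbar * sp.diff(Psi, t)             # total-energy operator

print(sp.simplify(p_Psi / Psi))    # hbar*k
print(sp.simplify(KE_Psi / Psi))   # hbar**2*k**2/(2*m)
print(sp.simplify(E_Psi / Psi))    # hbar*omega
```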

## Simultaneous Eigenstates

In the case of a free particle plane wave, we see that it is an eigenstate of many observables at the same time. We already know that an eigenstate of momentum cannot simultaneously be an eigenstate of position due to the uncertainty principle, so being an eigenstate of two different observables at the same time is certainly not guaranteed. The deciding factor is something we mentioned at the end of Section 5.3: if the operators associated with two observables commute with each other – that is, if the altered quantum state that results from the consecutive actions of the operators is the same regardless of the order in which they are applied – then these two observables can share eigenstates.
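A small symbolic sketch (my own illustration) shows the criterion in action: the momentum and kinetic-energy operators commute when applied to an arbitrary function, consistent with their shared plane-wave eigenstates, while position and momentum do not:

```python
import sympy as sp

# Apply operators in both orders to an arbitrary test function f(x).
x, hbar, m = sp.symbols('x hbar m', positive=True)
f = sp.Function('f')(x)

p  = lambda g: -sp.I * hbar * sp.diff(g, x)        # momentum operator
KE = lambda g: -hbar**2 / (2 * m) * sp.diff(g, x, 2)  # kinetic-energy operator
X  = lambda g: x * g                               # position operator

print(sp.simplify(p(KE(f)) - KE(p(f))))   # 0: p and KE commute
print(sp.simplify(X(p(f)) - p(X(f))))     # i*hbar*f(x): x and p do not
```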