\(\require{cancel}\)

2.5: Observables

    Eigenstates

    Ultimately the physics we study must lead to measurements of some kind. We have said that the quantum state of a particle contains all the information about that particle, so the question becomes how to link that abstract mathematical concept to observation.

    We have already discussed some special quantum states that are naturally associated with observables. The position-space unit vectors are each associated with a single point in space, so those states are linked to the observation of position. Similarly, the momentum-space unit vectors are associated with specific momenta, and they are linked with the observation of momentum. And most recently, we have seen unit vectors in energy space, which are linked with the observation of total energy.

    We have also talked about operators, both in the Hilbert space, and the position or momentum space versions of these, which are also clearly tied to specific observable quantities. It turns out that these two things – the unit vectors in the various bases, and the operators of the various quantities linked to those bases – satisfy a very special relation. When an operator of some observable acts on the basis vector of that same observable, the basis vector is not rotated – it is only scaled. Put another way:

    \[ \Omega\left|\;\omega\;\right> = \omega\left|\;\omega\;\right> \]

    Here \(\Omega\) is an operator, which acts on a state that is associated with the same observable as the operator. The quantity \(\omega\) which multiplies the ket on the right-hand side is just a scalar value (not an operator), which effectively changes the length of the state vector.

    To get some idea of what is going on here, we can go back to our finite-dimensional vector space example represented by matrices. Consider the following matrix operator and ket represented with matrices in two dimensions:

    \[ \Omega \leftrightarrow \left[\begin{array}{cc} \omega_1 & 0 \\ 0 & \omega_2 \end{array} \right]\;,\;\;\;\;\; \left|\;1\;\right> \leftrightarrow \left[ \begin{array}{c} 1 \\ 0\end{array} \right]\;,\;\;\;\;\; \left|\;2\;\right> \leftrightarrow \left[ \begin{array}{c} 0 \\ 1\end{array} \right] \]

    Clearly \(\Omega\) is an operator (it is not a c-number), because it gives a different result for each of the two vectors. But in both cases, the result of the matrix multiplication is a number multiplied by the vector:

    \[ \Omega\left|\;1\;\right> = \omega_1\left|\;1\;\right>\;,\;\;\;\;\; \Omega\left|\;2\;\right> = \omega_2\left|\;2\;\right> \]
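    For readers who like to verify such matrix relations numerically, here is a minimal sketch (the values chosen for \(\omega_1\) and \(\omega_2\) are arbitrary) showing that this diagonal operator only rescales each basis vector:

        import numpy as np

        # arbitrary illustrative values for the two eigenvalues
        w1, w2 = 2.0, 5.0

        Omega = np.array([[w1, 0.0],
                          [0.0, w2]])
        ket1 = np.array([1.0, 0.0])
        ket2 = np.array([0.0, 1.0])

        # acting with the operator merely rescales each basis vector by its own eigenvalue
        print(Omega @ ket1)   # [2. 0.]  =  w1 * ket1
        print(Omega @ ket2)   # [0. 5.]  =  w2 * ket2

    In both cases the product is just a number times the original column vector, exactly as the eigenvalue equations above state.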

    The equations above give the abstract Hilbert space description of what is happening, but we can also look at it in (say) position space:

    \[ \widehat\Omega_x\;\psi_{\omega}\left(x\right) = \omega\;\psi_{\omega}\left(x\right) \]

    Whether expressed as a Hilbert space operation, or in a specific basis like position, momentum, or energy, we call this an eigenvalue equation. The prefix "eigen" is the German word for "own," which expresses the fact that these operators are tied to the specific states that "own" them. Naturally an operator can act on any state, but it will only have this property when acting on the eigenstate of that observable. The scalar quantity that is projected out of the state upon the operation is called the eigenvalue.

    Let's look at some examples of these from what we have discussed so far. The first that stands out is the case of energy. The stationary states are those with specific energies, and the Hamiltonian operator projects out those energies, so we will cease calling the \(\left|\;E\;\right>\)'s "unit vectors in energy space," and now refer to them as "energy eigenstates." We have already seen the eigenvalue equation for these states; it is the stationary-state Schrödinger equation:

    \[ \begin{array}{ll} H \left|\;E\;\right> = E\;\left|\;E\;\right> & \text{(Hilbert space version)} \\ \widehat H \psi_E\left(x\right) = \dfrac{\widehat p^2}{2m} \psi_E\left(x\right) + \widehat V\left(x\right) \psi_E\left(x\right) = E \;\psi_E\left(x\right) & \text{(position-space version)} \end{array} \]

    We also have examples of this for position and momentum. These aren't very interesting when expressed in Hilbert space, but in position space they are far more illuminating. The wave function for a particle at a particular point in space (which we will call \(x'\)) is the delta function, and the operator for position is just the variable \(x\), so:

    \[ \psi_{position=x'}\left(x\right) = \left<\;x\;|\;x'\;\right> = \delta\left(x-x'\right) \;\;\; \Rightarrow \;\;\; \widehat x \; \psi_{position\;eigenstate}\left(x\right) = x\;\delta\left(x-x'\right) = x' \delta\left(x-x'\right) \]

    The equality comes about because of the "filtering" effect the delta function has when multiplied by any function of \(x\) (see Equation 2.1.15), and the eigenvalue for this is the actual position, \(x'\).

    We know the momentum eigenstate expressed in position space, and we know the momentum operator in position space, so:

    \[ \psi_{momentum=\hbar k}\left(x\right) = \left<\;x\;|\;k\;\right> \;\;\; \Rightarrow \;\;\; \widehat p \; \psi_{momentum\;eigenstate}\left(x\right) = -i\hbar \dfrac{d}{dx}\left[ \dfrac{1}{\sqrt{2\pi}} e^{ikx} \right] = \hbar k \left[ \dfrac{1}{\sqrt{2\pi}} e^{ikx}\right] \]

    Compatible Observables

    If an operator corresponding to a specific observable does not produce an eigenvalue equation with a given state, then that state is not an eigenstate of that observable. Going back to our example of two-dimensional matrices above, suppose we apply the same operator to a different pair of unit vectors:

    \[ \Omega \leftrightarrow \left[\begin{array}{cc} \omega_1 & 0 \\ 0 & \omega_2 \end{array} \right]\;,\;\;\;\;\; \left|\;1'\;\right> \leftrightarrow \dfrac{1}{\sqrt{2}}\left[ \begin{array}{c} +1 \\ +1\end{array} \right]\;,\;\;\;\;\; \left|\;2'\;\right> \leftrightarrow \dfrac{1}{\sqrt{2}}\left[ \begin{array}{c} +1 \\ -1\end{array} \right] \]

    Notice that in this case, the operator acting on the state vectors does not result in an eigenvalue equation – the resulting vector is not simply a product of a number and the original vector. The action of the operator actually rotates the vector (the components have a different ratio), rather than just changing its length. On the other hand, these states do have their own operator that will result in an eigenvalue equation, namely:

    \[ \Omega ' \leftrightarrow \frac{1}{2} \left[\begin{array}{cc} \omega_1+\omega_2 & \omega_1-\omega_2 \\ \omega_1-\omega_2 & \omega_1+\omega_2 \end{array} \right] \]

    Multiplying this matrix by the column matrices for \(\left|\;1'\;\right>\) and \(\left|\;2'\;\right>\) gives eigenvalue equations with the eigenvalues \(\omega_1\) and \(\omega_2\), respectively.
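    Carrying out the multiplication explicitly for \(\left|\;1'\;\right>\) shows both claims at once:

    \[ \Omega\left|\;1'\;\right> \leftrightarrow \dfrac{1}{\sqrt{2}}\left[ \begin{array}{c} \omega_1 \\ \omega_2 \end{array} \right]\;,\;\;\;\;\; \Omega '\left|\;1'\;\right> \leftrightarrow \dfrac{1}{2\sqrt{2}}\left[ \begin{array}{c} \left(\omega_1+\omega_2\right)+\left(\omega_1-\omega_2\right) \\ \left(\omega_1-\omega_2\right)+\left(\omega_1+\omega_2\right) \end{array} \right] = \omega_1\;\dfrac{1}{\sqrt{2}}\left[ \begin{array}{c} +1 \\ +1 \end{array} \right] \]

    The first result is not a multiple of \(\left|\;1'\;\right>\) (unless \(\omega_1=\omega_2\)), while the second is exactly \(\omega_1\left|\;1'\;\right>\); the same steps applied to \(\left|\;2'\;\right>\) yield the eigenvalue \(\omega_2\).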

    One must not get the false impression that there is only one operator that works for any given eigenstate. Indeed, an eigenstate of an observable will yield an eigenvalue equation not only with the operator associated with that observable, but also with any function of that observable. Let's show this for the simple case of the square of an operator. Squaring an operator simply results in a new operator, which can be described as, "operate on the state once, creating a new state, then operate on that new state again with the same operator." If the first operation results in a number multiplying the original state, then since the number has no effect on the second operation, it is clear that the second operation pulls out the same number again:

    \[ \Omega^2 \left|\;\omega\;\right> = \Omega \left[\;\Omega \left|\;\omega\;\right> \right] = \Omega \left[\;\omega \left|\;\omega\;\right> \right] =\omega \;\Omega \left|\;\omega\;\right> = \omega^2 \left|\;\omega\;\right> \]

    So the state \(\left|\;\omega\;\right>\), which is an eigenstate of the observable represented by the operator \(\Omega\), is also an eigenstate of the new operator \(\Omega^2\), and its eigenvalue is \(\omega^2\).

    The best physical example of this is the kinetic energy operator. We can construct it from the momentum operator (plus some scalars), which means that a particle that is in an eigenstate of momentum is also in an eigenstate of kinetic energy:

    \[ \widehat {KE} \left[ \dfrac{1}{\sqrt{2\pi}} e^{ikx}\right] = -\dfrac{\hbar^2}{2m} \dfrac{d^2}{dx^2} \left[ \dfrac{1}{\sqrt{2\pi}} e^{ikx}\right] = \dfrac{\hbar^2 k^2}{2m} \left[ \dfrac{1}{\sqrt{2\pi}} e^{ikx}\right] \]

    Notice that the converse is not true: An eigenstate of kinetic energy is not necessarily an eigenstate of momentum. This is because the square root function is double-valued (positive or negative). So for example, the wave function below is an eigenstate of kinetic energy, but not of momentum:

    \[ \psi\left(x\right) = e^{ikx} + e^{-ikx} \]

    The kinetic energy operator acting on this state gives an eigenvalue equation, but the momentum operator does not - it changes the state in a way that is not simply multiplying it by a number.
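    Applying the two operators explicitly makes the contrast clear:

    \[ \widehat{KE}\;\psi\left(x\right) = \dfrac{\hbar^2 k^2}{2m}\left(e^{ikx} + e^{-ikx}\right)\;,\;\;\;\;\; \widehat p\;\psi\left(x\right) = -i\hbar \dfrac{d}{dx}\left(e^{ikx} + e^{-ikx}\right) = \hbar k\left(e^{ikx} - e^{-ikx}\right) \]

    The kinetic energy operator returns a number times \(\psi\left(x\right)\), while the momentum operator returns a different combination of the two exponentials, which is not proportional to \(\psi\left(x\right)\).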

    Functions of operators are not the only things that can share eigenstates with other operators. There can exist distinct operators that can do this as well. The observables associated with such operators are said to be compatible. In classical physics, all quantities are compatible – we can measure any two quantities we like, and the measurements don't interfere with each other. But in quantum mechanics, where observables are represented by operators, this is no longer the case.

    The standard test for compatibility of observables is to see if their operators commute with each other. This means the following: Suppose you act on a state with one operator (which changes the state in some way – it doesn't have to be an eigenstate), and then act on the altered state with a different operator to get yet another new state. Now repeat the process of two operations on the same starting state, but this time reverse the order. If the final state ends up being the same, then the operators commute, and their observables are compatible. In other words, if the order in which the two operations are performed is irrelevant to the outcome, then the observables are compatible. The two-dimensional matrices representing the operators \(\Omega\) and \(\Omega '\) above do not commute with each other, so they would not represent compatible observables.
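    For the two-dimensional matrices above, this failure to commute can be checked directly:

    \[ \Omega\,\Omega ' - \Omega '\,\Omega \leftrightarrow \dfrac{1}{2}\left[\begin{array}{cc} 0 & \left(\omega_1-\omega_2\right)^2 \\ -\left(\omega_1-\omega_2\right)^2 & 0 \end{array} \right] \]

    which vanishes only in the trivial case \(\omega_1=\omega_2\), where \(\Omega\) is just a multiple of the identity.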

    Where quantum mechanics gets interesting is for the cases of incompatible observables. We know that the uncertainty principle tells us that there are problems with measuring position and momentum, so we might guess that those observables are incompatible. Let's check this hypothesis by seeing if they commute. We'll put them into the position basis, and have them act on a general wave function:

    \[ \begin{array}{ll} \text{momentum-then-position:} & \widehat x \; \widehat p \; \psi\left(x\right) = x\left[-i\hbar \dfrac{d}{dx}\psi\left(x\right)\right] = -i\hbar\; x\;\psi '\left(x\right) \\ \text{position-then-momentum:} & \widehat p \; \widehat x \; \psi\left(x\right) = -i\hbar \dfrac{d}{dx}\left[x\;\psi\left(x\right)\right] = -i\hbar \left[\psi\left(x\right) + x\,\psi '\left(x\right)\right] \end{array} \]

    We see that in fact we get different results when the operators act in a different order, proving that momentum and position are incompatible observables. A nice shortcut for determining this is something called the commutator. This is a process that turns two operators into a single one. Suppose that \(\Omega\) and \(\Lambda\) are operators. Construct a new operator \(\Gamma\) as follows:

    \[ \Gamma = \Omega \Lambda - \Lambda \Omega \equiv \left[\;\Omega\;,\;\Lambda \;\right] \]

    The last equality is just the standard shorthand for the commutator. The commutator above is written in terms of the Hilbert space operators, but it works the same way for operators expressed in a specific basis (such as \(\widehat \Omega_x\) and \(\widehat \Lambda_x\)), provided they are expressed in the same basis.

    So we can say that two observables are compatible (i.e. they can simultaneously be measured with arbitrary precision) if the operators associated with those observables commute – i.e. the operator formed from their commutator (\(\Gamma\) above) is just zero. The non-zero commutator of the incompatible position and momentum operators can be extracted from Equation 2.5.13:

    \[ \left[ \;\widehat x \;, \;\widehat p\;\right] = i\hbar \]
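    Explicitly, this follows by subtracting the two results of Equation 2.5.13 for an arbitrary wave function:

    \[ \left(\widehat x\,\widehat p - \widehat p\,\widehat x\right)\psi\left(x\right) = -i\hbar\,x\,\psi '\left(x\right) + i\hbar\left[\psi\left(x\right) + x\,\psi '\left(x\right)\right] = i\hbar\,\psi\left(x\right) \]

    Since this holds for every \(\psi\left(x\right)\), the commutator itself is simply the constant \(i\hbar\).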

    While we have shown this is true for the position basis, we can switch over to the momentum basis and show it is true there as well. This should be clear from the fact that the result of the commutator is simply a constant, with not a whiff of basis dependence. We therefore elevate this commutator to the status of holding for position and momentum operators in Hilbert space (which we distinguish by using capital letters):

    \[ \left[ \;X \;, \;P\;\right] = i\hbar \]
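    For those who like to check such operator algebra by computer, here is a minimal symbolic sketch (using the sympy library, with \(\psi\) left as an arbitrary function) that applies the position-basis operators in both orders and confirms that the commutator reduces to multiplication by \(i\hbar\):

        import sympy as sp

        x, hbar = sp.symbols('x hbar', real=True)
        psi = sp.Function('psi')(x)     # an arbitrary wave function

        # position-basis operators: x-hat multiplies by x, p-hat is -i*hbar*(d/dx)
        def x_op(f):
            return x * f

        def p_op(f):
            return -sp.I * hbar * sp.diff(f, x)

        # the commutator [x, p] acting on psi
        result = sp.simplify(x_op(p_op(psi)) - p_op(x_op(psi)))
        print(result)                   # I*hbar*psi(x)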

    Expectation Values

    Up to now, our computation of average values has followed what we first described in Equation 1.3.4. The question before us now is how to compute an expectation value of an observable represented by an operator. The procedure can be described in Hilbert space quite simply:

    \[ \left<\omega\right> = \left<\;\Psi\;|\;\Omega\;|\;\Psi\;\right> \]

    This should be viewed as a two-step process: The operator acts on the ket, presumably altering it to a new vector. Then the inner product of this new vector with the old vector is taken:

    \[ \left<\omega\right> = \left<\; \Psi \; \right| \cdot \Big[ \;\Omega \; \left| \; \Psi \; \right> \Big] =  \left<\; \Psi \; | \; \widetilde \Psi \; \right> \]

    In the position basis, this becomes:

    \[ \left<\omega\right> = \int \limits_{-\infty}^{+\infty} \psi^*\left(x\right) \left[\widehat\Omega_x \;\psi\left(x\right)\right] dx \]

    If the particle happens to be in an eigenstate of the observable, then there is only one possible outcome for the measurement, and there is not much "averaging" to do – the expectation value just equals the eigenvalue:

    \[ \left<\omega\right> = \left<\; \omega \; \right| \cdot \Big[ \;\Omega \; \left| \; \omega \; \right> \Big] =  \left<\; \omega \; \right| \cdot \Big[ \;\omega \; \left| \; \omega \; \right> \Big] = \omega\;\cancelto{1}{\left<\;\omega\;|\;\omega\right>} = \omega \]

    If the operator \(\widehat\Omega_x\) happens to be a function of the position (i.e. it does not include derivatives), then the expectation value looks just like it did in Equation 1.3.4:

    \[ \left<\omega\right> = \int \limits_{-\infty}^{+\infty} \psi^*\left(x\right) \Big[\Omega\left(x\right) \psi\left(x\right)\Big] dx = \int \limits_{-\infty}^{+\infty} \Omega\left(x\right) \left|\psi\left(x\right)\right|^2 dx = \int \limits_{-\infty}^{+\infty} \Omega\left(x\right) P\left(x\right) dx \]

    However, if the operator is not compatible with the basis used for the wave function – meaning that the operator involves one or more derivatives – then the expectation value integral looks different from Equation 1.3.4, because the wave function is changed by the operation, and it cannot simply be combined with its complex conjugate to form a probability density.

    It should be noted that the expectation value, since it can be expressed as it is in Equation 2.5.16, is independent of the choice of basis. This means that the expectation value of momentum can be computed using Equation 2.5.18 with the momentum operator in the position basis, or a Fourier transform can first be performed on \(\psi\left(x\right)\) to get the momentum-space version \(\phi\left(k\right)\), in which case the operator no longer involves a derivative and the expectation value once again takes the form of Equation 1.3.4. Both methods yield the same answer.
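    As an illustration of the first route, here is a minimal symbolic sketch that computes \(\left<p\right>\) in the position basis for an illustrative normalized wave packet (a Gaussian envelope multiplying a plane wave, a state not discussed above but chosen because the integral comes out cleanly):

        import sympy as sp

        x, hbar, k0 = sp.symbols('x hbar k_0', real=True, positive=True)

        # illustrative normalized wave packet: a Gaussian envelope carrying momentum hbar*k0
        psi = sp.pi**sp.Rational(-1, 4) * sp.exp(sp.I * k0 * x - x**2 / 2)

        # <p> in the position basis: integrate psi* times (p-hat acting on psi)
        integrand = sp.simplify(sp.conjugate(psi) * (-sp.I * hbar * sp.diff(psi, x)))
        p_avg = sp.integrate(integrand, (x, -sp.oo, sp.oo))
        print(sp.simplify(p_avg))       # hbar*k_0

    The result, \(\hbar k_0\), is what one would expect for a packet built around the momentum eigenstate with wave number \(k_0\).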

    Uncertainties

    Even in light of the differences shown above between computing expectation values in quantum mechanics and in standard probability theory, the computation of uncertainty works the same way in quantum mechanics. It simply boils down to two expectation value computations: one for the observable (using its operator), and one for the square of the observable (applying its operator twice in succession). The uncertainty is then constructed the usual way:

    \[ \left. \begin{array}{l} \left<\omega\right> = \left<\;\Psi\;|\;\Omega\;|\;\Psi\;\right> \\ \\ \left<\omega^2\right> = \left<\;\Psi\;|\;\Omega^2\;|\;\Psi\;\right> \end{array}  \right\} \;\;\; \Rightarrow \;\;\; \Delta \omega = \sqrt{\left<\omega^2\right>-\left<\omega\right>^2} \]
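    As a simple closing illustration, here is a minimal symbolic sketch of this recipe for the position observable, using an illustrative normalized Gaussian wave function of width \(a\) (again, a state chosen only because its integrals are easy):

        import sympy as sp

        x, a = sp.symbols('x a', real=True, positive=True)

        # illustrative normalized Gaussian wave function of width a (real, so no conjugation needed)
        psi = (sp.pi * a**2)**sp.Rational(-1, 4) * sp.exp(-x**2 / (2 * a**2))
        prob = sp.simplify(psi**2)      # probability density |psi|^2

        # <x> and <x^2>, using the position operator (multiplication by x) once and twice
        x_avg  = sp.integrate(x * prob,    (x, -sp.oo, sp.oo))
        x2_avg = sp.integrate(x**2 * prob, (x, -sp.oo, sp.oo))

        delta_x = sp.sqrt(x2_avg - x_avg**2)
        print(sp.simplify(delta_x))     # sqrt(2)*a/2, i.e. a/sqrt(2)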