Physics LibreTexts

2.2: Dynamics of Quantum State Vectors

  Time Dependence

    We discussed the state vector \(\left|\;\Psi\;\right>\) in terms of its infinite number of components in Hilbert space – one for every position on the \(x\)-axis – but so far we have not said anything about how or whether this state vector changes with time. Given that the probability of finding the particle at a given position should be able to change with time, it only makes sense that the quantum state vector would be dynamic.

    In this treatment of quantum mechanics, we do not incorporate special relativity, so time and space are not treated on an equal footing, which means that there is no "time component" of the quantum state vector in the Hilbert space, and we treat it as a vector that changes with time. The component of this vector parallel to the unit vector \(\left|\;x\;\right>\) (i.e. the wave function) is therefore also a function of time:

    \[\left<\;x\;|\;\Psi\left(t\right)\;\right> = \psi\left(x,t\right) \]

    Similarly for momentum space:

    \[\left<\;k\;|\;\Psi\left(t\right)\;\right> = \phi\left(k,t\right) \]

    The question that now arises is, "What determines how the quantum state vector evolves over time?" While we will not be able to derive an exact answer to this, we can certainly motivate it. We know that macroscopic objects change momentum because of external influences – forces. Forces can be expressed as potential energies, so if we know what the full potential energy function \(V\left(x\right)\) looks like, then we would expect it to have some say over the time evolution of the state vector. How exactly it does this is the subject of the next couple of sections.

    Vector Space Operators

    If we wish to change a regular vector, we have the following two options (or a combination of both):

    shrink or expand its length – This is most easily accomplished by simply multiplying it by a scalar, but it is not the only way. Take, for example, the vector \(\widehat i\) in two dimensions, represented by the usual matrix. Its length can be changed by multiplying it by a square matrix that is clearly not a simple scalar:

    \[ \left[ \begin{array}{*{20}{c}} A & B \\ 0 & C \end{array}\right] \left[ \begin{array}{*{20}{c}} 1 \\ 0 \end{array}\right] = A \left[ \begin{array}{*{20}{c}} 1 \\ 0 \end{array}\right] \]

    Interestingly, this vector is expanded by an amount \(A\) by this matrix regardless of the values of \(B\) and \(C\), which means there exist infinitely many such square matrices that all have the same effect on the column matrix representing this unit vector. However, if we multiply this same square matrix by the column matrix representing the very same unit vector in a different basis, then the same result (just expanding the length) doesn't occur. If we want a square matrix that behaves this way (shrinks or expands a unit vector in every basis by the same amount), then it needs to be a scalar multiplied by the unit matrix:

    \[ A\;I \leftrightarrow \left[ \begin{array}{*{20}{c}} A & 0 \\ 0 & A \end{array}\right] \]

    To distinguish this matrix from just the scalar \(A\) (which has the same effect when it multiplies a vector), we call it a c-number.
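    The basis-dependence claim above is easy to check numerically. Below is a quick NumPy sketch; the values of \(A\), \(B\), \(C\) and the 30° basis rotation are arbitrary choices for illustration.

```python
import numpy as np

A, B, C = 2.0, 5.0, -3.0
M = np.array([[A, B],
              [0.0, C]])            # the upper-triangular matrix from the text

# In the standard basis, M merely scales i-hat by A, whatever B and C are:
e1 = np.array([1.0, 0.0])
print(M @ e1)                       # [2. 0.], i.e. A * e1

# Components of the SAME unit vector i-hat in a basis rotated by 30 degrees:
th = np.pi / 6
e1_rot = np.array([np.cos(th), -np.sin(th)])

# Applying the same matrix M to these components does NOT just scale them;
# the result is not parallel to e1_rot (its 2D cross product is nonzero):
w = M @ e1_rot
print(w[0] * e1_rot[1] - w[1] * e1_rot[0])   # nonzero

# The c-number A*I, by contrast, scales the components in every basis:
cnum = A * np.eye(2)
print(cnum @ e1_rot)                # A * e1_rot
```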

    change its direction (rotate it) – The rotation of a vector can be represented in matrix form using the rotation matrix. To rotate a vector counterclockwise (from the \(+x\)-axis toward the \(+y\)-axis) by an angle \(\theta\), use the square matrix:

    \[ R\left(\theta\right) \leftrightarrow \left[ \begin{array}{*{20}{c}} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{array}\right] \]

    It's easy to confirm that rotating \(\widehat i\) by \(90^\circ\) results in \(\widehat j\). It's also left as an exercise to show that this rotation operation is precisely that – it only rotates a vector; it doesn't change its length at all.
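    Both checks can be sketched in a few lines of NumPy (the test vector and angle below are arbitrary):

```python
import numpy as np

def R(theta):
    """2D counterclockwise rotation matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Rotating i-hat by 90 degrees yields j-hat:
print(R(np.pi / 2) @ i_hat)          # ~ [0, 1]

# Rotations preserve length, for any angle and any vector:
v = np.array([3.0, -4.0])            # |v| = 5
print(np.linalg.norm(R(1.234) @ v))  # 5.0
```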

    While we have expressed these as matrix multiplications, it is important to keep in mind that matrices are only specific representations of these actions. The fundamental objects that have these effects on the abstract vectors themselves are called operators. So for example, the \(A\;I\) above is a c-number operator, and \(R\left(\theta\right)\) above is a rotation operator. Operators are what cause vectors to change, and they can be represented in many ways, depending upon the basis we want to work in.

    It should also be noted that if we perform two operations in succession on a vector, in general the order of these operations matters. In some special cases operators may commute, but as with matrices, this is not generally true. If one or both of the operators is a c-number, then of course the operators will commute, though being a c-number is not a necessary condition for commutation. Indeed, the 'c' in "c-number" stands for "commuting."
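    A concrete matrix-representation example of non-commuting operations – a \(90^\circ\) rotation versus a stretch along \(x\) (the particular matrices are illustrative choices):

```python
import numpy as np

Rot = np.array([[0.0, -1.0],
                [1.0,  0.0]])        # rotation by 90 degrees
S = np.array([[2.0, 0.0],
              [0.0, 1.0]])           # stretch the x-component by 2

# The order matters: stretch-then-rotate != rotate-then-stretch
print(Rot @ S)                       # [[0. -1.] [2. 0.]]
print(S @ Rot)                       # [[0. -2.] [1. 0.]]

# A c-number (scalar times the unit matrix) commutes with everything:
cnum = 3.0 * np.eye(2)
print(np.allclose(cnum @ Rot, Rot @ cnum))   # True
```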

    Hilbert Space Operators

    For vectors in Hilbert space, the operators are not as simple as discrete square matrices, but they serve the same function – they change vectors into other vectors. The difference is that they have to be able to change an infinite continuum of components. The components of \(\left|\;\Psi\;\right>\) in the position basis \(\left|\;x\;\right>\) are the values of the wave function \(\psi\left(x\right)\). Changing all of these values at the same time means that the wave function is changed into another function. This can be done in a number of ways. The first is simple multiplication: if we multiply the function \(\psi\left(x\right)\) by another function \(\Omega\left(x\right)\), then a whole new function is the result. A second way to change a function into another function is to take one or more derivatives. And this is true in any basis, most notably in the position and momentum bases:

    \[ \left|\;\widetilde\Psi\;\right> = \Omega \left|\;\Psi\;\right> \;\;\;\;\; \leftrightarrow \;\;\;\;\; \widetilde\psi\left(x\right) = \left\{ \begin{array}{*{20}{c}} \Omega\left(x\right)\psi\left(x\right) \\ or \\ \dfrac{d^n}{dx^n}\psi\left(x\right) \end{array} \right. \;\;,\;\;\;\;\; \widetilde\phi\left(k\right) = \left\{ \begin{array}{*{20}{c}} \Omega\left(k\right)\phi\left(k\right) \\ or \\ \dfrac{d^n}{dk^n}\phi\left(k\right) \end{array} \right. \]

    Of course, combinations of these also work, such as the sum of a function and a derivative.
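    To make the two kinds of operators concrete, here is a small numerical sketch: a Gaussian sample wave function, acted on by a multiplicative operator \(\Omega(x) = x\) and by \(d/dx\) (approximated by finite differences on a grid – the grid and the choice of \(\psi\) are illustrative assumptions). For this particular Gaussian the two results happen to agree up to a sign, since \(\frac{d}{dx}e^{-x^2/2} = -x\,e^{-x^2/2}\).

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)            # a sample (unnormalized) wave function

# Multiplicative operator: Omega(x) = x acting on psi gives a new function
psi_mult = x * psi

# Derivative operator: d/dx acting on psi, via central finite differences
psi_deriv = np.gradient(psi, dx)

# For this Gaussian, d/dx psi = -x psi, so the two should agree up to sign
# (within discretization error):
print(np.max(np.abs(psi_deriv + psi_mult)))  # small
```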

    Time Evolution of a Quantum State

    As was stated earlier, the time evolution of a quantum state vector must be determined by something we can define physically. We know from the preceding discussion that we effect change in a Hilbert space vector by operating on it with operators. We won't drag this out any further, since what follows really amounts to a postulate. We define the Hamiltonian operator as the operator version of the total energy of the particle, which is the sum of the kinetic energy and potential energy operators:

    \[ H \equiv  KE + PE \]

    The time evolution of a quantum state is defined by how the Hamiltonian affects the state vector, which is as follows:

    \[ H \left|\;\Psi\;\right> = i\hbar \dfrac{\partial}{\partial t}\left|\;\Psi\;\right> \]

    This is known as Schrödinger's equation.
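    As a sanity check, one can verify symbolically that a free-particle plane wave satisfies this equation. This assumes the standard position-basis form of the kinetic energy operator, \(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\) (developed in the following sections), with \(V = 0\), and the dispersion relation \(\omega = \hbar k^2/2m\):

```python
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)

# Free particle (V = 0): a plane wave with dispersion omega = hbar k^2 / (2m)
omega = hbar * k**2 / (2 * m)
psi = sp.exp(sp.I * (k * x - omega * t))

# H psi (kinetic term only, since V = 0) vs. i hbar d(psi)/dt:
lhs = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)
rhs = sp.I * hbar * sp.diff(psi, t)

print(sp.simplify(lhs - rhs))   # 0 -> the plane wave solves the equation
```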