Physics LibreTexts

Time-Dependent Solutions: Propagators and Representations

Introduction

We’ve spent most of the course so far concentrating on the eigenstates of the Hamiltonian, states whose time-dependence is merely a changing phase. We did mention much earlier a superposition of two different energy states in an infinite well, resulting in a wave function sloshing backwards and forwards. It’s now time to cast the analysis of time-dependent states into the language of bras, kets and operators. We’ll take a time-independent Hamiltonian H, with a complete set of orthonormalized eigenstates, and as usual
\[ i\hbar\frac{\partial\psi(x,t)}{\partial t}=H\psi(x,t). \]

Or, as we would now write it,
\[ i\hbar\frac{d}{dt}\,|\psi(t)\rangle=H\,|\psi(t)\rangle. \]

Since H is itself time independent, this is very easy to integrate:
\[ |\psi(t)\rangle=e^{-iHt/\hbar}\,|\psi(0)\rangle. \]

The exponential operator that generates the time-dependence is called the propagator, because it describes how the wave propagates from its initial configuration, and is usually denoted by U:
\[ U(t)=e^{-iHt/\hbar},\qquad |\psi(t)\rangle=U(t)\,|\psi(0)\rangle. \]

It’s appropriate to call the propagator U, because it’s a unitary operator:
\[ U^{\dagger}(t)\,U(t)=e^{iH^{\dagger}t/\hbar}\,e^{-iHt/\hbar}=e^{iHt/\hbar}\,e^{-iHt/\hbar}=1. \]

Since H is hermitian, U is unitary. It immediately follows that
\[ \langle\psi(t)|\psi(t)\rangle=\langle\psi(0)|U^{\dagger}(t)\,U(t)|\psi(0)\rangle=\langle\psi(0)|\psi(0)\rangle: \]

 

the norm of the ket vector is conserved, or, translating to wave function language, a wave function correctly normalized to give a total probability of one stays that way.  (This can also be proved from the Schrödinger equation, of course, but this is quicker.)
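As a quick numerical sanity check, one can build the propagator for a small Hermitian matrix Hamiltonian and confirm both unitarity and norm conservation. The 4×4 matrix and state below are random stand-ins (with ℏ = 1), not anything from a particular physical system:

```python
import numpy as np

hbar = 1.0

# Random Hermitian "Hamiltonian" for a hypothetical 4-level system
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2            # Hermitian by construction

def propagator(t):
    # U(t) = exp(-iHt/hbar), built from the eigendecomposition of H
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

U = propagator(1.7)

# Unitarity: U†U = 1, so the norm of any state is conserved
print(np.allclose(U.conj().T @ U, np.eye(4)))       # True

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)
print(np.isclose(np.linalg.norm(U @ psi0), 1.0))    # True
```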

This is all very succinct, but unfortunately the exponential of a second-order differential operator doesn’t sound too easy to work with. Recall, though, that any function of a Hermitian operator has the same set of eigenstates as the original operator. This means that the eigenstates of \(e^{-iHt/\hbar}\) are the same as the eigenstates of H, and if \(H|\psi_n\rangle=E_n|\psi_n\rangle\) then
\[ e^{-iHt/\hbar}\,|\psi_n\rangle=e^{-iE_nt/\hbar}\,|\psi_n\rangle. \]

This is of course nothing but the time-dependent phase factor for the eigenstates we found before—and, as before, to find the time dependence of any general state we must express it as a superposition of these eigenkets, each having its own time dependence. But how do we do that in the operator language? Easy: we simply insert an identity operator, the one constructed from the complete set of eigenkets, thus:
\[ |\psi(t)\rangle=e^{-iHt/\hbar}\sum_n|\psi_n\rangle\langle\psi_n|\psi(0)\rangle=\sum_n e^{-iE_nt/\hbar}\,|\psi_n\rangle\langle\psi_n|\psi(0)\rangle. \]

Staring at this, we see that it’s just what we had before: at the initial time \(t=0\) the wave function can be written as a sum over the eigenkets:
\[ |\psi(0)\rangle=\sum_n c_n|\psi_n\rangle \]

with \(c_n=\langle\psi_n|\psi(0)\rangle\), and the usual generalization for continuum eigenvalues, and the time development is just given by inserting the phases:
\[ |\psi(t)\rangle=\sum_n c_n e^{-iE_nt/\hbar}\,|\psi_n\rangle. \]

The expectation value of the energy E in \(|\psi(t)\rangle\) is
\[ \langle E\rangle=\langle\psi(t)|H|\psi(t)\rangle=\sum_n|c_n|^2E_n, \]

and is (of course) time independent.

The expectation value of the particle position x is
\[ \langle x\rangle=\langle\psi(t)|x|\psi(t)\rangle=\sum_{n,m}c_n^{*}c_m\,e^{i(E_n-E_m)t/\hbar}\,\langle\psi_n|x|\psi_m\rangle \]

 

and is not in general time-independent.  (It is real, of course, on adding the (n,m) term to the (m,n) term.)
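To make the oscillating \(\langle x\rangle\) concrete, here is a small numerical sketch of the “sloshing” superposition mentioned in the introduction: an equal mix of the two lowest infinite-square-well states. The well width L = 1, the units ℏ = m = 1, and the grid resolution are all arbitrary illustrative choices:

```python
import numpy as np

# Infinite square well on [0, L], with hbar = m = 1 and L = 1
L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def psi_n(n):
    # Normalized eigenfunctions sqrt(2/L) sin(n pi x / L)
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):
    # E_n = n^2 pi^2 hbar^2 / (2 m L^2)
    return n**2 * np.pi**2 / (2.0 * L**2)

def psi(t):
    # Equal superposition of the two lowest states, each with its own phase
    return (psi_n(1) * np.exp(-1j * E(1) * t)
            + psi_n(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2.0)

def expect_x(t):
    return np.sum(np.abs(psi(t))**2 * x) * dx

# <x> oscillates at the Bohr frequency (E_2 - E_1): the packet sloshes
t_half = np.pi / (E(2) - E(1))        # half a period of the oscillation
print(expect_x(0.0))      # ≈ 0.5 - 16/(9 pi^2) ≈ 0.32
print(expect_x(t_half))   # ≈ 0.5 + 16/(9 pi^2) ≈ 0.68
```

The analytic values follow from \(\langle\psi_1|x|\psi_2\rangle=-16L/9\pi^2\) for this well; the energy expectation value, by contrast, would come out the same at both times.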

This analysis is only valid for a time-independent Hamiltonian. The important extension to a system in a time-dependent external field, such as an atom in a light beam, will be given later in the course.

The Free Particle Propagator

To gain some insight into what the propagator U looks like, we’ll first analyze the case of a particle in one dimension with no potential at all. 

We’ll also take \(\hbar=1\) to make the equations less cumbersome.

For a free particle in one dimension \(H=p^2/2m\) the energy eigenstates are also momentum eigenstates; we label them \(|k\rangle\), so
\[ U(t)=e^{-iHt}=\int_{-\infty}^{\infty}\frac{dk}{2\pi}\,e^{-ik^2t/2m}\,|k\rangle\langle k|, \]
with \(\langle x|k\rangle=e^{ikx}\).

Let’s consider (following Shankar and others) what seems the simplest example. 

Suppose that at \(t=t_0=0\), a particle is at \(x_0\): what is the probability amplitude for finding it at x at a later time t? (This would be just its wave function at the later time.)
\[ \psi(x,t)=\langle x|U(t)|x_0\rangle=\int_{-\infty}^{\infty}\frac{dk}{2\pi}\,e^{-ik^2t/2m}\,e^{ik(x-x_0)}; \]

using the standard identity for Gaussian integrals,
\[ \int_{-\infty}^{\infty}e^{-au^2+bu}\,du=\sqrt{\frac{\pi}{a}}\,e^{b^2/4a}, \]
this gives
\[ \psi(x,t)=\sqrt{\frac{m}{2\pi it}}\,e^{im(x-x_0)^2/2t}. \]

On examining the above expression, though, it turns out to be nonsense! The term in the exponent is pure imaginary, so \(|\psi(x,t)|^2=m/2\pi t\), independent of x! This particle apparently instantaneously fills all of space, but then its probability dies away as 1/t….

 

Question: Where did we go wrong?

Answer: Notice first that \(|\psi(x,t)|^2\) is constant throughout space. This means that the normalization integral \(\int|\psi(x,t)|^2\,dx=\infty\)! And, as we’ve seen above, the normalization stays constant in time: the propagator is unitary. Therefore, our initial wave function must have had infinite norm. That’s exactly right: we took the initial wave function \(\psi(x,0)=\delta(x-x_0)\).

Think of the \(\delta\)-function as a limit of a function equal to \(1/\Delta\) over an interval of length \(\Delta\), with \(\Delta\) going to zero, and it’s clear the normalization goes to infinity as \(1/\Delta\). This is not a meaningful wave function for a particle. Recall that continuum kets like \(|x_0\rangle\) are normalized by \(\langle x_0'|x_0\rangle=\delta(x_0-x_0')\); they do not represent wave functions individually normalizable in the usual sense. The only meaningful wave functions are integrals over a range of such kets, such as \(\int dx'\,\psi(x')\,|x'\rangle\). In an integral like this, notice that states \(|x'\rangle\) within some tiny x-interval of length \(\Delta x\), say, have total weight \(\psi(x')\Delta x\), which goes to zero as \(\Delta x\) is made smaller, but by writing \(|\psi(0)\rangle=|x_0\rangle\) we took a single such state and gave it a finite weight. This we can’t do.

Of course, we do want to know how a wave function initially localized near a point develops. To find out, we must apply the propagator to a legitimate wave function—one that is normalizable to begin with. The simplest “localized particle” wave function from a practical point of view is a Gaussian wave packet,
\[ \psi(x,0)=\frac{e^{ip_0x}}{(\pi d^2)^{1/4}}\,e^{-x^2/2d^2}. \]

(I’ve used d in place of Shankar’s \(\Delta\) here to try to minimize confusion with \(\Delta x\), etc.)

The wave function at a later time is then given by the operation of the propagator on this initial wave function:
\[ \psi(x,t)=\int_{-\infty}^{\infty}U(x,t;x',0)\,\psi(x',0)\,dx'=\sqrt{\frac{m}{2\pi it}}\int_{-\infty}^{\infty}e^{im(x-x')^2/2t}\,\frac{e^{ip_0x'}}{(\pi d^2)^{1/4}}\,e^{-x'^2/2d^2}\,dx'. \]

Note first that since this is just \(\langle x|U(t)|\psi(0)\rangle\) written explicitly in terms of Schrödinger wave functions,
\[ \psi(x,t)=\int_{-\infty}^{\infty}\langle x|U(t)|x'\rangle\langle x'|\psi(0)\rangle\,dx', \]

it is evident that \(\psi(x,t)\to\psi(x,0)\) as \(t\to0\). This is just equivalent to the operator statement that \(U(0)=1\), the unit operator, as
\[ \lim_{t\to0}U(x,t;x',0)=\delta(x-x'). \]

The integral over x′ is just another Gaussian integral, so we use the same result,
\[ \int_{-\infty}^{\infty}e^{-ax'^2+bx'}\,dx'=\sqrt{\frac{\pi}{a}}\,e^{b^2/4a}. \]

Looking at the expression above, we can see that
\[ a=\frac{1}{2d^2}-\frac{im}{2t},\qquad b=i\left(p_0-\frac{mx}{t}\right). \]
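Since the whole calculation leans on that one Gaussian identity, it is worth checking numerically that it also holds for complex a with positive real part. The values of a and b below are arbitrary stand-ins for the combinations appearing in the text:

```python
import numpy as np

# Check  ∫ exp(-a u^2 + b u) du = sqrt(pi/a) exp(b^2/4a)  for complex a, Re a > 0
a = 0.5 - 0.8j      # plays the role of 1/2d^2 - im/2t
b = 0.3j            # plays the role of i(p0 - mx/t)

u = np.linspace(-20.0, 20.0, 200_001)
du = u[1] - u[0]
numeric = np.sum(np.exp(-a * u**2 + b * u)) * du
exact = np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a))   # principal branch of the root

print(abs(numeric - exact))    # tiny (discretization error only)
```

Note that Re a > 0 is what makes the integral converge; the pure-imaginary a of the δ-function calculation is the borderline case.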

This gives
\[ \psi(x,t)=\sqrt{\frac{m}{2\pi it}}\,\frac{\sqrt{\pi/a}}{(\pi d^2)^{1/4}}\;e^{imx^2/2t}\,\exp\!\left[-\frac{(p_0-mx/t)^2}{4\!\left(\dfrac{1}{2d^2}-\dfrac{im}{2t}\right)}\right], \]
where the second exponential is the term \(e^{b^2/4a}\). As written, the small t limit is not very apparent, but some algebraic rearrangement yields:

\[ \psi(x,t)=\frac{1}{(\pi d^2)^{1/4}}\,\frac{1}{\sqrt{1+it/md^2}}\;\exp\!\left[-\frac{(x-p_0t/m)^2}{2d^2(1+it/md^2)}\right]\,e^{ip_0(x-p_0t/2m)}. \]

Written this way, it is evident that the expression goes to the initial wave packet for t going to zero, as of course it must. 

Although the phase in the above expression for \(\psi(x,t)\) has contributions from all three terms, the main phase oscillation is in the third term, \(e^{ip_0(x-p_0t/2m)}\), and one can see the phase velocity \(p_0/2m\) is one-half the group velocity \(p_0/m\), as discussed earlier.

The resulting probability density:
\[ |\psi(x,t)|^2=\frac{1}{\sqrt{\pi}\,d\sqrt{1+t^2/m^2d^4}}\;\exp\!\left[-\frac{(x-p_0t/m)^2}{d^2(1+t^2/m^2d^4)}\right]. \]

This is a Gaussian wave packet, having a width which goes as \(t/md\) for large times, where d is the width of the initial packet in x-space—so \(\Delta v=\Delta p/m\sim1/md\) is the spread in velocities within the packet, hence the gradual spreading \(\Delta x\sim t/md\) in x-space.
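The spreading can be checked numerically: for a free particle the k-space evolution \(\psi(k,t)=\psi(k,0)\,e^{-ik^2t/2m}\) is exact, so a pair of FFTs gives the packet at any time. A minimal sketch with ℏ = 1 and arbitrary sample values of m, d, p₀; the measured r.m.s. width should match \(\sigma(t)=(d/\sqrt2)\sqrt{1+t^2/m^2d^4}\), which follows from the probability density above:

```python
import numpy as np

# Free Gaussian packet, hbar = 1; evolve exactly in k-space and check the width
m, d, p0 = 1.0, 1.0, 5.0
x = np.linspace(-60.0, 60.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)    # angular wavenumbers

psi0 = (np.pi * d**2) ** (-0.25) * np.exp(1j * p0 * x - x**2 / (2 * d**2))

def width(t):
    # psi(k, t) = psi(k, 0) exp(-i k^2 t / 2m): exact free-particle evolution
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / (2 * m)))
    prob = np.abs(psi_t) ** 2
    prob /= np.sum(prob) * dx                    # renormalize on the grid
    mean = np.sum(x * prob) * dx
    return np.sqrt(np.sum((x - mean) ** 2 * prob) * dx)

for t in (0.0, 1.0, 3.0):
    predicted = (d / np.sqrt(2)) * np.sqrt(1 + (t / (m * d**2)) ** 2)
    print(t, width(t), predicted)
```

The packet centre also drifts at the group velocity p₀/m, which can be read off from the `mean` variable.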

 

It’s amusing to look at the limit of this as the width d of the initial Gaussian packet goes to zero, and see how that relates to our \(\delta\)-function result. Suppose we are at distance x from the origin, and there is initially a Gaussian wave packet centered at the origin, width d << x. At time \(t\sim mxd\), the wave packet has spread to x and \(|\psi|^2\) is of order 1/x at x. Thereafter, it continues to spread at a linear rate in time, so locally \(|\psi|^2\) must decrease as 1/t to conserve probability. In the \(\delta\)-function limit \(d\to0\), the wave function instantly spreads through a huge volume, but then goes as 1/t as it spreads into an even huger volume. Or something.

Schrödinger and Heisenberg Representations

Assuming a Hamiltonian with no explicit time dependence, the time-dependent Schrödinger equation has the form
\[ i\hbar\frac{d}{dt}\,|\psi(t)\rangle=H\,|\psi(t)\rangle, \]

and as discussed above, the formal solution can be expressed as:
\[ |\psi(t)\rangle=e^{-iHt/\hbar}\,|\psi(0)\rangle. \]

Now, any measurement on a system amounts to measuring a matrix element of an operator between two states (or, more generally, a function of such matrix elements). 

In other words, the physically significant time-dependent quantities are of the form
\[ \langle\psi(t)|A|\psi(t)\rangle=\langle\psi(0)|\,e^{iHt/\hbar}\,A\,e^{-iHt/\hbar}\,|\psi(0)\rangle, \]

where A is an operator, which we are assuming has no explicit time dependence.

So in this Schrödinger picture, the time dependence of the measured value of an operator like x or p comes about because we measure the matrix element of an unchanging operator between bras and kets that are changing in time.

Heisenberg took a different approach: he assumed that the ket describing a quantum system did not change in time, it remained at \(|\psi(0)\rangle\), but the operators evolved according to:
\[ A_H(t)=e^{iHt/\hbar}\,A\,e^{-iHt/\hbar}. \]

Clearly, this leads to the same physics as before. The equation of motion of the operator is:
\[ \frac{dA_H(t)}{dt}=\frac{i}{\hbar}\left[H,A_H(t)\right]. \]
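The equivalence of the two pictures can be demonstrated in a few lines, with a random Hermitian H and observable A standing in for a real system (ℏ = 1):

```python
import numpy as np

# <psi(t)| A |psi(t)>  (Schrödinger)  equals  <psi(0)| A_H(t) |psi(0)>  (Heisenberg)
rng = np.random.default_rng(1)

def rand_herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

H, A = rand_herm(5), rand_herm(5)
E, V = np.linalg.eigh(H)
t = 2.4
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T   # U(t) = e^{-iHt}

psi0 = rng.standard_normal(5) + 1j * rng.standard_normal(5)
psi0 /= np.linalg.norm(psi0)

schrodinger = (U @ psi0).conj() @ A @ (U @ psi0)    # evolve the state
A_H = U.conj().T @ A @ U                            # evolve the operator instead
heisenberg = psi0.conj() @ A_H @ psi0

print(np.isclose(schrodinger, heisenberg))          # True
```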

The Hamiltonian itself does not change in time—energy is conserved, or, to put it another way, H commutes with \(e^{-iHt/\hbar}\). But for a nontrivial Hamiltonian, say for a particle in one dimension in a potential,
\[ H=\frac{p^2}{2m}+V(x), \]

the separate components will have time-dependence, parallel to the classical case: the kinetic energy of a swinging pendulum varies with time. (For a particle in a potential in an energy eigenstate the expectation value of the kinetic energy is constant, but this is not the case for any other state, that is, for a superposition of different eigenstates.) Nevertheless, the commutator of x, p will be time-independent:
\[ [x_H(t),p_H(t)]=e^{iHt/\hbar}\,[x,p]\,e^{-iHt/\hbar}=i\hbar. \]

(The Heisenberg operators are identical to the Schrödinger operators at t = 0.) 

Applying the general commutator result \([A,BC]=[A,B]C+B[A,C]\),
\[ [x,p^2]=[x,p]\,p+p\,[x,p]=2i\hbar p, \]

so
\[ \frac{dx_H(t)}{dt}=\frac{i}{\hbar}\left[H,x_H(t)\right]=\frac{p_H(t)}{m}, \]

and since \([p,V(x)]=-i\hbar\,\dfrac{dV(x)}{dx}\),
\[ \frac{dp_H(t)}{dt}=\frac{i}{\hbar}\left[H,p_H(t)\right]=-\left(\frac{dV(x)}{dx}\right)_H. \]

This result could also be derived by writing V(x) as an expansion in powers of x, then taking the commutator with p.

Exercise: check this.
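The commutator identity can also be checked numerically, applying both sides to a smooth test wave function with derivatives taken by central differences. The quartic potential below is an arbitrary choice, and ℏ = 1:

```python
import numpy as np

# Check  [p, V(x)] psi = -i hbar V'(x) psi  on a grid, with hbar = 1
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

V = x**4 - 2 * x**2                  # an arbitrary smooth potential
Vp = 4 * x**3 - 4 * x                # its derivative, dV/dx
psi = np.exp(-x**2)                  # a smooth test wave function

def p_op(f):
    # p = -i d/dx, via second-order central differences
    return -1j * np.gradient(f, dx)

commutator = p_op(V * psi) - V * p_op(psi)   # [p, V] acting on psi
target = -1j * Vp * psi

print(np.max(np.abs(commutator - target)))   # small finite-difference error
```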

Notice from the above equations that the operators in the Heisenberg Representation obey the classical laws of motion! Ehrenfest’s Theorem, that the expectation values of operators in a quantum state follow the classical laws of motion, follows immediately, by taking the expectation value of both sides of the operator equation of motion in a quantum state.

Simple Harmonic Oscillator in the Heisenberg Representation

For the simple harmonic oscillator, \(H=\dfrac{p^2}{2m}+\dfrac{1}{2}m\omega^2x^2\), the equations are easily integrated to give:
\[ x_H(t)=x\cos\omega t+\frac{p}{m\omega}\sin\omega t,\qquad p_H(t)=p\cos\omega t-m\omega x\sin\omega t. \]

We have put in the H subscript to emphasize that these are operators.  It is usually clear from the context that the Heisenberg representation is being used, and the subscript H may be safely omitted.

The time-dependence of the annihilation operator a is:
\[ \frac{da_H(t)}{dt}=\frac{i}{\hbar}\left[H,a_H(t)\right], \]

with
\[ H=\hbar\omega\left(a^{\dagger}a+\tfrac{1}{2}\right),\qquad [a,a^{\dagger}]=1. \]

Note again that although H is itself time-independent, it is necessary to include the time-dependence of individual operators within H:
\[ \left[H,a_H(t)\right]=\hbar\omega\left[a_H^{\dagger}(t)\,a_H(t),\,a_H(t)\right]=-\hbar\omega\,a_H(t), \]

so
\[ \frac{da_H(t)}{dt}=-i\omega\,a_H(t),\qquad a_H(t)=a\,e^{-i\omega t}. \]

Actually, we could have seen this as follows: if \(|n\rangle\) are the energy eigenstates of the simple harmonic oscillator,
\[ H|n\rangle=\left(n+\tfrac{1}{2}\right)\hbar\omega\,|n\rangle. \]

Now the only nonzero matrix elements of the annihilation operator a between energy eigenstates are of the form
\[ \langle n-1|a_H(t)|n\rangle=\langle n-1|\,e^{iHt/\hbar}\,a\,e^{-iHt/\hbar}\,|n\rangle=e^{i(E_{n-1}-E_n)t/\hbar}\,\langle n-1|a|n\rangle=e^{-i\omega t}\,\langle n-1|a|n\rangle. \]

Since this time-dependence is true of all energy matrix elements (trivially so for most of them, since they’re identically zero), and the eigenstates of the Hamiltonian span the space, it is true as an operator equation.

Evidently, the expectation value of the operator a in any state goes clockwise in a circle centered at the origin in the complex plane. That this is indeed the classical motion of the simple harmonic oscillator is confirmed by recalling the definition \(a=\sqrt{\dfrac{m\omega}{2\hbar}}\left(x+\dfrac{ip}{m\omega}\right)\), so the complex plane corresponds to the phase space discussed near the beginning of the lecture on the Simple Harmonic Oscillator. We’ll discuss this in much more detail in the next lecture, on Coherent States.

The time-dependence of the creation operator is just the adjoint equation:
\[ a_H^{\dagger}(t)=a^{\dagger}\,e^{i\omega t}. \]
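All of these Heisenberg-picture results can be verified on a truncated number basis: because H is diagonal there, the phases \(e^{i(E_n-E_m)t/\hbar}\) are exact and the truncation does no harm to the comparison. A sketch with ℏ = m = ω = 1 and an arbitrary basis size:

```python
import numpy as np

# Simple harmonic oscillator on a truncated number basis, hbar = m = omega = 1
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator
x = (a + ad) / np.sqrt(2)                    # x = sqrt(hbar/2mw) (a + a†)
p = 1j * (ad - a) / np.sqrt(2)               # p = i sqrt(mw hbar/2) (a† - a)

E = np.arange(N) + 0.5                       # E_n = (n + 1/2), H diagonal here
t = 0.73
U = np.diag(np.exp(-1j * E * t))             # U(t) = e^{-iHt}

def heis(A):
    # A_H(t) = e^{iHt} A e^{-iHt}
    return U.conj().T @ A @ U

print(np.allclose(heis(x), x * np.cos(t) + p * np.sin(t)))   # True
print(np.allclose(heis(p), p * np.cos(t) - x * np.sin(t)))   # True
print(np.allclose(heis(a), a * np.exp(-1j * t)))             # True
print(np.allclose(heis(ad), ad * np.exp(1j * t)))            # True
```

This confirms, in one computation, the integrated equations of motion for \(x_H, p_H\) and the phase rotation of \(a_H\) and \(a_H^{\dagger}\).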