6.4: Fermi’s Golden Rule
We consider now a system with a Hamiltonian $H_0$, of which we know the eigenvalues and eigenfunctions:
$$ H_0\, u_k(x) = E_k\, u_k(x) = \hbar \omega_k\, u_k(x) $$
Here I just expressed the energy eigenvalues in terms of the frequencies $\omega_k = E_k/\hbar$. Then, a general state will evolve as:
$$ \psi(x,t) = \sum_k c_k(0)\, e^{-i\omega_k t}\, u_k(x) $$
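As a quick numerical illustration (a minimal sketch, not part of the original notes; the frequencies and coefficients are made up), note that in the absence of a perturbation the occupation probabilities $|c_k(0)\, e^{-i\omega_k t}|^2 = |c_k(0)|^2$ never change; the free evolution only rotates the phases:

```python
import numpy as np

omega = np.array([1.0, 2.5, 4.0])      # eigenfrequencies omega_k = E_k/hbar (hypothetical)
c0 = np.array([0.6, 0.8j, 0.0])        # initial expansion coefficients c_k(0)
c0 = c0 / np.linalg.norm(c0)           # normalize the state

for t in (0.0, 1.0, 10.0):
    ct = c0 * np.exp(-1j * omega * t)  # free evolution: each coefficient picks up a phase
    print(t, np.abs(ct)**2)            # occupation probabilities: identical at every t
```

This is exactly why a perturbation is needed to induce transitions between levels.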
If the system is in its equilibrium state, we expect it to be stationary, thus the wavefunction will be one of the eigenfunctions of the Hamiltonian. For example, if we consider an atom or a nucleus, we usually expect to find it in its ground state (the state with the lowest energy). We consider this to be the initial state of the system:
$$ \psi(x,0) = u_i(x) $$
where $i$ stands for initial. Now we assume that a perturbation is applied to the system. For example, we could have a laser illuminating the atom, or a neutron scattering off the nucleus. This perturbation introduces an extra potential $\hat{V}$ in the system's Hamiltonian (a priori $\hat{V}$ can be a function of both position and time, $\hat{V}(x,t)$, but we will consider the simpler case of a time-independent potential $\hat{V}(x)$). Now the Hamiltonian reads:
$$ H = H_0 + \hat{V}(x) $$
What we should do is find the eigenvalues $\{E_h^v\}$ and eigenfunctions $\{v_h(x)\}$ of this new Hamiltonian, express $u_i(x)$ in this new basis, and see how it evolves:
$$ u_i(x) = \sum_h d_h(0)\, v_h(x) \quad \rightarrow \quad \psi'(x,t) = \sum_h d_h(0)\, e^{-i E_h^v t/\hbar}\, v_h(x). $$
Most of the time, however, the new Hamiltonian is too complex, and we cannot calculate its eigenvalues and eigenfunctions. Then we follow another strategy.
Consider the examples above (atom + laser or nucleus + neutron): what we want to calculate is the probability of making a transition from one energy level of the atom/nucleus to another, as induced by the interaction. Since $H_0$ is the original Hamiltonian describing the system, it makes sense to always describe the state in terms of its energy levels (i.e., in terms of its eigenfunctions). Then, we guess a solution for the state of the form:
$$ \psi'(x,t) = \sum_k c_k(t)\, e^{-i\omega_k t}\, u_k(x) $$
This is very similar to the expression for $\psi(x,t)$ above, except that now the coefficients $c_k$ are time-dependent. The time dependence derives from the fact that we added an extra interaction potential to the Hamiltonian.
Let us now insert this guess into the Schrödinger equation, $i\hbar \frac{\partial \psi'}{\partial t} = H_0 \psi' + \hat{V} \psi'$:
$$ i\hbar \sum_k \left[ \dot{c}_k(t)\, e^{-i\omega_k t} u_k(x) - i\omega_k\, c_k(t)\, e^{-i\omega_k t} u_k(x) \right] = \sum_k c_k(t)\, e^{-i\omega_k t} \left( H_0 u_k(x) + \hat{V} u_k(x) \right) $$
(where $\dot{c}$ is the time derivative). Using the eigenvalue equation to simplify the RHS we find:
$$ \sum_k \left[ i\hbar\, \dot{c}_k(t)\, e^{-i\omega_k t} u_k(x) + \hbar\omega_k\, c_k(t)\, e^{-i\omega_k t} u_k(x) \right] = \sum_k \left[ c_k(t)\, e^{-i\omega_k t}\, \hbar\omega_k\, u_k(x) + c_k(t)\, e^{-i\omega_k t}\, \hat{V} u_k(x) \right] $$
$$ \sum_k i\hbar\, \dot{c}_k(t)\, e^{-i\omega_k t} u_k(x) = \sum_k c_k(t)\, e^{-i\omega_k t}\, \hat{V} u_k(x) $$
Now let us take the inner product of each side with $u_h(x)$:
$$ \sum_k i\hbar\, \dot{c}_k(t)\, e^{-i\omega_k t} \int_{-\infty}^{\infty} u_h^*(x)\, u_k(x)\, dx = \sum_k c_k(t)\, e^{-i\omega_k t} \int_{-\infty}^{\infty} u_h^*(x)\, \hat{V} u_k(x)\, dx $$
On the LHS we have $\int_{-\infty}^{\infty} u_h^*(x)\, u_k(x)\, dx = 0$ for $h \neq k$ and $= 1$ for $h = k$ (the eigenfunctions are orthonormal). Then in the sum over $k$ the only term that survives is the one with $k = h$:
$$ \sum_k i\hbar\, \dot{c}_k(t)\, e^{-i\omega_k t} \int_{-\infty}^{\infty} u_h^*(x)\, u_k(x)\, dx = i\hbar\, \dot{c}_h(t)\, e^{-i\omega_h t} $$
On the RHS we do not have any simplification. To shorten the notation, however, we call $V_{hk}$ the integral:
$$ V_{hk} = \int_{-\infty}^{\infty} u_h^*(x)\, \hat{V} u_k(x)\, dx $$
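As a concrete illustration (a minimal sketch, not from the original notes), the matrix elements $V_{hk}$ can be computed numerically once the eigenfunctions are known. Here we assume a particle in an infinite square well of width $L$, with $u_k(x) = \sqrt{2/L}\, \sin(k\pi x/L)$, and a hypothetical linear perturbation $\hat{V}(x) = V_0\, x$:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0    # well width (assumed)
V0 = 0.1   # perturbation strength (hypothetical)

def u(k, x):
    """Infinite-square-well eigenfunction on [0, L]."""
    return np.sqrt(2.0 / L) * np.sin(k * np.pi * x / L)

def V_hk(h, k):
    """Matrix element V_hk = integral of u_h*(x) V(x) u_k(x) for V(x) = V0*x."""
    val, _ = quad(lambda x: u(h, x) * V0 * x * u(k, x), 0.0, L)
    return val

# Coupling between the ground state and the first excited state;
# the analytic value is -16*V0*L/(9*pi^2) ≈ -0.0180
print(V_hk(1, 2))
```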
The equation then simplifies to:
$$ \dot{c}_h(t) = -\frac{i}{\hbar} \sum_k c_k(t)\, e^{i(\omega_h - \omega_k) t}\, V_{hk} $$
This is a differential equation for the coefficients $c_h(t)$. We can express the same relation using an integral equation:
$$ c_h(t) = -\frac{i}{\hbar} \sum_k \int_0^t c_k(t')\, e^{i(\omega_h - \omega_k) t'}\, V_{hk}\, dt' + c_h(0) $$
We now make an important approximation. We said at the beginning that the potential $\hat{V}$ is a perturbation, thus we assume that its effects are small (or that the changes happen slowly). Then we can approximate $c_k(t')$ in the integral with its value at time 0, $c_k(t'=0)$:
$$ c_h(t) = -\frac{i}{\hbar} \sum_k c_k(0) \int_0^t e^{i(\omega_h - \omega_k) t'}\, V_{hk}\, dt' + c_h(0) $$
[Notice: for a better approximation, an iterative procedure can be used that replaces $c_k(t')$ with its first-order solution, then the second-order one, etc.]
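To get a feel for how good this first-order approximation is, one can integrate the exact coupled equations numerically and compare. The sketch below (with hypothetical parameters, in units where $\hbar = 1$; not part of the original notes) does this for a two-level system initially in $u_1$:

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

hbar = 1.0                    # units where hbar = 1 (assumption)
w = np.array([0.0, 1.0])      # level frequencies omega_1, omega_2 (hypothetical)
V = np.array([[0.0, 0.05],
              [0.05, 0.0]])   # weak, constant coupling V_hk (hypothetical)

def rhs(t, c):
    # Exact equations: dc_h/dt = -(i/hbar) * sum_k c_k(t) e^{i(w_h - w_k)t} V_hk
    phase = np.exp(1j * (w[:, None] - w[None, :]) * t)
    return -1j / hbar * (phase * V) @ c

t = np.linspace(0.0, 50.0, 2000)
c0 = np.array([1.0 + 0j, 0.0 + 0j])    # system starts in u_1
sol = solve_ivp(rhs, (t[0], t[-1]), c0, t_eval=t, rtol=1e-9, atol=1e-12)
P_exact = np.abs(sol.y[1])**2          # exact transition probability to u_2

# First-order approximation: freeze c_k(t') -> c_k(0) inside the integral
dw = w[1] - w[0]
c1 = -1j / hbar * V[1, 0] * cumulative_trapezoid(np.exp(1j * dw * t), t, initial=0)
P_first = np.abs(c1)**2

print(np.max(np.abs(P_exact - P_first)))   # small as long as |V| << hbar*dw
```

For weak coupling the two curves are nearly indistinguishable; increasing $V$ makes the first-order result progressively overestimate the oscillation amplitude.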
Now let's go back to the initial scenario, in which we assumed that the system was initially at rest, in a stationary state $\psi(x,0) = u_i(x)$. This means that $c_i(0) = 1$ and $c_k(0) = 0$ for all $k \neq i$. The equation then reduces to:
$$ c_h(t) = -\frac{i}{\hbar} \int_0^t e^{i(\omega_h - \omega_i) t'}\, V_{hi}\, dt' $$
or, by calling $\Delta\omega_h = \omega_h - \omega_i$,
$$ c_h(t) = -\frac{i}{\hbar} V_{hi} \int_0^t e^{i\Delta\omega_h t'}\, dt' = \frac{V_{hi}}{\hbar\, \Delta\omega_h} \left( 1 - e^{i\Delta\omega_h t} \right) $$
What we are really interested in is the probability of making a transition from the initial state $u_i(x)$ to another state $u_h(x)$: $P(i \to h) = |c_h(t)|^2$. This transition is caused by the extra potential $\hat{V}$, but we assume that both the initial and final states are eigenfunctions of the original Hamiltonian $H_0$ (notice, however, that the state at time $t$ is in general a superposition of all the states to which the system can transition).
Using $|1 - e^{i\Delta\omega_h t}|^2 = 2 - 2\cos(\Delta\omega_h t) = 4\sin^2(\Delta\omega_h t/2)$, we obtain
$$ P(i \to h) = \frac{4 |V_{hi}|^2}{\hbar^2 \Delta\omega_h^2} \sin^2\!\left( \frac{\Delta\omega_h t}{2} \right) $$
The function $\sin z / z$ is called a sinc function (see Figure 6.4.1). Consider the factor $\sin(\Delta\omega_h t/2)/(\Delta\omega_h/2)$ appearing in $P(i \to h)$. In the limit $t \to \infty$ (i.e., assuming we are describing the state of the system after the new potential has had a long time to act on it), the sinc function becomes narrower and narrower, until we can approximate it with a delta function. More precisely, since $\int_{-\infty}^{\infty} \frac{\sin^2(\Delta\omega\, t/2)}{(\Delta\omega/2)^2}\, d\Delta\omega = 2\pi t$, we have $\frac{\sin^2(\Delta\omega\, t/2)}{(\Delta\omega/2)^2} \to 2\pi t\, \delta(\Delta\omega)$. The exact limit of the function gives us:
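A quick numerical check (a sketch, not part of the original notes) confirms this limit: the function $\sin^2(\Delta\omega\, t/2)/(\Delta\omega/2)^2$ keeps a total area of $2\pi t$ while its peak, of height $t^2$, becomes ever narrower around $\Delta\omega = 0$, which is exactly the behavior of $2\pi t\, \delta(\Delta\omega)$:

```python
import numpy as np

def kernel(dw, t):
    """sin^2(dw*t/2) / (dw/2)^2, written with np.sinc to avoid the 0/0 at dw = 0."""
    return (t * np.sinc(dw * t / (2 * np.pi)))**2

dw = np.linspace(-40.0, 40.0, 2_000_001)   # fixed frequency window
step = dw[1] - dw[0]
for t in (10.0, 100.0, 1000.0):
    area = kernel(dw, t).sum() * step      # numerical integral over the window
    print(f"t = {t:6.0f}   area/(2*pi*t) = {area / (2 * np.pi * t):.5f}   peak = t^2 = {t**2:.0f}")
```

The printed ratio tends to 1 as $t$ grows, while the weight concentrates in a central lobe of width $\sim 4\pi/t$.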
$$ P(i \to h) = \frac{2\pi}{\hbar^2} |V_{hi}|^2\, t\, \delta(\Delta\omega_h) $$
We can then find the transition rate from $i \to h$ as the probability of transition per unit time, $W_{ih} = \frac{dP(i \to h)}{dt}$:
$$ W_{ih} = \frac{2\pi}{\hbar^2} |V_{hi}|^2\, \delta(\Delta\omega_h) $$
This is the so-called Fermi’s Golden Rule, describing the transition rate between states.
This transition rate describes the transition from $u_i$ to a single level $u_h$ with a given energy $E_h = \hbar\omega_h$. In many cases the final state is an unbound state, which, as we saw, can take on a continuum of possible energies. Then, instead of the point-like delta function, we consider the transition to a set of states with energies in a small interval $E \to E + dE$. The transition rate is then proportional to the number of states that can be found with this energy. The number of states is given by $dn = \rho(E)\, dE$, where $\rho(E)$ is called the density of states (we will see how to calculate this in a later lecture). Then, Fermi's Golden Rule is more generally expressed as:
$$ W_{ih} = \frac{2\pi}{\hbar} |V_{hi}|^2\, \rho(E_h) \Big|_{E_h = E_i} $$
[Note: before making the substitution $\delta(\Delta\omega) \to \rho(E)$ we need to write $\delta(\Delta\omega) = \hbar\, \delta(\hbar\Delta\omega) = \hbar\, \delta(E_h - E_i) \to \hbar\, \rho(E_h)\big|_{E_h = E_i}$. This is why in the final formulation of the Golden Rule we only have a factor $\hbar$ and not its square.]
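Finally, the continuum form of the Golden Rule can be checked numerically (a sketch with hypothetical parameters, in units where $\hbar = 1$; not from the original notes). We couple the initial state with equal strength $V_{hi}$ to a dense, equally spaced quasi-continuum of final levels (frequency spacing $d\omega$, hence density of states $\rho(E) = 1/(\hbar\, d\omega)$), sum the first-order probabilities $P(i \to h)$, and compare the resulting rate with $\frac{2\pi}{\hbar} |V_{hi}|^2 \rho(E)$:

```python
import numpy as np

hbar = 1.0                    # units where hbar = 1 (assumption)
Vhi = 0.001                   # uniform coupling to each final level (hypothetical)
dw = 0.001                    # frequency spacing of the quasi-continuum
dwh = np.arange(-20.0, 20.0, dw)   # Delta-omega_h for each final level
rho_E = 1.0 / (hbar * dw)          # one level per hbar*dw of energy

W_golden = 2 * np.pi / hbar * Vhi**2 * rho_E   # Golden Rule prediction

for t in (5.0, 10.0, 20.0):
    # First-order P(i->h) for every level; np.sinc handles the 0/0 at dwh = 0
    P_each = (Vhi / hbar * t * np.sinc(dwh * t / (2 * np.pi)))**2
    W_numeric = P_each.sum() / t               # total transition probability per unit time
    print(f"t = {t:4.0f}   W_numeric = {W_numeric:.5f}   W_golden = {W_golden:.5f}")
```

The numerical rate is essentially constant in $t$ and matches the Golden Rule value, as long as $t$ is large enough for the sinc peak to be narrow compared to the frequency window, yet small enough that the total probability stays well below 1 (first-order validity) and the discreteness of the levels is not resolved ($t \ll 2\pi/d\omega$).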