Physics LibreTexts

6.2: Selection Rules and Transition Times

    Dipole Transitions

    In the previous section, we mentioned the "very important concept" that related probability density to charge density. We did this to equate the oscillation frequency of the transitional state to the oscillation frequency of the photon emitted or absorbed. It turns out that the spatial part of the transitional state also plays an important role in restricting transitions.

    Charge distributions radiate whenever they fluctuate. The diagram below shows two simple types of charge oscillations.

    Figure 6.2.1 – Dipole and Quadrupole Oscillations


    While it may not be apparent from the diagram, the dipole oscillation consists of the positive and negative charges swapping positions periodically (i.e. they do not just oscillate between the center point and the extreme). The dipole moment of a two point-charge configuration is a vector \(\overrightarrow p\) whose magnitude equals the absolute value of one of the two equal charges multiplied by the separation of the charges, and whose direction points from the negative charge toward the positive charge. The equivalent quantity for the quadrupole is not a vector but a tensor, and this is as deep as we will go into discussing the details of that quantity. There is also an even simpler (scalar) moment: the total charge. We have left it out because total charge is conserved, so it cannot fluctuate and therefore cannot lead to radiation.

    One can also compute the dipole and quadrupole moments for general distributions of charge. A general charge fluctuation (imagine an amorphous blob of charge pulsating in some random-looking fashion) can be broken down into many component types, in a manner similar to how a function of a single variable can be broken down into a power series in that variable. The lowest-order contributions to the general charge distribution fluctuation come from dipole and quadrupole oscillations. Because of the nature of light (satisfying Maxwell's equations), it turns out that the most efficient mode of power radiation (i.e. the mode responsible for the fastest transfer of energy) is the dipole component of oscillation.

    Of course, in order to have the dipole mode available, the transitional state (the mixed state between the initial and final energy eigenstates) must have a dipole moment. We can compute the dipole moment using the classical definition of dipole moment for a charge distribution, and our idea of relating probability density to charge density. The positive charge in our dipole is the proton at the origin, so from classical electromagnetism, we have:

    \[ \overrightarrow p = \int  \left[-\overrightarrow r\right]\rho\left(\overrightarrow r\right) \;dV \]

    The integral is performed over the volume containing all the charge, and the negative sign comes from the fact that the direction of the dipole moment points toward the positive charge. Replacing the charge density with the probability density (of the transitional state) multiplied by the charge of the electron, and noting that the electron wave function occupies all space, we get:

    \[ \overrightarrow p = -e \int\limits_{all\;space} \overrightarrow r\;\left|\Psi_{i\;f}\right|^2  \;dV \]

    Parity

    The integral for the dipole moment is essentially a sum of an infinite number of vectors – each vector in the sum is the position vector multiplied by a scalar function of the position (the probability density). With the integral acting over all space, there are equal numbers of vectors in this sum pointing in opposite directions. If the probability density at a given position is the same value when the position is reflected across the origin, the sum of all the vectors (i.e. the integral) will come out to zero.

    This mathematical property is the three-dimensional equivalent of the one-dimensional case of integrating a function over an interval across which it is odd. In the three-dimensional case, a function that is the same when reflected about the origin is said to have even parity, and a function that flips its sign when reflected about the origin has odd parity. Of course, a function does not need to have definite parity (even or odd), just as a one-dimensional function need not be even or odd across an interval. A product of several functions of definite parity will yield another function with definite parity. If an odd number of these functions has odd parity, then the product will have odd parity, otherwise the product will have even parity.
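
    The cancellation described above is easy to see numerically in the one-dimensional case. The sketch below (our own illustration, not from the text) integrates an even-parity and an odd-parity function over a symmetric interval; the odd integrand cancels point-by-point about the origin.

```python
import numpy as np

# Illustrative sketch: integrate even- and odd-parity functions over a
# symmetric interval. The odd integrand cancels point-by-point about x = 0.
x = np.linspace(-5.0, 5.0, 100_001)
dx = x[1] - x[0]

even = np.exp(-x**2)        # even parity: f(-x) = f(x)
odd = x * np.exp(-x**2)     # odd parity: (odd) x (even) = odd

print(np.sum(odd) * dx)     # essentially zero
print(np.sum(even) * dx)    # nonzero, approximately sqrt(pi) = 1.7725
```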

    Hydrogen Atom Transitions

    With the main transitions occurring through the dipole mode, it is useful to know when the transitional state for a hydrogen atom has a dipole moment. To determine this, we need to combine Equation 6.1.4 with Equation 6.2.2, and we need to know something about the parity of the energy eigenstate wave functions of the hydrogen atom. Let's tackle the parity question first.

    In spherical coordinates, reflecting a position through the origin involves no change in the value of \(r\), since the reflected position is the same distance from the origin; the reflection is achieved entirely through the angular substitutions \(\phi\rightarrow \phi+\pi\) and \(\theta\rightarrow \pi-\theta\). Close examination of several of the spherical harmonic functions reveals the pattern (and of course this can be proven in general) that \(Y_{lm}\left(\theta,\phi\right)\) has even parity when \(l\) is even, and odd parity when \(l\) is odd. The radial part of the wave function has even parity for all values of \(n\) and \(l\), since it depends only on \(r\), so the wave function for an energy eigenstate has even/odd parity when \(l\) is even/odd.
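
    This parity pattern is easy to spot-check numerically. The sketch below (an illustration of ours, not part of the text) uses `scipy.special.sph_harm`; note SciPy's argument convention, `sph_harm(m, l, azimuthal, polar)`.

```python
import numpy as np
from scipy.special import sph_harm

# Spot-check: under reflection through the origin, phi -> phi + pi and
# theta -> pi - theta, and Y_lm picks up a factor (-1)^l.
theta, phi = 0.7, 1.3   # arbitrary polar and azimuthal angles

for l in range(4):
    for m in range(-l, l + 1):
        y = sph_harm(m, l, phi, theta)
        y_reflected = sph_harm(m, l, phi + np.pi, np.pi - theta)
        assert np.allclose(y_reflected, (-1)**l * y)
print("parity of Y_lm is (-1)^l, verified for l = 0..3")
```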

    Now to look at the transitional state... The first two terms in Equation 6.1.4 are squared magnitudes of energy eigenstate wave functions; since those wave functions have definite parity, their squares have even parity. When multiplied by the position vector to compute the dipole moment, as stated above, the integral comes out to zero. This means that only the last two terms can contribute to a dipole moment. Closer inspection of those terms reveals that they are complex conjugates of each other, and whenever a complex number is added to its complex conjugate, the result is twice its real part:

    \[ z+z^* = \left(a+bi\right) + \left(a-bi\right) = 2a = 2\,\text{Re}\left[z\right] \]

    We can therefore write the integral for the dipole moment in a rather compact manner:

    \[ \overrightarrow p = -e \int\limits_{all\;space} \overrightarrow r\;\left|\Psi_{i\;f}\right|^2  \;dV = -2e\;\text{Re}\left[\left(C^*_i\;C_f\right)\;e^{i\frac{\Delta E}{\hbar}t}\int\limits_{all\;space}\overrightarrow r\psi^*_i\;\psi_f \;dV\right] \]

    Let's take a closer look at this. We know that the initial and final wave functions have definite parity, so their product has definite parity, but nothing so far tells us whether that product has even parity (giving a zero integral) or odd parity (giving a non-zero integral). For a non-zero dipole moment, the product must have odd parity (so that, multiplied by the odd-parity position vector, the integrand is even), which requires that one of the states have odd parity (an odd value of \(l\)) while the other has an even \(l\) value. The transition must therefore be between two states whose angular momentum quantum numbers differ by an odd integer.
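
    This parity rule can be confirmed directly for hydrogen by evaluating the dipole integral symbolically. The sketch below (our own illustration; the helper name is ours) uses SymPy's built-in hydrogen radial functions to compute the \(z\)-component of the integral for a same-parity pair (1s, 2s) and an opposite-parity pair (1s, 2p).

```python
import sympy as sp
from sympy.physics.hydrogen import R_nl

r, theta, phi = sp.symbols('r theta phi', positive=True)

def z_dipole_integral(n1, l1, n2, l2):
    """z-component of the dipole integral between the (n1, l1, m=0) and
    (n2, l2, m=0) states, in units of the Bohr radius. For m = 0 the
    wave functions are real, so no complex conjugation is needed."""
    psi1 = R_nl(n1, l1, r, 1) * sp.Ynm(l1, 0, theta, phi).expand(func=True)
    psi2 = R_nl(n2, l2, r, 1) * sp.Ynm(l2, 0, theta, phi).expand(func=True)
    # z = r*cos(theta); dV = r^2 sin(theta) dr dtheta dphi
    integrand = psi1 * (r * sp.cos(theta)) * psi2 * r**2 * sp.sin(theta)
    return sp.integrate(integrand, (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

print(z_dipole_integral(1, 0, 2, 0))  # 1s, 2s: same parity -> 0
print(z_dipole_integral(1, 0, 2, 1))  # 1s, 2p: opposite parity -> 128*sqrt(2)/243
```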

    Selection Rules

    So it appears that if a hydrogen atom emits a photon, it not only has to transition between two states whose energy difference matches the energy of the photon, but it is restricted in other ways as well, if its mode of radiation is to be dipole. For example, a hydrogen atom in its \(3p\) state must drop to either the \(n=1\) or \(n=2\) energy level, to make the energy available to the photon. The \(n=2\) energy level is 4-fold degenerate, and including the single \(n=1\) state, the atom has five different states to which it can transition. But three of the states in the \(n=2\) energy level have \(l=1\) (the \(2p\) states), so transitioning to these states does not involve a change in the angular momentum quantum number, and the dipole mode is not available.

    So what's the big deal? Why doesn't the hydrogen atom just use a quadrupole or higher-order mode for this transition? It can, but the characteristic time for the dipole mode is so much shorter than that for the higher-order modes, that by the time the atom gets around to transitioning through a higher-order mode, it has usually already done so via dipole. All of this is statistical, of course, meaning that in a large collection of hydrogen atoms, many different modes of transitions will occur, but the vast majority of these will be dipole.

    Examining the details of these restrictions introduces a couple more, which come about from the conservation of angular momentum. Photons carry an intrinsic angular momentum (spin) of magnitude \(\hbar\), which means that whenever a photon (emitted or absorbed) causes a transition in a hydrogen atom, the value of \(l\) must change (up or down) by exactly 1. This in turn restricts the changes that can occur to the magnetic quantum number: \(m_l\) can change by no more than 1 (it can stay the same). We have dubbed these transition restrictions selection rules, which we summarize as:

    \[ \Delta l = \pm 1\;,\;\;\;\;\;\;\;\; \Delta m_l = 0,\;\pm 1\]
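
    These rules are simple enough to encode directly. The helper below (a hypothetical function of ours, not from the text) checks whether a dipole transition between two sets of quantum numbers is allowed.

```python
def dipole_allowed(l_i, ml_i, l_f, ml_f):
    """Dipole selection rules: delta l = +/-1 and delta m_l in {-1, 0, +1}."""
    return abs(l_f - l_i) == 1 and abs(ml_f - ml_i) <= 1

# The 3p example from the text: dropping to 2s or 1s is allowed,
# but dropping to a 2p state is not (delta l = 0).
print(dipole_allowed(1, 0, 0, 0))   # True:  3p -> 2s (or 1s)
print(dipole_allowed(1, 0, 1, 1))   # False: 3p -> 2p
```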

    Transition Times

    After all the discussion about how much faster the hydrogen atom transitions via the dipole mode than through higher-order modes, one might ask just how fast this is. While computing this for the higher-order modes is beyond the scope of this class (actually, the bulk of this work comes from E&M, not quantum mechanics), we can make a good estimate of the transition rate for the dipole mode. To do this, we'll steal a result from E&M (without doing the actual calculation). The power radiated by an electric dipole that is oscillating harmonically with an angular frequency \(\omega\), and whose amplitude of oscillation is the dipole moment \(p\) is given by:

    \[ P\left(p,\;\omega\right) = \dfrac{p^2\omega^4}{12\pi\epsilon_0 c^3} \]

    The angular frequency of the dipole of course matches that of the light emitted, so the energy of a single photon emitted by this dipole oscillation is \(\hbar \omega\). Power is the rate at which energy is emitted, so we can use this information to estimate how many photons are emitted per second:

    \[ P = \dfrac{energy}{time} = \left(\dfrac{energy}{photon}\right)\left(\dfrac{photons}{time}\right)=\left(\hbar \omega\right)\left(\dfrac{N_{\gamma}}{T}\right) \;\;\; \Rightarrow \;\;\; \dfrac{N_{\gamma}}{T} = \dfrac{p^2\omega^3}{12\pi\hbar\epsilon_0 c^3} \]

    The inverse of the rate at which photons are emitted is the average time it takes for a single photon to be emitted. Remembering that the oscillating dipole is a model for the hydrogen atom, this time period is the average time it takes for the hydrogen atom to make the transition between energy levels. To compute a transition time, we start by computing the dipole moment for the transitional state using Equation 6.2.4, and plug in the angular frequency associated with that transition. The only missing ingredient for this calculation is the specific mix of the two eigenstates in the transitional state (i.e. the relative values of \(C_i\) and \(C_f\)).

    If we know nothing about the mix of the initial and final states in the transitional state, then we can imagine doing an experiment over and over; on average, the mix would consist of equal contributions of the two states (if we know something about the perturbation \(H\left(t\right)\) that created the mix, we can make a better estimate). An equal mix would be (choosing real values): \(C_i = C_f = \frac{1}{\sqrt{2}}\), which simplifies Equation 6.2.4 to:

    \[ \overrightarrow p = -e\;\text{Re}\left[e^{i\frac{\Delta E}{\hbar}t}\int\limits_{all\;space}\overrightarrow r\psi^*_i\;\psi_f \;dV\right] \]

    Plugging this into the formula for the transition time gives values in the range of 1 to 10 \(ns\) for hydrogen atom transitions.
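
    As a concrete order-of-magnitude check (our own numerical sketch, not a calculation from the text), the \(2p \rightarrow 1s\) transition time can be estimated with this recipe. The matrix-element value \(128\sqrt{2}/243\;a_0\) for an equal \(1s/2p_0\) mix and the dipole radiation formula \(P = p^2\omega^4/12\pi\epsilon_0 c^3\) are assumptions of this sketch.

```python
import numpy as np

# Estimate of the 2p -> 1s dipole transition time (order-of-magnitude sketch).
e = 1.602e-19      # elementary charge (C)
a0 = 5.292e-11     # Bohr radius (m)
hbar = 1.055e-34   # reduced Planck constant (J s)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
c = 2.998e8        # speed of light (m/s)

# Dipole amplitude for an equal 1s/2p_0 mix: |p| = e * (128*sqrt(2)/243) * a0
p = e * (128 * np.sqrt(2) / 243) * a0

# Photon angular frequency for the 10.2 eV (n=2 -> n=1) transition
omega = 10.2 * e / hbar

# Photons per second, then the transition time as its inverse
rate = p**2 * omega**3 / (12 * np.pi * hbar * eps0 * c**3)
tau = 1 / rate
print(tau)   # a few nanoseconds, consistent with the 1-10 ns range quoted above
```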

    One last note about performing this calculation. A vector is being computed here, which means that there are actually three components to compute. For there to be a dipole moment, it has to have a preferred direction in space (i.e. it makes no sense to say that the dipole moment is "radial"). Given the azimuthal symmetry of all the stationary-state wave functions, it might seem like the dipole direction must be along the \(z\) axis, but in fact this is only true in certain cases. Writing the position vector that appears in the integral in terms of its \(x\), \(y\), and \(z\) components, but in spherical coordinates, we have:

    \[ \overrightarrow r = r\sin\theta\cos\phi \;\widehat i + r\sin\theta\sin\phi \;\widehat j + r\cos\theta \;\widehat k \] 

    If the \(m_l\) quantum number doesn't change during the transition, then when the complex conjugate of one wave function multiplies the other, the azimuthal factors \(e^{-im_l\phi}\) and \(e^{im_l\phi}\) multiply to 1, leaving no \(\phi\) dependence at all in the wave function portion of the integrand. The position vector contributes a \(\cos\phi\) for the \(x\)-component and a \(\sin\phi\) for the \(y\)-component, and integrals of these functions from \(0\rightarrow 2\pi\) are zero. So the dipole moment points along the \(z\) direction when \(m_l\) doesn't change during the transition, but in general it can have \(x\) and \(y\) components. Note that we use the magnitude of the dipole moment to compute the transition time.
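
    The azimuthal integrals behind this argument can be checked symbolically. The sketch below (ours, not from the text) integrates the wave-function factor \(e^{i\,\Delta m\,\phi}\) against the \(\phi\)-dependence of each Cartesian component of the position vector.

```python
import sympy as sp

phi = sp.symbols('phi')

# phi-dependence of the integrand: exp(I*dm*phi) from the wave functions,
# times cos(phi) (x-component), sin(phi) (y-component), or 1 (z-component).
for dm in (0, 1, 2):
    ix = sp.integrate(sp.exp(sp.I * dm * phi) * sp.cos(phi), (phi, 0, 2 * sp.pi))
    iy = sp.integrate(sp.exp(sp.I * dm * phi) * sp.sin(phi), (phi, 0, 2 * sp.pi))
    iz = sp.integrate(sp.exp(sp.I * dm * phi), (phi, 0, 2 * sp.pi))
    print(dm, ix, iy, iz)
# dm = 0: only the z-integral survives (dipole along z)
# dm = 1: the x- and y-integrals survive, the z-integral vanishes
# dm = 2: all three vanish (consistent with delta m_l = 0, +/-1)
```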