10.1: Scattering Theory
Almost everything we know about nuclei and elementary particles has been discovered in scattering experiments, from Rutherford’s surprise at finding that atoms have their mass and positive charge concentrated in almost point-like nuclei, to the more recent discoveries, on a far smaller length scale, that protons and neutrons are themselves made up of apparently point-like quarks.
The simplest model of a scattering experiment is given by solving Schrödinger’s equation for a plane wave impinging on a localized potential. A potential \(V(r)\) might represent what a fast electron encounters on striking an atom, or an alpha particle a nucleus. Obviously, representing any such system by a potential is only a beginning, but in certain energy ranges it is quite reasonable, and we have to start somewhere!
The basic scenario is to shoot in a stream of particles, all at the same energy, and detect how many are deflected into a battery of detectors which measure angles of deflection. We assume all the ingoing particles are represented by wavepackets of the same shape and size, so we should solve Schrödinger’s time-dependent equation for such a wave packet and find the probability amplitudes for outgoing waves in different directions at some later time after scattering has taken place. But we adopt a simpler approach: we assume the wavepacket has a well-defined energy (and hence momentum), so it is many wavelengths long. This means that during the scattering process it looks a lot like a plane wave, and for a period of time the scattering is time independent. We assume, then, that the problem is well approximated by solving the time-independent Schrödinger equation with an ingoing plane wave. This is much easier!
All we can detect are outgoing waves far outside the region of scattering. For an ingoing plane wave \(e^{i\vec{k}\cdot\vec{r}}\), the wavefunction far away from the scattering region must have the form \[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}+f(\theta,\varphi)\frac{e^{i k r}}{r} \label{10.1.1}\]
where \(\theta,\varphi\) are measured with respect to the ingoing direction.
Note that the scattering amplitude \(f(\theta,\varphi)\) has the dimensions of length.
We don’t worry about overall normalization, because what is relevant is the fraction of the incoming beam scattered in a particular direction, or, to be more precise, into a small solid angle \(d\Omega\) in the direction \(\theta,\varphi\). The ingoing particle current (with the above normalization) is \(\hbar k/m=v\) through unit area perpendicular to the ingoing beam; the outgoing current into the small solid angle \(d\Omega\) is \((\hbar k/m)|f(\theta,\varphi)|^2d\Omega\). It is evident that this outgoing current corresponds to the original ingoing current flowing through a perpendicular area of size \(d\sigma(\theta,\varphi)=|f(\theta,\varphi)|^2d\Omega\), and \[ \frac{d\sigma}{d\Omega}=|f(\theta,\varphi)|^2 \label{10.1.2}\]
is called the differential cross section for scattering in the direction \( \theta,\varphi\).
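To keep the bookkeeping concrete, here is a minimal numerical sketch: it takes a scattering amplitude \(f(\theta)\) whose functional form is invented purely for illustration (it is not derived from any potential discussed here), evaluates \(d\sigma/d\Omega=|f|^2\), and integrates over solid angle to get the total cross section.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical, purely illustrative scattering amplitude f(theta), in units of
# length, taken independent of phi for simplicity.  Not from the text.
def f(theta, a=1.0, k=2.0):
    return a / (1.0 + (2.0 * k * a * np.sin(theta / 2.0)) ** 2)

# Differential cross section dsigma/dOmega = |f|^2, and total cross section
# sigma = integral of |f|^2 over solid angle = 2*pi * int_0^pi |f|^2 sin(theta) dtheta.
dsigma_dOmega = lambda theta: abs(f(theta)) ** 2
sigma_tot, _ = quad(lambda th: 2.0 * np.pi * dsigma_dOmega(th) * np.sin(th), 0.0, np.pi)

print(dsigma_dOmega(0.0))   # forward differential cross section, |f(0)|^2
print(sigma_tot)            # total cross section, in the same length^2 units
```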
The Time-Independent Description
We shall review the time-independent formulation of scattering theory, first as it is presented in Baym, in terms of the standard Schrödinger equation wavefunctions, then do the same thing a la Sakurai, in the more formal, but of course equivalent, language of bras and kets. The Schrödinger wavefunction approach is an easier introduction, but the formal language is more convenient for analyzing the structure of higher order terms.
Actually, Baym’s treatment isn’t quite time-independent, in that he uses an ingoing wavepacket, but it is one of great length, well approximated by a plane wave. Sakurai goes straight to the plane wave, and we do too. This case is very reminiscent of one-dimensional scattering, in which a plane wave from the left generates outgoing waves in both directions, and the amplitudes can be calculated from the Schrödinger equation for a single energy eigenstate. The only difference is that in 3D there will be outgoing waves in all directions.
Following Baym, Schrödinger’s equation is:
\[ \left(\frac{\hbar^2}{2m}\nabla^2+E_k\right)\psi_{\vec{k}}(\vec{r})=V(\vec{r})\psi_{\vec{k}}(\vec{r}),\;\; where\;\; E_k=\frac{\hbar^2k^2}{2m}. \label{10.1.3}\]
This \(\psi_{\vec{k}}\) we take to have an incoming plane wave component \(e^{i\vec{k}\cdot\vec{r}}\). Overall normalization is irrelevant, since the differential cross-section depends only on the ratio of the scattered wave amplitude to that of the ingoing wave.
The standard approach to an equation like the one above is to transform it into an integral equation using Green’s functions. If \(V(\vec{r})\) is small (just how small it has to be will become clear later) the integral equation can then be solved by iteration.
The Green’s function \(G(\vec{r},\vec{k})\) is essentially the inverse of the differential operator, \[ \left(\frac{\hbar^2}{2m}\nabla^2+E_k\right)G(\vec{r},\vec{k})=\delta(\vec{r}). \label{10.1.4}\]
This is not a mathematically unique definition: clearly, we can add to \(G(\vec{r},\vec{k})\) any solution of the homogeneous equation \[ \left(\frac{\hbar^2}{2m}\nabla^2+E_k\right)\varphi (\vec{r},\vec{k})=0, \label{10.1.5}\]
for example, the incoming plane wave.
If we write the integral equation
\[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}+\int d^3r' G(\vec{r}-\vec{r}' )V(\vec{r}' ) \psi_{\vec{k}}(\vec{r}' ) \label{10.1.6}\]
this \(\psi_{\vec{k}}(\vec{r})\) is certainly a solution to the original Schrödinger equation, as is easily checked by applying the operator \[ \left(\frac{\hbar^2}{2m}\nabla^2+E_k\right) \label{10.1.7}\]
to both sides of the equation.
The integral equation can be formally solved by iteration, and for “small” \(V\) the solution will converge. But this won’t really do—remember, we haven’t a unique \(G(\vec{r},\vec{k})\)! We have to fix \(G(\vec{r},\vec{k})\) by connecting better with the scattering problem we’re trying to solve.
We know our solution has a single ingoing plane wave, and outgoing waves in all other directions, generated by the interaction of the plane wave with the potential. But the Schrödinger equation could equally describe ingoing waves in the other directions. In defining the Green’s function and writing the integral equation, we have nowhere specified the distant form of the wavefunction, that is, we have not required that the Green’s function on the right hand side of the integral equation only generate outgoing waves. To see how to do this, we must write the Green’s function itself as a sum over waves, in other words a Fourier transform, and see how to eliminate the unphysical (for the present problem) incoming waves in that sum.
The explicit form of the Green’s function is \[ G(r,k)=\int \frac{d^3k'}{(2\pi)^3}\frac{e^{i\vec{k}' \cdot\vec{r}}}{E_k-\frac{\hbar^2k'^2}{2m}}=-\frac{m}{2\pi^2ir\hbar^2}\int_{-\infty}^{\infty}\frac{k' dk' e^{ik'r}}{k'^2-k^2}. \label{10.1.8}\]
Note that \(G(r,k)\) only depends on \(\vec{k}\) through \(E_k\), and only on \(\vec{r}\) through \(r\), since the integration over \(\vec{k}'\) is over all directions. It is easy to verify that this Green’s function satisfies the differential equation, by applying the differential operator to the first integral above: the result is to cancel the denominator in the integral, leaving just \(\int \frac{d^3k'}{(2\pi)^3}e^{i\vec{k}' \cdot\vec{r}}\), which is the \(\delta\)- function in \(\vec{r}\).
To get the second form of \(G(r,k)\) in the equation above, we first do the angular integration \(d(\cos\theta)\) to get \((e^{ik'r}-e^{-ik'r})/ik' r\), then rearrange the integral over the \(-e^{-ik'r}\) term by switching the sign of \(k' \), so it becomes an integral from \(-\infty\) to 0 instead of 0 to \(\infty\). Then we add the two terms (the \(e^{ik'r}\) and the \(-e^{-ik'r}\) ) together to give an integral from \(-\infty\) to \(\infty\). This integral from \(-\infty\) to \(\infty\) is then done by contour integration—at least, after we’ve figured out what to do about the singularities at \(k' =\pm k\).
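Written out, the angular integration just described is (with \(\mu=\cos\theta'\), \(\theta'\) being the angle between \(\vec{k}'\) and \(\vec{r}\)): \[ \int d\Omega'\, e^{i\vec{k}'\cdot\vec{r}}=2\pi\int_{-1}^{1}d\mu\, e^{ik'r\mu}=\frac{2\pi}{ik'r}\left(e^{ik'r}-e^{-ik'r}\right), \] so that \[ G(r,k)=\frac{2m}{\hbar^2}\frac{1}{(2\pi)^3}\int_0^{\infty}\frac{k'^2dk'}{k^2-k'^2}\cdot\frac{2\pi}{ik'r}\left(e^{ik'r}-e^{-ik'r}\right)=\frac{m}{2\pi^2ir\hbar^2}\int_0^{\infty}\frac{k' dk' \left(e^{ik'r}-e^{-ik'r}\right)}{k^2-k'^2}, \] and switching \(k'\to -k'\) in the \(e^{-ik'r}\) piece turns this into the single integral from \(-\infty\) to \(\infty\) quoted above.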
For the integral to be defined, the contour must be distorted slightly so it bypasses these poles.
It is at this point we feed in our physical knowledge of the situation: that in the scattering process, the second term in \[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}+\int d^3r' G(\vec{r}-\vec{r}' )V(\vec{r}' ) \psi_{\vec{k}}(\vec{r}' ), \label{10.1.9}\]
that is, the Green’s function term, has to be a sum over outgoing waves only. And, we can guarantee this by distorting the contour of integration in the right direction, as follows.
The contour integral has to be evaluated by closing the contour. Since \(r\) is positive \(e^{ik'r}\) goes to zero in the upper half \(k'\) plane, but diverges in the lower half, so we must close the contour in the upper half plane to ensure no contribution from the semicircle at infinity. Therefore, to get the desired outgoing waves, \(e^{ikr}\) but not \(e^{-ikr}\), our contour closed in the upper half plane must encircle the pole at \(k' =+k\) but not the one at \(k' =-k\). ( \(e^{ikr}\) does represent outgoing waves: the suppressed time dependence is \(e^{-iEt/\hbar} =e^{-i\omega t}\), giving \(e^{i(kr-\omega t)}\).) In other words, the relative configuration of the real-axis part of the contour and the two poles has to be:
the pole at \(k' =+k+i\varepsilon\) lies just above the real axis, so that it is enclosed by the contour closed in the upper half plane, while the pole at \(k' =-k-i\varepsilon\) lies just below the real axis and is excluded.
Instead of moving the contour slightly off the real axis to avoid the poles, we’ve moved the poles slightly instead. These movements are infinitesimal, so which gets moved makes no difference to the value of the integral. It is more convenient to move the poles, as shown, because this move can be efficiently included in the integral just by adding an infinitesimal imaginary part to the denominator: \[ G_+(r,k)=\int \frac{d^3k'}{(2\pi)^3}\frac{e^{i\vec{k}' \cdot\vec{r}}}{E_k-\frac{\hbar^2k'^2}{2m}+i\varepsilon}=-\frac{m}{2\pi^2ir\hbar^2}\int_{-\infty}^{\infty}\frac{k' dk' e^{ik'r}}{k'^2-k^2-i\varepsilon}. \label{10.1.10}\]
Notice that we have written \(G_+\) instead of \(G\), because \(G\) can denote any solution of \[ \left(\frac{\hbar^2}{2m}\nabla^2+E_k\right)G(\vec{r},\vec{k})=\delta(\vec{r}) \label{10.1.11}\]
and we are specifying the particular solution having only outgoing waves. In contrast to \(G\), \(G_+\) is well-defined and unique. (There is another perfectly valid solution having only ingoing waves, but it is irrelevant to the scattering problem. The difference between the ingoing and outgoing solutions satisfies the homogeneous equation having zero on the right-hand side.)
Once we move the poles slightly as described above, the pole at \(k' =+k+i\varepsilon\) is in fact the only singularity of the integrand lying inside the contour of integration (closed in the upper half plane), so the value of the integral is just the contribution from this pole, that is, \[ G_+(r,k)=-\frac{m}{2\pi^2ir\hbar^2}(2\pi i)\frac{ke^{ikr}}{2k}=-\frac{m}{2\pi\hbar^2}\frac{e^{ikr}}{r}. \label{10.1.12}\]
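As a quick numerical sanity check (a sketch; the prefactor is dropped and the values of \(k\) and the grid are arbitrary), one can verify that \(e^{ikr}/r\) satisfies the free equation \((\nabla^2+k^2)G=0\) away from the origin, using the radial form \(\nabla^2 G=\frac{1}{r}\frac{d^2}{dr^2}(rG)\) for a spherically symmetric function; the \(\delta\)-function source lives entirely at \(r=0\).

```python
import numpy as np

# Check that G(r) = exp(i k r)/r solves (nabla^2 + k^2) G = 0 for r > 0.
# For spherical symmetry, nabla^2 G = (1/r) d^2(r G)/dr^2, so it suffices to
# check that u(r) = r G(r) = exp(i k r) satisfies u'' + k^2 u = 0.
k, h = 1.3, 1e-3                       # arbitrary test wavenumber, finite-difference step
r = np.linspace(0.5, 5.0, 1001)        # stay away from the origin
u = np.exp(1j * k * r)
u_pp = (np.exp(1j * k * (r + h)) - 2 * u + np.exp(1j * k * (r - h))) / h**2
print(np.max(np.abs(u_pp + k**2 * u)))   # tiny: zero up to discretization error
```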
Therefore the \(i\varepsilon\) prescription (as it’s sometimes called) in \(G_+\) does indeed give us what we want: a solution having only outgoing waves, and the integral equation becomes: \[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}-\frac{m}{2\pi\hbar^2}\int d^3r' \frac{e^{ik | \vec{r}-\vec{r}' |}}{|\vec{r}-\vec{r}' |}V(\vec{r}' ) \psi_{\vec{k}}(\vec{r}' ). \label{10.1.13}\]
This can be written more simply if we assume the potential to be localized, so that we can take \(|\vec{r}|\gg |\vec{r}' |\). In this case, it is a good approximation to take \(|\vec{r}-\vec{r}' |=r\) in the denominator. However, this approximation cannot be made in the exponential, because to leading order \[ k|\vec{r}-\vec{r}' |=kr-k\hat{\vec{r}}\cdot\vec{r}' =kr-\vec{k}_f\cdot\vec{r}' \label{10.1.14}\] (here \(\vec{k}_f=k\hat{\vec{r}}\) is the outgoing wavevector pointing toward the detector), and although the second term is much smaller than the first, it is a phase, which may be of order unity. Such a factor must of course be included so that the contributions to the integral from different regions of the potential are added with the correct relative phases.
Therefore, assuming the detector distance \(r\) is much larger than the range of the potential, we can write \[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}-\frac{m}{2\pi\hbar^2}\frac{e^{i kr}}{r}\int d^3r' e^{-i \vec{k}_f\cdot\vec{r}'} V(\vec{r}' ) \psi_{\vec{k}}(\vec{r}' ). \label{10.1.15}\]
The Born Approximation
From the above equation, the first-order approximation to the scattering is given by replacing \(\psi\) in the integral on the right with the zeroth-order term \(e^{i\vec{k}\cdot\vec{r}}\), \[ \psi^{(Born)}_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}-\frac{m}{2\pi\hbar^2}\frac{e^{i kr}}{r}\int d^3r' e^{i(\vec{k}-\vec{k}_f)\cdot\vec{r}'} V(\vec{r}' ) . \label{10.1.16}\]
This is the Born approximation. In terms of the scattering amplitude \(f(\theta,\varphi)\), which we defined in terms of the asymptotic wave function:
\[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}+f(\theta,\varphi)\frac{e^{ikr}}{r} \label{10.1.17}\]
the Born approximation is:
\[ f_{Born}(\theta,\varphi)=-\frac{m}{2\pi\hbar^2}\int d^3r' e^{i (\vec{k}-\vec{k}_f)\cdot\vec{r}'} V(\vec{r}' ) =-\frac{m}{2\pi\hbar^2}\int d^3r' e^{-i \vec{q}\cdot\vec{r}'} V(\vec{r}' ) \label{10.1.18}\]
where \(\hbar \vec{q}\) is the momentum transfer, \(\hbar \vec{q}=\hbar (\vec{k}_f-\vec{k})\). (Since the incoming and outgoing momenta have equal magnitude, it is easy to check that \(q=2k\sin(\theta/2)\).)
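As an illustration (the screened Coulomb, or Yukawa, form used below is our choice, not a potential singled out in the text), for a spherically symmetric potential the Born integral reduces to a radial one, \(\int d^3r'\, e^{-i\vec{q}\cdot\vec{r}'}V(r')=\frac{4\pi}{q}\int_0^{\infty}r'\sin(qr')V(r')\,dr'\). The sketch below, in natural units \(\hbar=m=1\), checks this numerically against the standard closed form for \(V(r)=V_0e^{-\mu r}/r\) and evaluates \(q=2k\sin(\theta/2)\).

```python
import numpy as np
from scipy.integrate import quad

hbar = m = 1.0                       # natural units for this sketch
V0, mu = -0.5, 1.0                   # illustrative screened-Coulomb (Yukawa) parameters
k, theta = 2.0, 0.7                  # incident wavenumber and scattering angle
q = 2.0 * k * np.sin(theta / 2.0)    # momentum transfer (in units of hbar)

# Radial reduction of the 3D Fourier transform of a spherically symmetric V(r):
# int d^3r e^{-i q.r} V(r) = (4 pi / q) int_0^inf r sin(q r) V(r) dr
V = lambda r: V0 * np.exp(-mu * r) / r
FT_num, _ = quad(lambda r: r * np.sin(q * r) * V(r), 0.0, np.inf)
FT_num *= 4.0 * np.pi / q
FT_exact = 4.0 * np.pi * V0 / (q**2 + mu**2)      # standard closed form

f_born = -m / (2.0 * np.pi * hbar**2) * FT_num    # Born scattering amplitude
print(FT_num, FT_exact)                           # the two should agree
print("dsigma/dOmega =", abs(f_born) ** 2)
```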
The essential physics here is that a particle scattered with momentum change \(\hbar \vec{q}\) is scattered by the \(\vec{q}\) -Fourier component of the potential—one can imagine the potential as built up of Fourier components each of which acts like a diffraction grating. Higher order corrections to the Born approximation correspond to successive scatterings off these gratings—these higher orders are generated by iteration of \[ \psi_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}-\frac{m}{2\pi\hbar^2}\int d^3r' \frac{e^{ik | \vec{r}-\vec{r}' |}}{|\vec{r}-\vec{r}'|}V(\vec{r}' ) \psi_{\vec{k}}(\vec{r}' ). \label{10.1.19}\]
It is important to establish when the Born approximation is a good one: sometimes it isn’t. Actually, we are just doing perturbation theory in disguise, so we need the perturbation to be small, that is to say, replacing \(\psi_{\vec{k}}(\vec{r}' )\) by \(e^{i\vec{k}\cdot\vec{r}'}\) in the integral on the right in the equation above should only make a small difference to the value of \(\psi_{\vec{k}}(\vec{r})\) given by doing the integral. This is of course a rather tricky exercise in self-consistency.
Let us attempt to estimate what difference the replacement of \(\psi_{\vec{k}}(\vec{r}' )\) by \(e^{i\vec{k}\cdot\vec{r}'}\) in the integrand does make for the common case of a spherically symmetric potential \(V(r)\) parameterized by depth \(V_0\) and range \(r_0\). The integral is effectively only over a region of size \(r_0\) around the origin.
First consider low energy scattering, \(kr_0<1\) say, so for estimation purposes we can replace the exponential term by 1 in the region of integration. We also assume that where \(\psi\) appears in the integral on the right-hand side of the equation \(|\psi_{\vec{k}}(\vec{r}' )|\) is also pretty close to 1 (remember the integral is only over a volume within \(r_0\) or so of the origin) and so we just replace it by 1. In other words, we’re assuming that the ingoing plane wave, the \(e^{i\vec{k}\cdot\vec{r}'}\), is not dramatically distorted inside that volume where the potential is significant.
Now, we’ve assumed the wave function near the origin is close to 1, so putting that value in the integrand on the right had better give a value for \(\psi_{\vec{k}}(\vec{r})\) on the left hand side of the equation which is pretty close to 1. The approximations give: \[ \psi_k(0)\approx 1-\frac{m}{2\pi\hbar^2}\int d^3r' \frac{V(r' )}{r'} ,\label{10.1.20}\]
so the Born approximation will be reasonable at low energies ( \(kr_0<1\) ) if the second term on the right hand side is a lot less than unity.
When is this true for a real potential? Taking \(V(r)\) to have depth \(V_0\) and range \(r_0\), the Born approximation is good if: \[ \frac{m}{2\pi\hbar^2}\int_0^{r_0}4\pi r^2\frac{V_0}{r}dr\ll 1,\;\; or\;\; V_0\ll \frac{\hbar^2}{mr^2_0}. \label{10.1.21}\]
Notice that the right hand side of this inequality is of order the kinetic energy of a particle confined to a volume equal to the range of the potential, so the Born approximation is valid at low energies provided the potential is well below the strength necessary for a bound state.
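As a concrete check of this estimate (a sketch with natural units \(\hbar=m=1\) and purely illustrative numbers), the integral in the criterion can be evaluated directly for a square well of depth \(V_0\) and range \(r_0\); it comes out to \(mV_0r^2_0/\hbar^2\), the dimensionless number that has to be small.

```python
import numpy as np
from scipy.integrate import quad

hbar = m = 1.0                  # natural units for this estimate
V0, r0 = 0.2, 1.0               # illustrative square-well depth and range

# The criterion integral (m / 2 pi hbar^2) * int d^3r' V(r')/r' for a square
# well, V(r) = V0 for r < r0 and zero outside:
integral, _ = quad(lambda r: 4.0 * np.pi * r**2 * V0 / r, 0.0, r0)
born_parameter = m / (2.0 * np.pi * hbar**2) * integral

print(born_parameter)                # equals m V0 r0^2 / hbar^2 (= 0.2 here)
print(m * V0 * r0**2 / hbar**2)      # analytic value, for comparison
# The low-energy Born approximation is reasonable when this number is << 1.
```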
In fact, the Born approximation works better at higher energies, because the oscillating phase term in \(-\frac{m}{2\pi\hbar^2}\int d^3r' e^{-i \vec{q}\cdot\vec{r}'} V(\vec{r}' )\) cuts down the value of the integral by a factor of order of magnitude \(1/(kr_0)\). This means the condition becomes \(V_0\ll kr_0\frac{\hbar^2}{mr^2_0}\), always satisfied at high enough energies.
The Lippmann-Schwinger Equation
It proves illuminating, especially in understanding scattering beyond the Born approximation, to recast the Green’s function derivation of the scattering amplitude in the more formal language of bras, kets and operators. The Green’s function was introduced in the previous section as the (non-unique) inverse of the operator \[ E_k-H_0=\left(\frac{\hbar^2}{2m}\nabla^2+E_k\right) .\label{10.1.22}\]
(Parenthetical remark: in numerical computation, the wavefunction might be specified at points on a lattice in space, and a differential operator like this would be represented as a difference operator, that is, as a large but finite matrix operating on a large vector whose elements were the wavefunction values at points on the lattice. The Green’s function would then be the inverse matrix with appropriate boundary conditions specified to ensure uniqueness.)
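To make that parenthetical remark concrete, here is a toy sketch (one dimension, natural units \(\hbar=m=1\), hard-wall boundaries; every choice here is ours and purely illustrative) in which \(H_0\) is a finite-difference matrix and the Green’s function is literally a matrix inverse, made unique by the small \(+i\varepsilon\).

```python
import numpy as np

# Toy 1D lattice version of the remark above (hbar = m = 1, hard walls).
N, dx = 400, 0.05
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2       # discrete d^2/dx^2
H0 = -0.5 * lap                                     # kinetic energy -(1/2) d^2/dx^2

E, eps = 1.0, 1e-3
# The Green's function as the inverse of (E - H0), regularized by +i*eps,
# which is what makes the inverse (and hence the solution) unique.
G_plus = np.linalg.inv((E + 1j * eps) * np.eye(N) - H0)
print(np.allclose(((E + 1j * eps) * np.eye(N) - H0) @ G_plus, np.eye(N)))   # True
```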
Purely formally (and following Sakurai), writing \(H=H_0+V\), with \(H_0\) the kinetic energy operator \(\vec{p}^2/2m\), the ingoing plane wave state is a solution of \[ H_0|\vec{k}\rangle =E_k|\vec{k}\rangle . \label{10.1.23}\]
We want to solve\[ (H_0+V)|\psi\rangle =E_k|\psi\rangle . \label{10.1.24}\]
The transformation from a differential equation to an integral equation in this language is: \[ |\psi\rangle = |\vec{k}\rangle +\frac{1}{E_k-H_0}V|\psi\rangle . \label{10.1.25}\]
This gives the undisturbed incoming wave for \(V=0\), and by operating on both sides of the equation with \(E-H_0\), we find \(|\psi\rangle\) does indeed satisfy the full Schrödinger equation. But of course this transformation from a differential to an integral equation has the same flaw as the earlier treatment: \(H_0\) has a continuum of eigenvalues in the infinite volume limit, so the operator equation becomes ill-defined for those eigenstates with energy arbitrarily close to the incoming energy, and those are precisely the states of physical relevance.
To make explicit that this is indeed the problem we’ve already solved, let us translate it into the earlier language. First take the inner product with the bra \(\langle \vec{r}|\): \[ \langle \vec{r}|\psi\rangle = \langle \vec{r}|\vec{k}\rangle +\langle \vec{r}|\frac{1}{E_k-H_0}V|\psi\rangle . \label{10.1.26}\]
Next, insert a representation of unity as a sum over eigenstates of momentum (and therefore of \(H_0\) ) into the last term:
\[ \begin{matrix} \langle \vec{r}|\psi\rangle = \langle \vec{r}|\vec{k}\rangle +\int \frac{d^3k' }{(2\pi)^3}\langle \vec{r}|\vec{k}' \rangle \langle \vec{k}' |\frac{1}{E_k-H_0}V|\psi\rangle \\ = \langle \vec{r}|\vec{k}\rangle +\int \frac{d^3k' }{(2\pi)^3}\langle \vec{r}|\vec{k}' \rangle \frac{1}{E_k-E_{k'}} \langle \vec{k}' |V|\psi\rangle . \end{matrix} \label{10.1.27}\]
Finally, insert another representation of unity as a sum over eigenstates of position in the last term: \[ \langle \vec{r}|\psi\rangle = \langle \vec{r}|\vec{k}\rangle +\int d^3r' \int \frac{d^3k' }{(2\pi)^3}\langle \vec{r}|\vec{k}' \rangle \frac{1}{E_k-E_{k'}}\langle \vec{k}' |\vec{r}' \rangle \langle \vec{r}' |V|\psi\rangle . \label{10.1.28}\]
Comparing this expression with the integral equation in the earlier discussion, it is evident that they are indeed equivalent, and that therefore the correct \(i\varepsilon\) prescription to give the scattered wave function is \[ |\psi\rangle = |\vec{k}\rangle +\frac{1}{E_k-H_0+i\varepsilon}V|\psi\rangle = |\vec{k}\rangle+G_+ V|\psi\rangle \label{10.1.29}\]
where \[ G_+=\frac{1}{E_k-H_0+i\varepsilon}=\int \frac{d^3k'}{(2\pi)^3}\frac{|\vec{k}' \rangle \langle \vec{k}' |}{E_k-E_{k'} +i\varepsilon} \label{10.1.30}\]
in which form it is evident that \(\langle \vec{r}|G_+|\vec{r}' \rangle\) is the same as \(G_+(\vec{r}-\vec{r}' )\) in the previous work.
This equation for the scattered wave \(|\psi\rangle\) is called the Lippmann-Schwinger equation.
Note: Sakurai defines his Green’s function as
\[ G_+(Sakurai)=\frac{\hbar^2}{2m}\frac{1}{E-H_0+i\varepsilon}. \label{10.1.31}\]
Now that we have a well-defined Green’s function operator \(G_+\), the Lippmann-Schwinger equation can be solved formally: \[ |\psi\rangle = |\vec{k}\rangle +G_+ V|\psi\rangle , \;\; so \;\; |\psi\rangle = \frac{1}{1-G_+ V}|\vec{k}\rangle , \label{10.1.32}\]
with a series solution \[ |\psi\rangle = |\vec{k}\rangle +G_+ V|\vec{k}\rangle +G_+ VG_+ V|\vec{k}\rangle +G_+ VG_+ VG_+ V|\vec{k}\rangle +\dots \label{10.1.33}\]
just a formal version of the solution we found earlier.
The Transition Matrix
Operating on both sides of the above equation with \(V\), \[ V|\psi\rangle = V|\vec{k}\rangle +VG_+ V|\vec{k}\rangle +VG_+ VG_+ V|\vec{k}\rangle +\dots=T|\vec{k}\rangle \label{10.1.34}\]
defining the “transition matrix” \(T\) by \[ T=V+VG_+V+VG_+VG_+V+\dots=V+V\frac{1}{E-H_0+i\varepsilon}V+\dots \label{10.1.35}\]
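The structure of this series is easy to see in a toy matrix setting. The sketch below (our own illustration: small random complex matrices stand in for \(G_+\) and \(V\), scaled so that \(\|G_+V\|<1\) and the series converges) compares partial sums of the Born series for \(T\) with the closed form \(V(1-G_+V)^{-1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Stand-ins for G_+ and V: random complex matrices, rescaled so that
# ||G|| = ||V|| = 1/2, which guarantees ||G V|| <= 1/4 and convergence.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G /= 2.0 * np.linalg.norm(G, 2)
V /= 2.0 * np.linalg.norm(V, 2)

# Closed form T = V (1 - G V)^(-1), equivalent to summing V + V G V + ...
T_exact = V @ np.linalg.inv(np.eye(n) - G @ V)

# Partial sums of the Born series T = V + V G V + V G V G V + ...
T_series, term = np.zeros((n, n), dtype=complex), V.copy()
for order in range(1, 9):
    T_series += term
    term = term @ G @ V
    print(order, np.linalg.norm(T_series - T_exact))   # error shrinks each order
```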
In terms of this transition matrix operator, the scattered wave can be written \[ |\psi\rangle = |\vec{k}\rangle +G_+ T|\vec{k}\rangle . \label{10.1.36}\]
Comparing this with \[ |\psi\rangle = |\vec{k}\rangle +G_+ V|\psi\rangle , \label{10.1.37}\]
and recalling that the Born approximation is given by \[ |\psi\rangle_{Born}= |\vec{k}\rangle +G_+ V|\vec{k}\rangle , \label{10.1.38}\]
we see that \(T\) is a kind of generalized potential, including all the higher order terms, so that just as the Born approximation gave the scattering amplitude in terms of \(V\), \[ f^{Born}(\theta,\varphi)=-\frac{m}{2\pi\hbar^2}\int d^3r' e^{i \vec{k}\cdot\vec{r}' -\vec{k}_f\cdot\vec{r}'} V(\vec{r}' ) \label{10.1.39}\]
the exact result including all higher order terms must have the same structure with \(T\) replacing \(V\). Of course, unlike \(V(\vec{r})\), \(T\) is not a diagonal matrix in \(r\)-space: it depends on two space variables, and its Fourier transform is therefore a function of two momenta, that is, the incoming \(\vec{k}\) and the scattered \(\vec{k}'\). Thus we find: \[ f(\theta,\varphi)=-\frac{m}{2\pi\hbar^2}\int d^3r\int d^3r' e^{i (\vec{k}\cdot\vec{r}-\vec{k}' \cdot\vec{r}' )} T(\vec{r}' ,\vec{r})=-\frac{m}{2\pi\hbar^2}\langle \vec{k}' |T|\vec{k}\rangle . \label{10.1.40}\]
We have replaced the \(\vec{k}_f\) in the Born expression with \(\vec{k}'\). Sakurai has an extra \((2\pi)^3\) in the term on the right, because he uses \(\langle \vec{k}|\vec{k}' \rangle =\delta(\vec{k}-\vec{k'})\), \(\langle \vec{r}|\vec{k}\rangle =\frac{e^{i\vec{k}\cdot\vec{r}}}{(2\pi)^{3/2}}\), we use \(\langle \vec{k}|\vec{k}' \rangle =(2\pi)^3\delta(\vec{k}-\vec{k'})\), \(\langle \vec{r}|\vec{k}\rangle =e^{i\vec{k}\cdot\vec{r}}\).
The Optical Theorem
The Optical Theorem relates the imaginary part of the forward scattering amplitude to the total cross-section, \[ Im\, f(\theta=0)=\frac{k\sigma_{tot}}{4\pi}. \label{10.1.41}\]
The physical content of this initially mysterious theorem will become a lot clearer after we discuss partial waves and some geometric effects. It does tell us that \(f\) cannot be real in all directions, and that in particular \(f\) has a positive imaginary part in the forward direction. We’ve included the proof here for the record, but you can skip it for now. But note that this proof is more general than the simple one given (later) in the section on partial waves, in that we do not here assume the potential to have spherical symmetry.
From the expression for \(f(\theta,\varphi)\) above, we see that we must find the imaginary part of \(\langle \vec{k}|T|\vec{k}\rangle\).
Recall that \[ V|\psi\rangle = T|\vec{k}\rangle , \label{10.1.42}\]
so we need to find \[ Im\,\langle \vec{k}|V|\psi\rangle =Im\left[\left(\langle \psi|-\langle \psi|V\frac{1}{E-H_0-i\varepsilon}\right) V|\psi\rangle \right]. \label{10.1.43}\]
Since \(V\) is hermitian, the only imaginary part of the above matrix element comes from the \(i\varepsilon\), recalling that \[ \frac{1}{E-H_0-i\varepsilon}=\frac{P}{E-H_0}+i\pi\delta(E-H_0). \label{10.1.44}\]
Therefore, \[ Im\,\langle \vec{k}|V|\psi\rangle =-\pi\langle \psi|V\delta(E-H_0)V|\psi\rangle . \label{10.1.45}\]
Again using \[ V|\psi\rangle = T|\vec{k}\rangle \label{10.1.42}\]
we can rewrite the equation \[ Im\,\langle \vec{k}|T|\vec{k}\rangle =Im\,\langle \vec{k}|V|\psi\rangle =-\pi\langle \psi|V\delta(E-H_0)V|\psi\rangle =-\pi\langle \vec{k}|T^{\dagger}\delta(E-H_0)T|\vec{k}\rangle . \label{10.1.46}\]
Inserting a complete set of plane wave states in the final matrix element above gives \[ \begin{matrix} Im\,\langle \vec{k}|T|\vec{k}\rangle =-\pi\langle \vec{k}|T^{\dagger}\delta(E-H_0)T|\vec{k}\rangle \\ =-\pi\int \frac{d^3k'}{(2\pi)^3}\langle \vec{k}|T^{\dagger}|\vec{k}' \rangle \langle \vec{k}' |T|\vec{k}\rangle \delta(E-\frac{\hbar^2k'^2}{2m}) \\ =-\pi\int \frac{d\Omega' }{(2\pi)^3}\frac{mk}{\hbar^2}|\langle \vec{k}' |T|\vec{k}\rangle |^2. \end{matrix} \label{10.1.47}\]
(This is the same formula as Sakurai’s in 7.3: our extra \((2\pi)^3\) in the denominator is only apparent, because our plane wave states differ from his by a factor \((2\pi)^{3/2}\). )
Time-Dependent Formulation of Scattering Theory
In the time-independent formulation presented above, we solved the Lippmann-Schwinger equation to find \[ |\psi\rangle = |\vec{k}\rangle +G_+ V|\vec{k}\rangle +G_+ VG_+ V|\vec{k}\rangle +G_+ VG_+ VG_+ V|\vec{k}\rangle +\dots \label{10.1.48}\]
where \[ G_+(E)=\frac{1}{E-H_0+i\varepsilon}=\int \frac{d^3k'}{(2\pi)^3}\frac{|\vec{k'}\rangle\langle\vec{k'}|}{E-E_{k'} +i\varepsilon} \label{10.1.49}\]
and \(E=E_k\).
(Reminder on our wave function normalization convention: we always have a denominator \(2\pi\) for an integral \(dk\). This means the identity operator as a sum over plane wave projection operators is \(I=\int \frac{d^3k}{(2\pi)^3}|\vec{k}\rangle \langle \vec{k}| \). The normalization is \(\langle \vec{k}|\vec{k}' \rangle =(2\pi)^3\delta(\vec{k}-\vec{k'})\), and \(\langle \vec{r}|\vec{k}\rangle =e^{i\vec{k}\cdot\vec{r}}\). Sakurai uses \(\langle \vec{k}|\vec{k}' \rangle =\delta(\vec{k}-\vec{k'})\), \(\langle \vec{r}|\vec{k}\rangle =\frac{e^{i\vec{k}\cdot\vec{r}}}{(2\pi)^{3/2}}\) and \(I=\int d^3k|\vec{k}\rangle \langle \vec{k}|\) as does Shankar in Chapter 1, for one dimension, page 67, but later, Chapter 21 page 585, Shankar has switched to our notation—so watch out! Our convention is also used by Baym and by Peskin.)
In fact, this function \(G_+\) is the Fourier transform of the propagator we discussed last semester. To see how this comes about, take the matrix element between two position eigenstates and Fourier transform from energy to time: \[ \begin{matrix} G_+(\vec{r},\vec{r'},t)=\frac{1}{2\pi\hbar}\int e^{-iEt/\hbar} dE\langle \vec{r}|G_+|\vec{r'}\rangle \\ =\frac{1}{2\pi\hbar}\int e^{-iEt/\hbar} dE\int \frac{d^3k'}{(2\pi)^3}\frac{\langle \vec{r}|\vec{k}' \rangle \langle \vec{k}' |\vec{r}' \rangle}{E-E_{k'} +i\varepsilon} \\ =\frac{1}{2\pi\hbar}\int e^{-iEt/\hbar} dE\int \frac{d^3k'}{(2\pi)^3}\frac{e^{i\vec{k'}\cdot(\vec{r}-\vec{r'})}}{E-E_{k'} +i\varepsilon}. \end{matrix} \label{10.1.50}\]
The integral over \(E\) is along the real axis, and the contour is closed in the half plane where the integrand goes to zero for large imaginary \(E\), that is, in the lower half plane for \(t>0\) and the upper half plane for \(t<0\). But with the \(i\varepsilon\) term shown, all the singularities of the integrand are in the lower half plane. Hence \(G_+\) is identically zero for \(t<0\).
For \(t>0\), \(G_+\) is just the free particle propagator between the two points (apart from the factor \(-i/\hbar\) ): \[ G_+(\vec{r},\vec{r'},t)=-\frac{i}{\hbar}\int \frac{d^3k'}{(2\pi)^3}e^{i\vec{k'}\cdot(\vec{r}-\vec{r'})-iE_{k'}t/\hbar} . \label{10.1.51}\]
To summarize: terms in the series solution of the Lippmann-Schwinger equation can be interpreted as successive scatterings off the Fourier components of a potential, with plane wave propagation in between, with the sign of the \(i\varepsilon\) term ensuring that there are only outgoing waves from each scattering. In the Fourier transformed version above, the sum is over scattering at all possible points where the potential is nonzero, with \(G_+\) propagation in between, the \(i\varepsilon\) ensuring that the scattering path only moves forward in time.
Last semester, we defined the free-particle propagator as the operator \(U(t)=e^{-iH_0t/\hbar}\). The propagator describes the development of the free-particle wave function forward in time, so we take \(U(t)=0\) for \(t<0\). Then Fourier transforming the propagator from \(t\) to \(E\), and inserting an infinitesimal exponentially decaying factor to define the integral at infinity, we find \[ U(E)=\int_{0}^{\infty} e^{iEt/\hbar} e^{-iH_0t/\hbar}e^{-\varepsilon t/\hbar} dt=\frac{i\hbar}{E-H_0+i\varepsilon}. \label{10.1.52}\]
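For a single energy eigenstate, that is, with the operator \(H_0\) replaced by a number \(E_0\), this integral is easy to check numerically. The sketch below uses illustrative values and a small but finite \(\varepsilon\).

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.0
E, E0, eps = 2.0, 1.0, 0.05      # illustrative energy, H0 eigenvalue, damping

# U(E) = int_0^inf exp(i E t/hbar) exp(-i E0 t/hbar) exp(-eps t/hbar) dt
integrand = lambda t: np.exp(1j * (E - E0) * t / hbar - eps * t / hbar)
re, _ = quad(lambda t: integrand(t).real, 0.0, np.inf, limit=400)
im, _ = quad(lambda t: integrand(t).imag, 0.0, np.inf, limit=400)

print(re + 1j * im)                        # numerical value of U(E)
print(1j * hbar / (E - E0 + 1j * eps))     # i*hbar / (E - E0 + i*eps)
```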
Note that the propagators \(U\) and \(G_+\) differ by a factor of \(i\hbar\), specifically \[ G_+(t)=\frac{-i}{\hbar} \theta(t)e^{-iH_0t/\hbar}. \label{10.1.53}\]
We follow Sakurai’s notation (section 7.11): this is the correctly normalized Green’s function for the time-dependent free-particle Schrödinger equation; it is the solution of \[ \left( i\hbar \frac{\partial}{\partial t}-H_0\right) G_+(t)=\delta(t) \label{10.1.54}\]
which propagates forwards in time. The reason the propagators \(U\) and \(G_+\) differ by a factor of \(i\hbar\) is that the Lippmann-Schwinger equation can be generated as a time sequence using the interaction-representation perturbation theory described earlier in the course, essentially expanding \(e^{-i(H_0+V)t/\hbar}\) as a time-ordered series in \(V\); each factor \(V\) comes with an accompanying \(1/(i\hbar )\), and these factors are taken care of by using \(G_+\) instead of \(U\).
Exercise: in that earlier lecture, we gave the second-order term as: \[ c^{(2)}_f(t)=\left(\frac{1}{i\hbar}\right)^2\sum_n\int_0^t dt' \int_0^{t'} dt'' e^{-i\omega_f(t-t' )}\langle f|V_S(t' )|n\rangle e^{-i\omega_n(t' -t'' )}\langle n|V_S(t'' )|i\rangle e^{-i\omega_i t''} \label{10.1.55}\]
Assume that the potential \(V\) is constant in time. Fourier transform this expression from \(t\) to \(E\), \(E=\hbar \omega\), and establish that it has the structure \(G_+(E)VG_+(E)VG_+(E)\).
The Born Cross-Section from Time-Dependent Theory
We established in the lecture on Time-Dependent Perturbation Theory that to leading order in the perturbation, the transition rate from an initial state \(i\) to a final state \(f\) is given by Fermi’s Golden Rule: \[ R_{i\to f}=\frac{2\pi}{\hbar} |\langle f|V|i\rangle |^2\delta(E_f-E_i). \label{10.1.56}\]
We can use this result to find—in leading order—the rate of scattering from an incoming plane wave into any outgoing plane wave state having the same energy, and hence by adding the rate over the plane wave directions pointing within a given small solid angle \(d\Omega\), rederive the Born approximation.
Conceptually, though, this is a bit tricky. From the above solution of the Schrödinger equation, we know the outgoing wave is a spherical one, so in a particular direction the amplitude decreases. But that doesn’t happen with a plane wave! The clearest way to handle this is to put the system in a big box, a cube of side \(L\), with periodic boundary conditions. This makes it easier to count states and normalize the plane waves properly—of course, in the limit of a large box, the plane waves form a complete set, so any spherical wave can be expressed as a sum over these plane waves.
In this section, then, we use box-normalized plane waves: \[ |\vec{k}\rangle =\frac{1}{L^{3/2}}e^{i\vec{k}\cdot\vec{r}},\;\; \langle \vec{k'}|\vec{k}\rangle =\delta_{\vec{k}\vec{k'}} . \label{10.1.57}\]
So \[ \langle f|V|i\rangle =\frac{1}{L^3}\int d^3re^{-i\vec{k}_f\cdot\vec{r}}V(\vec{r}) e^{i\vec{k}_i\cdot\vec{r}}=\frac{1}{L^3}\int d^3re^{-i\vec{q}\cdot\vec{r}}V(\vec{r}) \label{10.1.58}\]
where the momentum transfer to the particle is \(\hbar \vec{q}=\hbar (\vec{k}_f-\vec{k}_i)\).
It is important to note that we are taking the incoming wave to be just one of the normalized plane wave states satisfying the box periodic boundary conditions, so now the incoming current, being from just one of these plane waves, is \[ j_{in}=|\psi|^2v=\frac{1}{L^3}\frac{p}{m}. \label{10.1.59}\]
The Golden Rule becomes \[ R_{i\to(f\; in\; d\Omega)}=\frac{2\pi}{\hbar} |\langle f|V|i\rangle |^2\delta(E_f-E_i)d\Omega \label{10.1.60}\]
with \(f\) denoting a plane wave going outwards within the solid angle \(d\Omega\).
Now, the \(\delta\)-function simply counts the number of states available at the correct (initial) energy, within the specified solid angle of outgoing directions. The density of states in momentum space (for volume \(L^3\) of real space) is one state in each momentum-space volume \((2\pi\hbar)^3/L^3\), so using \(dE/dp=p/m\), the density of states in energy for outgoing solid angle \(d\Omega\) is \(L^3mpd\Omega/(2\pi\hbar)^3\).
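This counting is easy to check directly (a sketch in units \(\hbar=m=1\); the box size, wavenumber and shell width below are illustrative): enumerate the allowed wavevectors \(\vec{k}=2\pi\vec{n}/L\) lying in a thin energy shell and compare the count with \(4\pi\) times the quoted density of states per unit solid angle, \(L^3mp/(2\pi\hbar)^3\), times the shell width \(dE\).

```python
import numpy as np

hbar = m = 1.0
L = 30.0                        # box side (illustrative value)
k, dk = 5.0, 0.1                # shell in |k|; corresponds to dE = hbar^2 k dk / m

# Allowed wavevectors in the box are k_vec = (2*pi/L) * n, with n a vector of integers.
nmax = int(np.ceil((k + dk) * L / (2 * np.pi))) + 1
n = np.arange(-nmax, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
kmag = (2 * np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)
counted = np.count_nonzero((kmag >= k) & (kmag < k + dk))

# Predicted: [L^3 m p / (2 pi hbar)^3] per unit energy per unit solid angle,
# times the full 4*pi steradians, times the shell width dE.
p, dE = hbar * k, hbar**2 * k * dk / m
predicted = 4 * np.pi * L**3 * m * p / (2 * np.pi * hbar)**3 * dE

print(counted, predicted)       # agree to within a few per cent (finite dk, discreteness)
```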
Putting this all together \[ R_{i\to(f\; in\; d\Omega)}=\frac{2\pi}{\hbar} |\frac{1}{L^3}\int d^3re^{-i\vec{q}\cdot\vec{r}}V(\vec{r})|^2L^3mpd\Omega/(2\pi\hbar)^3. \label{10.1.61}\]
The transition rate, the rate of scattering into \(d\Omega\), is just the incident current multiplied by the infinitesimal scattering cross-section \(d\sigma(\theta,\varphi)\) (that was our definition of \(d\sigma\) ),
\[ j_{in}(\frac{d\sigma(\theta,\varphi)}{d\Omega})d\Omega=R_{i\to(f\; in\; d\Omega)} \label{10.1.62}\]
because our definition of \(R_{i\to(f\; in\; d\Omega)}\) included the appropriately normalized ingoing wave.
So finally \[ \frac{d\sigma(\theta,\varphi)}{d\Omega}=\frac{R_{i\to(f\; in\; d\Omega)}}{j_{in}d\Omega}=\frac{m}{p}\frac{2\pi}{\hbar} |\int d^3re^{-i\vec{q}\cdot\vec{r}}V(\vec{r})|^2mp/(2\pi\hbar)^3=|\frac{m}{2\pi\hbar^2}\int d^3re^{-i\vec{q}\cdot\vec{r}}V(\vec{r})|^2. \label{10.1.63}\]
Footnote: the continuum version.
\[ R_{i\to(f\; in\; d\Omega)}=\frac{2\pi}{\hbar} |\langle f|V|i\rangle |^2\delta(E_f-E_i)d\Omega. \label{10.1.60}\]
In the continuum version, \(\langle \vec{r}|\vec{k}\rangle =e^{i\vec{k}\cdot\vec{r}}\), so the matrix element term is just \(|\int d^3re^{-i\vec{q}\cdot\vec{r}}V(\vec{r})|^2\). The energy \(\delta\)- function is only meaningful inside an integral, in this case over the small volume of outgoing scattering states in the solid angle \(d\Omega\) and energy equal to the ingoing energy. But this integral over \(k'\) - space must include the \(1/(2\pi)^3\) factor, according to our rule, giving an outgoing phase space term \[ \int \frac{d^3k'}{(2\pi)^3}\delta(E_{k'}-E_k)=d\Omega\int \frac{k'^2dk' }{(2\pi)^3} \delta\left(\frac{\hbar^2k'^2}{2m}-\frac{\hbar^2k^2}{2m}\right)=d\Omega\frac{k^2}{(2\pi)^3}\frac{m}{\hbar^2k}=d\Omega\frac{mp}{(2\pi\hbar)^3}. \label{10.1.64}\]
This establishes that our continuum normalization conventions give the same result as that obtained from box normalization.
Electrons Scattering from Atoms
This same approach, using the Golden Rule to derive the leading order scattering rate, is useful in analyzing the scattering of fast electrons by atoms. The problem with slow electrons is that the wave function needs to be antisymmetric with respect to all electrons present. We assume fast electrons have little overlap with the atomic electron wave functions in momentum space, so we don’t have to worry about symmetry.
With this approximation, following Sakurai (page 431) the scattering amplitude matrix element is \[ \int d^3re^{i\vec{q}\cdot\vec{r}}\langle n|\left( -\frac{Ze^2}{r}+\sum_i\frac{e^2}{|\vec{r}-\vec{r}_i|}\right) |0\rangle \label{10.1.65}\]
where the potential term \(V(\vec{r})\) is that from the nucleus, plus the repulsion from the atomic electrons at positions \(\vec{r}_i\). Taking the final atomic state \(n\) allows for the possibility of inelastic scattering.
Since the nuclear term has nothing to do with the atomic state (it involves only the distance \(r\) of the scattered electron from the nucleus), it contributes only for \(n=0\), and is then just Coulomb scattering, with \[ \int d^3r\frac{e^{i\vec{q}\cdot\vec{r}}}{r}=\frac{4\pi}{q^2}. \label{10.1.66}\]
(To do this integral, put in a convergence factor \(e^{-\varepsilon r}\) then let \(\varepsilon\to 0\). )
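A quick numerical version of that remark (a sketch; the values of \(q\) and the convergence factor are ours):

```python
import numpy as np
from scipy.integrate import quad

q, eps = 1.5, 0.05      # momentum transfer and small convergence factor (illustrative)

# int d^3r e^{i q.r} / r, with a factor e^{-eps r} to make it convergent,
# reduces to the radial integral (4 pi / q) int_0^inf sin(q r) e^{-eps r} dr.
radial, _ = quad(lambda r: np.sin(q * r) * np.exp(-eps * r), 0.0, np.inf, limit=200)
print(4.0 * np.pi / q * radial)     # tends to 4*pi/q^2 as eps -> 0
print(4.0 * np.pi / q**2)
```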
The term involving the atomic electrons is another matter: for the \(i^{th}\) electron, integrating over the coordinate of the scattered electron gives a factor \((4\pi/q^2)e^{i\vec{q}\cdot\vec{r}_i}\), but the hard part is finding the value of the matrix element of this operator between the atomic states. Notice that this is just the Fourier transform of the electrostatic potential from the \(i^{th}\) electron’s charge density: the Poisson equation \( \nabla^2V_i(\vec{r})=-4\pi\rho_i(\vec{r})\) transforms to \(V_i(\vec{q})=(4\pi/q^2)\rho_i(\vec{q})\), and \(\rho_i(\vec{r})=e\delta(\vec{r}-\vec{r}_i)\) Fourier transforms to \(e\, e^{i\vec{q}\cdot\vec{r}_i}\).
The Form Factor
For elastic scattering, then, the contribution of the atomic electrons is simply interpreted: their charge density gives rise to a potential by the usual electrostatic equation, and the (fast) electron is scattered by this potential. For inelastic scattering, the Fourier transform of the electron density is evaluated between different atomic states. In both cases, the matrix element is called the form factor \(F_n(\vec{q})\) for the scattering, actually \(ZF_n(\vec{q})=\langle n|\sum_i e^{i\vec{q}\cdot\vec{r}_i}|0\rangle \). The normalizing factor \(Z\) is introduced so that for elastic scattering, \(F_n(\vec{q})\to 1\) as \(q\to0\).
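For a concrete example (our choice, not one worked in the text): for a single electron in a hydrogen-like 1s state, with density \(|\psi_{1s}(r)|^2=e^{-2r/a}/\pi a^3\), the elastic form factor works out to \(F(q)=\left(1+q^2a^2/4\right)^{-2}\). The sketch below checks this numerically and shows \(F(q)\to1\) as \(q\to0\).

```python
import numpy as np
from scipy.integrate import quad

a = 1.0        # "Bohr radius" of the illustrative 1s-like charge distribution

def form_factor(q):
    """F(q) = <0| e^{i q.r} |0> for the density rho(r) = exp(-2r/a) / (pi a^3)."""
    if q == 0.0:
        return 1.0                      # the density is normalized to one electron
    rho = lambda r: np.exp(-2.0 * r / a) / (np.pi * a**3)
    # Spherically symmetric density:  F(q) = (4 pi / q) int_0^inf r sin(q r) rho(r) dr
    val, _ = quad(lambda r: r * np.sin(q * r) * rho(r), 0.0, np.inf)
    return 4.0 * np.pi / q * val

for q in (0.0, 0.5, 2.0, 10.0):
    exact = 1.0 / (1.0 + (q * a / 2.0) ** 2) ** 2
    print(q, form_factor(q), exact)     # numerical and closed-form values agree
```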
So the form factor is a map of the charge density in \(q\)- space. By measuring the scattering rate at different angles, and Fourier analyzing, it is possible to delineate the charge distribution in ordinary space. The same technique works for nuclei, and in fact for particles—the neutron, for example, although electrically neutral, has a nontrivial electrical charge distribution within its volume, revealed by scattering very fast electrons.
More general form factors describe distribution of spin, and also time dependence of distributions of charge or spin in excited systems. These can all be measured with suitably designed scattering experiments.