# 20: Equilibrium Statistical Mechanics


Out of the early Universe we get the light elements, a lot of photons and, as it turns out, a bunch of neutrinos and other relics of our hot past as well. To understand the production of these particles we now turn to the subject of Equilibrium Statistical Mechanics.

##### Phase Space

A collection of particles is conveniently described by how it is distributed in both position and momentum. We usually assume three spatial dimensions, and in this case, the momentum of a particle is a three-dimensional quantity (it takes 3 numbers to specify the momentum of a particle, \(p_x\), \(p_y\) and \(p_z\)). We refer to the space itself as “configuration space” (\(x\), \(y\), \(z\)) and the three-dimensional space associated with momentum as “momentum space.” We can put these spaces together into one six-dimensional object (\(x\), \(y\), \(z\), \(p_x\), \(p_y\), \(p_z\)) we call “phase space.”

##### The phase space distribution function

We conventionally describe the location in phase space of large numbers of particles in a statistical manner, where we just state the average number of particles as a function of location in phase space. More specifically, we define a phase space distribution function, \(f\), such that the number of particles at (\(x\), \(y\), \(z\), \(p_x\), \(p_y\), \(p_z\)) in a phase-space volume of size \(dx\,dy\,dz\,dp_x\,dp_y\,dp_z\) is

\[dN = \frac{f(\vec x, \vec p)}{h^3} dx\,dy\,dz\,dp_x\,dp_y\,dp_z\]

where \(h\) is Planck's constant. This equation serves to define \(f\): It tells us the number of particles in a phase space volume equal to \(h^3\).

##### Types of Equilibria

We are going to introduce some results of statistical mechanics that are quite amazing and useful and that apply in *equilibrium*. So let us first define equilibrium. In fact, we will define two different kinds of equilibrium: kinetic and chemical.

Kinetic equilibrium obtains when reactions that exchange energy between particles (such as collisions) occur rapidly compared to the time-scale under which conditions are changing. For example, the gas particles in this room interact very rapidly. For a given particle, the typical time between collisions is well below a second. Given that conditions in the room are not changing rapidly (that is, the temperature in the room is quite stable), the gas particles in the room will be in kinetic equilibrium.

Chemical equilibrium obtains when reactions that change particle type occur rapidly compared to the time-scale under which conditions are changing. An example of a reaction that changes particle type is electron-positron annihilation:

\[e^- + e^+ \rightarrow 2\gamma\]

where \(\gamma\) is a photon. Another example, of the kind from which “chemical equilibrium” gets its name, is a chemical reaction such as

\[\ce{2H + O <=> H2O}.\]

When these reactions are fast, chemical equilibrium is rapidly achieved. In chemical equilibrium, just as many forward reactions as backward reactions are happening (so the number densities of all the particles are independent of time).

##### Equilibrium forms for \(f\)

The first of two amazing results from statistical mechanics we will use (without proof) is the following. In *kinetic* equilibrium the phase space distribution function always has the following form:

\[f = \left[\exp\left(\frac{E(p)-\mu}{k_BT}\right) \pm 1\right]^{-1}. \label{eqn:f}\]

where the + is for fermions and the - is for bosons, \(T\) is the temperature, \(\mu\) is the chemical potential and \(E\) is the energy of each particle, \(E^2 = p^2c^2 + m^2c^4\).
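The behavior of Equation \ref{eqn:f} is easy to explore numerically. Below is a minimal Python sketch (the function name and the choice of units are mine, not part of the text): it evaluates \(f\) for both statistics and checks that when \(E - \mu \gg k_BT\), both the Fermi-Dirac and Bose-Einstein forms reduce to the classical Boltzmann factor \(e^{-(E-\mu)/(k_BT)}\).

```python
import math

def occupation(E, mu, kT, statistics="boson"):
    """Equilibrium occupation number f for a single-particle state of energy E.

    statistics: "fermion" uses the +1 (Fermi-Dirac), "boson" the -1
    (Bose-Einstein). E, mu, and kT must be in the same energy units.
    """
    sign = +1.0 if statistics == "fermion" else -1.0
    return 1.0 / (math.exp((E - mu) / kT) + sign)

# For (E - mu)/kT = 10, both statistics are already very close to the
# Boltzmann factor exp(-10):
fb = occupation(10.0, 0.0, 1.0, "boson")
ff = occupation(10.0, 0.0, 1.0, "fermion")
```

Note also that the fermion occupation never exceeds 1 (the Pauli exclusion principle at work), while the boson occupation diverges as \(E \to \mu\).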

You already have some intuition for what is meant by temperature. We will soon do some exercises to see how the way that \(T\) affects \(f\) is consistent with your ideas about temperature. You may have less of a feeling for the physical meaning of \(\mu\). We'll do some exercises to address that. In the meantime, I'll tell you that when two systems are allowed to exchange kinetic energy, then after a sufficiently long time, if conditions are not changing, their temperatures become related (in fact, equal). Similarly, if particles are allowed to change *type*, then the chemical potentials of the different types of particles also become related (though not necessarily equal). For example, if this reaction is proceeding rapidly, in both the forward and backward directions:

\[a+b \leftrightarrow c+d\]

then *chemical equilibrium* will obtain and the number densities \(n_a\), \(n_b\), \(n_c\) and \(n_d\) will be independent of time. Further, and this is our second result from statistical mechanics, we will have this relation between their chemical potentials:

\[\mu_a + \mu_b = \mu_c + \mu_d.\]

More generally, for

\[a+b+\dots \leftrightarrow c+d+\dots\]

happening rapidly, we have \(\mu_a + \mu_b + \dots = \mu_c + \mu_d + \dots\)

As a further example, if the reactions

\[e^- + e^+ \leftrightarrow 2\gamma\]

are happening rapidly then \(\mu_{e^{-}} + \mu_{e^{+}} = 2 \mu_\gamma\).

##### How to go from \(f\) to number density, energy density and pressure

The number density, energy density and pressure of a collection of free particles are:

\[\begin{equation}

\begin{aligned}

n({\vec x}) &= g \int \frac{d^3p}{h^3} f({\vec x},{\vec p}) \\ \\ \epsilon({\vec x}) &= g \int \frac{d^3p}{h^3} E(p) f({\vec x},{\vec p}) \\ \\ P({\vec x}) &= g \int \frac{d^3p}{h^3} \frac{p^2c^2}{3E}f({\vec x},{\vec p})

\end{aligned} \label{eqn:n}

\end{equation}\]

where \(f\) is the phase-space distribution function and \(g\) counts the number of internal degrees of freedom for the particles. For example, electrons have two spin states so for them \(g=2\).

Box \(\PageIndex{1}\)

**Exercise 19.1.1:** From the definition of \(f\), derive the above expression for the number density \(n({\vec x})\).

**Exercise 19.1.2:** From the definition of \(f\), derive the above expression for the energy density \(\epsilon({\vec x})\).

**Exercise 19.1.3:** (Optional) From the definition of \(f\), derive the above expression for the pressure \(P({\vec x})\). This one is significantly harder. You need to recall that pressure is force per unit area. The force on a wall from particles hitting it in time interval \(\Delta t\) is equal to the sum of the changes in each particle's momentum as it bounces off the wall, divided by \(\Delta t\). First show that for a wall of area \(A\) perpendicular to the \(x\) axis this force is given by \(F =\frac{1}{2} \int \frac{d^3p}{h^3}\, f A (v_x \Delta t) (2p_x)/\Delta t\) so that \(P = \int \frac{d^3p}{h^3}\, f v_x p_x\). Then use the fact that \(v_x = p_xc^2/E\) (see footnote^{2}) and that \(\int d^3p\, f\, p_x^2 = \frac{1}{3}\int d^3p\, f\, p^2\) as long as \(f\) only depends on \(p^2 = p_x^2+p_y^2 + p_z^2\) to get \(P = \int \frac{d^3p}{h^3}\, f\, p^2c^2/(3E)\). If there are \(g\) internal degrees of freedom, that's that many more particles doing exactly the same thing (\(f\) tells us the distribution for each internal degree of freedom), so we get the desired result including the factor of \(g\).

##### An example: black body (thermal) radiation

From Equation \ref{eqn:f} and Equation \ref{eqn:n} we can derive a lot of results. For example, a gas of photons (\(g=2\)) in kinetic equilibrium with \(\mu = 0\) (how this arises physically to be explained later) has a contribution to its number density from particles with magnitude of momentum between \(p\) and \(p+dp\) equal to

\[n(p)dp = \frac{8\pi }{h^3} \frac{p^2dp}{\exp(pc/(k_B T)) -1}.\]

Note that by \(n(p)\) we mean the function of the magnitude of momentum that when integrated over \(p\) gives number density \(n = \int_0^\infty dp \, n(p)\).

Box \(\PageIndex{2}\)

**Exercise 19.2.1:** Derive the above equation for \(n(p) dp\). Start from the integral above (one of equations \ref{eqn:n}) that gives the number density. Because the energy of each particle (and therefore the whole integrand) only depends on the magnitude of the momentum, \(p = \sqrt{p_x^2+p_y^2+p_z^2}\), switch from Cartesian to spherical coordinates and integrate over the angular variables. That is, replace \(d^3p = dp_xdp_ydp_z\) with \(p^2dp d(\cos\theta_p)d\phi_p\) and integrate over the angular variables \(\theta_p\) and \(\phi_p\). Remember that \(E(p) = pc\) for photons.

**Exercise 19.2.2:** If you sample one photon out of the distribution, there is some probability that it will have momentum magnitude between \(p\) and \(p+dp\). For what value of \(p\) does this probability peak?

**Exercise 19.2.3:** How does that most probable \(p\) depend on temperature? Notice that this is qualitatively consistent with what you expect for temperature.

To perform the integral over \(p\) and obtain an expression for the number density of photons in kinetic equilibrium with zero chemical potential, we make a change of variables \(x = pc/(k_BT)\) to remove all dimensionful constants from the integral and find:

\[\int_0^\infty n(p) dp = \frac{8\pi }{c^3h^3}(k_BT)^3 \int_0^\infty \frac{x^2dx}{\exp(x) -1}.\]

The integral can be looked up in a table or performed numerically. It is equal to \(2\zeta(3) \simeq 2.404\), where \(\zeta\) is the Riemann zeta function.
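One way to "perform the integral numerically" is sketched below in Python (the variable names, the integration cutoffs, and the choice of the present-day CMB temperature \(T \simeq 2.725\,\mathrm{K}\) as an example input are my own assumptions, not from the text). It checks the value \(2\zeta(3)\) two independent ways, then plugs it into the expression above for the photon number density.

```python
import math

# Exact value of the dimensionless integral: expanding 1/(e^x - 1) as a
# geometric series sum_{n>=1} e^{-n x} and using
# int_0^inf x^2 e^{-n x} dx = 2/n^3 gives 2*zeta(3) ~= 2.404.
two_zeta3 = 2.0 * sum(1.0 / n**3 for n in range(1, 200001))

# Independent check: midpoint-rule integration of x^2/(e^x - 1) on [0, 50]
# (the integrand is exponentially small beyond the cutoff).
N = 200000
dx = 50.0 / N
integral = sum(((i + 0.5) * dx) ** 2 * dx / (math.exp((i + 0.5) * dx) - 1.0)
               for i in range(N))

# Photon number density n = (8*pi/(c h)^3) (k_B T)^3 * 2*zeta(3),
# evaluated at T = 2.725 K (assumed input) and converted to cm^-3.
kB, h, c, T = 1.380649e-23, 6.62607015e-34, 2.99792458e8, 2.725
n_photons_cm3 = 8.0 * math.pi * (kB * T / (h * c)) ** 3 * two_zeta3 * 1e-6
```

At that temperature the result comes out to roughly \(400\) photons per cubic centimeter, which gives a sense of the scale of the relic photon background.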

##### What physical conditions lead to \(\mu = 0\)?

If a particle can be freely created or destroyed, without other particles being created or destroyed, and these reactions are sufficiently fast, then the chemical potential will be driven towards zero. We can see this from the rule we already learned.

Assume this reaction, called free-free emission or bremsstrahlung, is fast:

\[e^- + p^+ \rightarrow e^- + p^+ + \gamma\]

in which an electron is accelerated in the electric field of a proton and thus radiates a photon. If the photon can get absorbed in some way, then we also effectively have the reverse reaction as well. In this case we would have \(\mu_{e^-} + \mu_{p^+} = \mu_{e^-} + \mu_{p^+}+\mu_\gamma\) which leads us to \(\mu_\gamma = 0\).

#### Equilibrium Statistical Mechanics Results in Various Limits

All of these results come from doing the appropriate integral over \(f = \left(\exp\left[(E(p)-\mu)/(k_BT)\right] \pm 1\right)^{-1}\). We will refer back to these later.

In the relativistic (\(k_BT \gg mc^2\)) and \(k_BT \gg \mu\) limit for bosons

\[\begin{equation}

\begin{aligned}

\epsilon &= \frac{\pi^2}{30\hbar^3 c^3} g(k_BT)^4 \\ \\ n &= \frac{\zeta(3)}{\pi^2 \hbar^3 c^3} g (k_BT)^3 \\ \\ P &= \epsilon/3.

\end{aligned}

\end{equation}\]

where \(\zeta\) is the Riemann Zeta function and \(\zeta(3) = 1.202...\).
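The numerical coefficients above come from dimensionless integrals that can be checked directly. A short sketch (variable names are my own) for the energy density: after substituting \(x = E/(k_BT)\), the prefactor \(\pi^2/30\) traces back to \(\int_0^\infty x^3/(e^x - 1)\, dx = \pi^4/15\).

```python
import math

# Expanding 1/(e^x - 1) = sum_{n>=1} e^{-n x} and using
# int_0^inf x^3 e^{-n x} dx = 6/n^4, the integral is
# 6 * zeta(4) = 6 * (pi^4/90) = pi^4/15.
boson_integral = 6.0 * sum(1.0 / n**4 for n in range(1, 100001))
expected = math.pi ** 4 / 15.0
```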

In the relativistic and \(k_BT \gg \mu\) limit for fermions we get

\[\begin{equation}

\begin{aligned}

\epsilon &= \frac{7}{8}\frac{\pi^2}{30\hbar^3c^3} g(k_BT)^4 \\ \\ n &= \frac{3}{4}\frac{\zeta(3)}{\pi^2\hbar^3 c^3} g (k_BT)^3 \\ \\ P &= \epsilon/3.

\end{aligned}

\end{equation}\]
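The famous \(7/8\) prefactor can also be verified numerically. In the fermion case the \(-1\) in the denominator becomes \(+1\), and the energy integral \(\int_0^\infty x^3/(e^x + 1)\, dx\) works out to exactly \(7/8\) of the boson value, as this sketch (variable names my own) confirms:

```python
import math

# With 1/(e^x + 1) = sum_{n>=1} (-1)^(n+1) e^{-n x}, the integral is
# 6 * sum_{n>=1} (-1)^(n+1)/n^4 = 6 * (7/8) * zeta(4) = (7/8) * pi^4/15,
# i.e. exactly 7/8 of the boson result.
fermion_integral = 6.0 * sum((-1) ** (n + 1) / n ** 4 for n in range(1, 100001))
ratio = fermion_integral / (math.pi ** 4 / 15.0)
```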

In the non-relativistic and dilute (\(f \ll 1\) for most particles) limits we can neglect the \(\pm 1\) factor in the denominator of \(f\), so we get the same result for both bosons and fermions:

\[\begin{equation}

\begin{aligned}

n &= g\left(\frac{mc^2 k_BT}{2\pi \hbar^2 c^2}\right)^{3/2} \exp\left[-(mc^2-\mu)/(k_BT)\right] \\ \\ \epsilon &= mc^2n \\ \\ P &= nk_BT \ll \epsilon.

\end{aligned}

\end{equation}\]
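In this limit \(E \simeq mc^2 + p^2/(2m)\), so \(f \simeq e^{-(mc^2-\mu)/(k_BT)}\, e^{-p^2/(2mk_BT)}\) and the number-density integral becomes a Gaussian. In the dimensionless variable \(u = p/\sqrt{mk_BT}\) (my choice of variable, matching the text's prescription of removing dimensionful constants) it reads \(\int_0^\infty 4\pi u^2 e^{-u^2/2}\, du = (2\pi)^{3/2}\), which is the origin of the \(3/2\)-power prefactor above. A quick numerical check:

```python
import math

# Midpoint-rule check of int_0^inf 4*pi*u^2 exp(-u^2/2) du = (2*pi)^(3/2);
# the integrand is negligible beyond the upper cutoff u = 20.
N = 100000
du = 20.0 / N
gaussian = sum(4.0 * math.pi * ((i + 0.5) * du) ** 2
               * math.exp(-((i + 0.5) * du) ** 2 / 2.0) * du
               for i in range(N))
expected = (2.0 * math.pi) ** 1.5
```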

#### Homework

19.1: Starting from the appropriate integrals of the phase space distribution function over momentum space, show that for relativistic massless bosons with \(\mu = 0\), \(P = \epsilon/3\). [Note: this is a result you've seen before.]

19.2: The gas in this room consists of non-relativistic particles so to a very good approximation \(E(p) = mc^2 + p^2/(2m)\). Derive this approximation from \(E^2 = p^2c^2 + m^2 c^4\).

19.3: Derive the Maxwell-Boltzmann distribution of velocities for the gas in the room (assuming for simplicity that it is a gas of a single type of particle, which of course it is not); i.e., show that the probability that a particular gas particle has a speed between \(v\) and \(v+dv\) is proportional to \(v^2 \exp\left(-\frac{mv^2}{2k_BT}\right) dv\), where \(m\) is the mass of the gas particle.

Also, for the gas in the room, for the huge majority of particles the exponential term is much greater than one, so you can ignore the \(\pm 1\) in the denominator and approximate \(f \simeq \exp\left[-(E-\mu)/(k_BT)\right]\).

19.4: Find the number density of particles as a function of \(\mu\), \(m\) and \(T\), assuming they are non-relativistic and you can neglect the \(\pm 1\) term in the denominator. It's OK to leave an unevaluated integral in your answer, but simplify it as much as possible, making a change of variables so the integration variable is dimensionless and the integral is just a number; i.e., it has no dependence on \(T\), \(\mu\) or \(m\).