# 7.5: Quantum Distributions


## Comparing Distributions

In the previous section, we merely stated what the Boltzmann distribution was for large numbers of particles. In terms of probability, the chance that a particle selected at random is in a state with energy $$E_n$$ was given by Equation 7.4.1:

$P\left(E_n\right) = Ae^{-E_n\;/\;k_BT}$

It's clear from this function that the probability of finding a particle in a low-energy state is higher than finding it in a higher-energy state. Does this mean that particles prefer lower energy states? No! Every available state gets an equal weighting; it's all about allocation of resources. The collection of particles has some fixed total energy, and if one particle gets a lot of it, then many others must get very little. Let's look at a very simple example.

## Distinguishable Particles

Suppose we have four distinguishable particles bound in a 1-dimensional harmonic oscillator potential, which means that the energy levels are evenly-spaced, and we will call this spacing $$\epsilon$$. As we have done before, we will choose the zero point of the potential energy so that the ground state of a particle has an energy of zero. So a particle in the first excited state has an energy of $$1\epsilon$$, in the second excited state an energy of $$2\epsilon$$, and so on. We assume as always that every distinguishable microstate is equally probable. We have chosen a small enough system that it is not difficult to enumerate all of the possible distinguishable microstates for a low total energy, which for the sake of this example we will choose to be $$2\epsilon$$. There are basically two types of microstates – one where three particles are in their ground state, while the fourth is in the $$E=2\epsilon$$ state, and one where two particles are in the ground state and two are in the $$E=1\epsilon$$ state. Of course, with distinguishable particles, there are many ways to form these (see the diagram below).

Figure 7.5.1 – All the Distinguishable Microstates (Boltzmann)

With each of these states equally probable, it is easy to compute the probability that any single particle (say the blue one – it will be the same for any selected particle variety) is in each energy state. Out of the 10 microstates, six of them include the blue particle in the ground state, three have the blue particle in the $$E=1\epsilon$$ state, and one has the blue particle in the $$E=2\epsilon$$ state. So the probability distribution and occupation numbers are:

$\begin{array}{l} P\left(E=0\epsilon\right) = 0.6 \;\;\; \Rightarrow \;\;\; \mathcal N\left(E=0\epsilon\right)=4\cdot 0.6 = 2.4 \\ P\left(E=1\epsilon\right) = 0.3 \;\;\; \Rightarrow \;\;\; \mathcal N\left(E=1\epsilon\right)=4\cdot 0.3 = 1.2 \\ P\left(E=2\epsilon\right) = 0.1 \;\;\; \Rightarrow \;\;\; \mathcal N\left(E=2\epsilon\right)=4\cdot 0.1 = 0.4\end{array}$
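This counting can be checked directly by brute force. The sketch below (in Python, with energies measured in units of $$\epsilon$$) enumerates every assignment of levels to the four distinguishable particles that gives a total energy of $$2\epsilon$$, then tallies the energy of one chosen particle across the equally-probable microstates:

```python
from itertools import product
from fractions import Fraction

N, E_total = 4, 2  # four particles, total energy 2 (in units of epsilon)

# Each distinguishable microstate assigns a level (0, 1, 2, ...) to each particle.
microstates = [s for s in product(range(E_total + 1), repeat=N)
               if sum(s) == E_total]
print(len(microstates))  # 10 microstates, matching Figure 7.5.1

# Probability that a chosen particle (index 0, the "blue" one) has energy n*epsilon,
# and the corresponding occupation number N * P(E)
for n in range(E_total + 1):
    p = Fraction(sum(1 for s in microstates if s[0] == n), len(microstates))
    print(n, p, float(p) * N)  # prints 0 3/5 2.4, then 1 3/10 1.2, then 2 1/10 0.4
```

The counts (6, 3, and 1 out of 10) reproduce the probabilities 0.6, 0.3, and 0.1 quoted above.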

Before we move on, we should say a few words about temperature. It is common to think of temperature as a proxy for internal energy, averaged over the particles, but as we will see, this will become a problem. Instead, it is better to think about its true definition, which is in terms of the system entropy's reaction to a change in energy:

$\dfrac{1}{T} \equiv \dfrac{\partial S}{\partial E}$

In words, this means that the more the entropy reacts to a small change in energy, the lower the temperature is. By "entropy reaction," we think in terms of the shift in available microstates, since entropy is defined in terms of the log of the multiplicity of states:

$S \equiv k_B\; \ln\Omega$

We also define a zero point of temperature to occur when the entropy drops to zero (perfect order – only one microstate). From this definition, it's clear that for our model above, absolute zero will only occur when all of the particles are simultaneously in the ground state (zero total energy), which is consistent with our usual notion of temperature, but we'll see this doesn't hold up for one of the upcoming versions of this model.

## Indistinguishable Bosons

Let's now look at distributions that occur for our model above when the particles are indistinguishable. We'll start with particles that obey Bose-Einstein statistics (bosons). To avoid issues with degeneracy (for now), we'll assume these are spin-0. Bosons don't have to obey an exclusion principle, so like the distinguishable particles, the two different types of states (one particle in $$E=2\epsilon$$ or two particles in $$E=1\epsilon$$) are possible. A difference, however, occurs when we enumerate the number of distinguishable states in order to compute probabilities. Namely, none of the four states in the top row of Figure 7.5.1 are distinguishable from each other, nor are the six states in the bottom row. For bosons, we have a different picture:

Figure 7.5.2 – All the Distinguishable Microstates (Bose-Einstein)

When we select a particle at random from this system, there are only eight possibilities. Five of these are in the ground state, two are in the first excited state, and one is in the second excited state. This results in a different probability distribution than for the case of distinguishable particles:

$\begin{array}{l} P_{BE}\left(E=0\epsilon\right) = 0.625 \;\;\; \Rightarrow \;\;\; \mathcal N_{BE}\left(E=0\epsilon\right)=4\cdot 0.625 = 2.5 \\ P_{BE}\left(E=1\epsilon\right) = 0.250 \;\;\; \Rightarrow \;\;\; \mathcal N_{BE}\left(E=1\epsilon\right)=4\cdot 0.250 = 1.0 \\ P_{BE}\left(E=2\epsilon\right) = 0.125 \;\;\; \Rightarrow \;\;\; \mathcal N_{BE}\left(E=2\epsilon\right)=4\cdot 0.125 = 0.5 \end{array}$
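The same brute-force check works for bosons if we collapse particle relabelings into a single microstate. A minimal sketch (again with energies in units of $$\epsilon$$):

```python
from itertools import product
from fractions import Fraction

N, E_total = 4, 2

# For indistinguishable bosons, assignments that differ only by a particle
# relabeling are the same microstate, so keep one representative per sorted tuple.
boson_states = {tuple(sorted(s)) for s in product(range(E_total + 1), repeat=N)
                if sum(s) == E_total}
print(len(boson_states))  # 2 microstates, matching Figure 7.5.2

# Drawing one of the N particles from one of the (equally likely) microstates
# gives len(boson_states) * N = 8 equally likely "slots".
slots = [n for s in boson_states for n in s]
for n in range(E_total + 1):
    p = Fraction(slots.count(n), len(slots))
    print(n, p, float(p) * N)  # 0 5/8 2.5, then 1 1/4 1.0, then 2 1/8 0.5
```

The two surviving microstates contribute five ground-state slots, two first-excited slots, and one second-excited slot, reproducing the probabilities 0.625, 0.250, and 0.125.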

It should be clear from this that when we add lots of particles to the system, the exponential function we have for the Boltzmann occupation number will not work for bosons. Comparing the two distributions is easiest if we write the exponential (and the constant that multiplies it) in the denominator for the Boltzmann occupation number. [Note: We will use no subscript when referring to distributions of distinguishable particles, but will label the quantum distributions such as Bose-Einstein ("$$BE$$") here and Fermi-Dirac ("$$FD$$") below.]

$\begin{array}{l} \mathcal N\left(E\right) = \dfrac{1}{B\;e^{E/k_BT}} \\ \mathcal N_{BE}\left(E\right) = \dfrac{1}{B\;e^{E/k_BT}-1} \end{array}$

The quantity $$B=B\left(N,T\right)$$ in general depends upon the total number of particles (which is assumed to be large) and the temperature, but not the energy of the state for which the occupation number is desired. It derives from the specifics of the system, which includes not only the particle number and temperature, but also the degeneracy of states and the nature of the energy spectrum. Here we are focused on the energy level dependence of the occupation number, and it's remarkable how similar the two distributions are (but the seemingly small difference is quite important!).
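To see how much that "$$-1$$" matters, we can evaluate both occupation-number formulas numerically. The values of $$B$$ and $$k_BT$$ below are purely illustrative (in a real system $$B$$ would be fixed by the particle number and temperature, as noted above):

```python
import math

def n_boltzmann(E, B, kT):
    # Boltzmann form: 1 / (B e^{E/kT})
    return 1.0 / (B * math.exp(E / kT))

def n_bose_einstein(E, B, kT):
    # Bose-Einstein form: same denominator with "-1"
    return 1.0 / (B * math.exp(E / kT) - 1.0)

B, kT = 1.5, 1.0  # illustrative values only (B > 1 keeps N_BE positive at E = 0)
for E in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"E={E:.1f}  N={n_boltzmann(E, B, kT):.4f}  "
          f"N_BE={n_bose_einstein(E, B, kT):.4f}")
# The -1 matters most at low E/kT, where N_BE exceeds N; the two
# distributions converge as E/kT grows and the exponential dominates.
```

This is the quantitative content of the "seemingly small difference": bosons pile up in low-energy states relative to the Boltzmann prediction.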

As with the case of distinguishable particles, the temperature for this system is not zero, because the system is not fully ordered – that would require all the particles to be in the ground state. We can see that this is consistent with the formulas for the occupation numbers. If we set $$T$$ equal to zero in these equations, the denominator blows up (causing the occupation number to vanish) for every energy level except $$E=0$$.

## Indistinguishable Fermions

Okay, now we turn to fermions. We will assume the four particles in our model are spin-$$\frac{1}{2}$$. This means that the exclusion principle precludes more than two of these particles from occupying the same energy level. One of the particles in an energy level can be spin-up, and the other spin-down, but since no two of these particles can be in the same state, that is the limit. There is exactly one configuration of particles under these conditions that results in a total energy of $$2\epsilon$$, and it is shown in the diagram below.

Figure 7.5.3 – All the Distinguishable Microstates (Fermi-Dirac)

Here things get a little tricky. In the spin-0 boson case, the particles at the same energy level are in the same state. In this case, only one particle is in each state – the energy levels are two-fold degenerate. Picking a particle out at random therefore has a probability of $$\frac{1}{4}$$ of being in each state, and a probability of $$\frac{1}{2}$$ of having energy zero and $$\frac{1}{2}$$ of having energy $$1\epsilon$$. There is a zero probability of finding a particle with energy $$2\epsilon$$ – the particles can't give all the energy to one particle, because that would require that at least two of them reside in the same state with zero energy.

What about occupation number? It might appear that this is simply equal to two for the bottom two energy levels, and zero for all the rest, but this is not correct. The occupation number measures the number of particles in a given state at a specific energy. Here there are two states at each energy level, and each state has exactly one particle occupying it. Fermions can never have an occupation number greater than 1 – this is precisely the exclusion principle! In this case, the occupation numbers are:

$\begin{array}{l} \mathcal N_{FD}\left(E=0\epsilon\right)=1 \\ \mathcal N_{FD}\left(E=1\epsilon\right)=1 \\ \mathcal N_{FD} \left(E=2\epsilon\right) = 0\end{array}$

Clearly the system of these particles has an energy distribution that is different from both of the previous two, and when the number of particles is increased to a large number, the function looks (again only slightly) different from those in Equations 7.5.6:

$\mathcal N_{FD}\left(E\right) = \dfrac{1}{B\;e^{E/k_BT}+1}$

Let's raise the question of temperature one more time. In the previous cases, the zero temperature state required all the particles to be in the ground state, but that is impossible in this case. We see here that in fact this state is perfectly ordered – there is one and only one microstate. So despite the fact that the total energy is not zero, the temperature is in fact zero. The energy of the fermions at the top of this ladder when the temperature is zero is called the Fermi energy, represented by $$E_f$$. While this is defined in this specific way, this quantity is extended into non-zero temperatures as well (though its use is only really practical at "low" temperatures, which generally involves a comparison between $$k_BT$$ and the energy level gaps).

The occupation numbers of Equations 7.5.7 retain the same features for any number of particles, so long as the temperature is zero. This distribution can be expressed graphically as a step function:

Figure 7.5.4 – Fermi-Dirac Occupation Number at T = 0

The point where the step jumps is the Fermi energy. If we wish to reconcile this function with Equation 7.5.8, we need to define the value of the function at the discontinuity. It is standard practice to define this value as the average of the two extremes of the step. This means that although the discrete version of the occupation number at the Fermi energy is one, in order to make a smooth transition to temperatures greater than zero, we define the Fermi energy as the energy at which the occupation number is $$\frac{1}{2}$$ (halfway between the top of the step at 1 and the bottom at 0). This allows us to rewrite Equation 7.5.8 in terms of the Fermi energy:

$\frac{1}{2} = \dfrac{1}{B\;e^{E_f/k_BT}+1} \;\;\; \Rightarrow \;\;\; B=e^{-E_f/k_BT}\;\;\; \Rightarrow \;\;\; \mathcal N_{FD}\left(E\right) = \dfrac{1}{e^{\left(E-E_f\right)/k_BT}+1}$

From this form of the occupation number formula, we can see how the step function comes about at $$T=0$$. When $$E>E_f$$, the $$T=0$$ in the exponent's denominator makes the exponent positive infinity, which makes the denominator of the formula blow up, and the formula value vanishes. When $$E<E_f$$, the $$T=0$$ in the exponent's denominator makes the exponent negative infinity, causing the exponential to vanish, and leaving a value of unity for the formula. Using this formula, we can see what the step function turns into at higher temperatures. No matter what the temperature, the occupation number at the Fermi energy is $$\frac{1}{2}$$, but the spread of the step increases with temperature.
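This limiting behavior is easy to see numerically. The sketch below evaluates the Fermi-Dirac formula just below, at, and just above an (illustrative) Fermi energy for a sequence of shrinking temperatures:

```python
import math

def n_fd(E, E_f, kT):
    # Fermi-Dirac occupation number in terms of the Fermi energy
    return 1.0 / (math.exp((E - E_f) / kT) + 1.0)

E_f = 1.0  # illustrative value (in whatever energy units we like)
for kT in [0.5, 0.1, 0.01]:
    values = [n_fd(E, E_f, kT) for E in (0.5, 1.0, 1.5)]
    print(kT, [f"{v:.4f}" for v in values])
# At E = E_f the value is exactly 1/2 at every temperature; as kT -> 0
# the values below and above E_f approach 1 and 0, sharpening into the step.
```

The middle column stays pinned at 0.5000 while the outer columns march toward 1 and 0, which is exactly the step-function limit described above.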

Figure 7.5.5 – Fermi-Dirac Occupation Number for T > 0

This curve makes sense physically, since raising the temperature will mostly take particles off the top levels and move them into slightly higher ones, increasing a few occupation numbers for states above the fermi energy and lowering a few occupation numbers for states just below the fermi energy.

While all of this started with a specific model of a one-dimensional harmonic oscillator potential, it should be emphasized that the occupation number functions are more general. The values of $$B$$ need to be computed for whatever model one is considering. Also, the Fermi energy provides only a cleaner way of looking at fermions – it doesn't introduce anything new; it just shifts the unknown from the value of $$B$$ to the value of $$E_f$$.