
7.2: Maxwell-Boltzmann Statistics


    Now we can see how all this applies to the particles in a gas. The analog of heads or tails would be the momenta and other numbers which characterize the particle properties. Thus, we can consider \(N\) particles distributed into different cells, each of the cells standing for a collection of observables or quantum numbers which can characterize the particle. The number of ways in which \(N\) particles can be distributed into these cells, say \(K\) of them, would be given by Equation 7.1.8. The question is then about a priori probabilities. The basic assumption which is made is that there is nothing to prefer one set of values of observables over another, so we assume equal a priori probabilities. This is a key assumption of statistical mechanics. So we want to find the distribution of particles into different possible values of momenta or other observables by maximizing the probability

    \[p(\{n_i\}) \;=\; C\,N!\prod_{i=1}^{K} \frac{1}{n_i!}\]
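    For a small example this counting can be checked directly. The following sketch (in Python, with arbitrary choices \(N = 4\) and \(K = 3\)) enumerates all \(K^N\) equally likely assignments of particles to cells and confirms that the number of assignments with occupancies \(\{n_i\}\) is \(W(\{n_i\}) = N!\prod_i 1/n_i!\), so that \(p(\{n_i\}) = W(\{n_i\})/K^N\), consistent with \(C^{-1} = \sum_{\{n_i\}} W = K^N\) in this case.

```python
from math import factorial
from itertools import product
from collections import Counter

# W({n_i}) = N! / (n_1! n_2! ... n_K!): the number of ways of distributing
# N particles into K cells with the given occupancies.
def W(occupancies):
    w = factorial(sum(occupancies))
    for n in occupancies:
        w //= factorial(n)
    return w

# Brute-force check for small, arbitrary values N = 4, K = 3.
N, K = 4, 3
counts = Counter()
for assignment in product(range(K), repeat=N):      # all K**N equally likely assignments
    occ = tuple(assignment.count(c) for c in range(K))
    counts[occ] += 1

for occ, count in sorted(counts.items()):
    assert count == W(occ)                          # multinomial coefficient matches the count
    print(occ, count, count / K**N)                 # occupancies, W({n_i}), p({n_i})
```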

    Here \(C\) is a normalization constant, fixed by \(C^{-1} = \sum_{\{n_i\}} W\), the analog of \(2^N\) in Equation 7.1.2. The maximization has to be done, obviously, keeping in mind that \(\sum_i n_i = N\), since we have a total of \(N\) particles. But this is not the only condition. For example, energy is a conserved quantity: if we have a system with a certain energy \(U\), then no matter how we distribute the particles among the different choices of momenta and so on, the energy should remain the same. Thus the maximization of the probability should be done subject to this condition. Any other conserved quantity should also be preserved. Our condition for the equilibrium distribution should therefore read

    \[ \delta_{n_i}\, p(\{n_i\}) \;=\;0,\qquad \text{subject to}\;\; \sum_i n_i O^{(\alpha)}_i\;=\;\text{fixed} \]

    where \(O^{(\alpha)}\) (for various values of \(\alpha\)) give the conserved quantities, the total particle number and energy being two such observables.

    The maximization of probability seems very much like what is given by the second law of thermodynamics, wherein equilibrium is characterized by maximization of entropy. In fact this suggests that we can define entropy in terms of probability or \(W\), so that the condition of maximization of probability is the same as the condition of maximization of entropy. This identification was made by Boltzmann who defined entropy corresponding to a distribution \(\{n_i\}\) of particles among various values of momenta and other observables by

    \[S \;=\; k \log W(\{n_i\})\]

    where \(k\) is the Boltzmann constant. For two completely independent systems \(A\), \(B\), we need \(S = S_A +S_B\), while \(W = W_A W_B\). Thus the relation should be in terms of \(\log W\). This equation is one of the most important formulae in physics. It is true even for quantum statistics, where the counting of the number of ways of distributing particles is different from what is given by Equation 7.1.8. We will calculate entropy using this and show that it agrees with the thermodynamic properties expected of entropy. We can restate Boltzmann’s hypothesis as

    \[p(\{n_i\}) = C \;W(\{n_i\}) = C e^{\frac{S}{k}}\]

    With this identification, we can write

    \[\begin{align} S &= k \log \left[ N!\prod_i \frac{1}{n_i!} \right] \\[4pt] &\approx \; k \left[ N \log N - N - \sum_i (n_i \log n_i -n_i) \right] \end{align}\]
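    The second line uses Stirling's approximation \(\log N! \approx N \log N - N\), which is what makes the maximization tractable. Its accuracy for large \(N\) is easy to check; a minimal sketch in Python:

```python
import math

# Stirling's approximation: log N! ~ N log N - N.  The relative error
# shrinks as N grows, so it is harmless for macroscopic particle numbers.
for N in (10, 100, 10_000, 10**6):
    exact = math.lgamma(N + 1)              # log(N!) without computing N! itself
    approx = N * math.log(N) - N
    print(N, exact, approx, (exact - approx) / exact)
```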

    We will consider a simple case where the single-particle energy values are \( \epsilon_i\) (where \(i\) may be interpreted as a momentum label) and we have only two conserved quantities to keep fixed, the particle number and the energy. To carry out the variation subject to the conditions we want to impose, we can use Lagrange multipliers. We add the terms \(\lambda \left( \sum_i n_i - N \right) - \beta \left( \sum_i n_i \epsilon_i - U \right)\) to \(S/k\), and vary the parameters (or multipliers) \(\beta\), \(\lambda\) to get the two conditions

    \[ \sum_i n_i=N,\;\;\;\; \sum_i n_i \epsilon_i =U \label{7.2.6}\]

    Since these are anyway obtained as variational conditions, we can vary the \(n_i\) freely, without worrying about the constraints, when we try to maximize the entropy. Usually one uses \(\mu\) instead of \(\lambda\), where \(\lambda = \beta\mu\), so we will write the Lagrange multiplier this way. The equilibrium condition now becomes

    \[ \delta \left[ \frac{S}{k}- \beta \left(\sum_i n_i \epsilon_i -U \right)+\lambda \left(\sum_i n_i -N \right) \right] =0 \]

    This simplifies to

    \[ \sum_i \delta n_i (\log n_i + \beta \epsilon_i -\beta \mu) =0\]

    Since \(n_i\) are not constrained, this means that the quantity in brackets should vanish, giving the solution

    \[ n_i\;=\;e^{- \beta (\epsilon_i - \mu)} \label{7.2.9} \]

    This is the value at which the probability and the entropy are maximized. It is known as the Maxwell-Boltzmann distribution. As in the case of the binomial distribution, the variation around this value is very small for large values of \(n_i\), so that observable values can be obtained by using just the solution in Equation \ref{7.2.9}. We still have the conditions from Equation \ref{7.2.6}, obtained as maximization conditions (for variation of \(\beta\), \(\lambda\)), which means that

    \[ \begin{align} \sum_i e^{- \beta (\epsilon_i - \mu)}\;&=\;N \\ \sum_i \epsilon_i\, e^{- \beta (\epsilon_i - \mu)} &= U \label{7.2.10} \end{align}\]

    The first of these conditions will determine \(\mu\) in terms of \(N\) and the second will determine \(β\) in terms of the total energy \(U\).
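    As a concrete, purely illustrative example, suppose the single-particle levels \(\epsilon_i\) form a small discrete set and \(N\), \(U\) are given. Since \(e^{\beta\mu} = N/\sum_i e^{-\beta\epsilon_i}\), the energy condition becomes a single equation for \(\beta\) alone, which can be solved numerically; the sketch below (Python, with made-up numbers) does this by bisection and then reconstructs \(\mu\) and the occupancies \(n_i\).

```python
import math

# Hypothetical discrete levels and totals, purely for illustration.
eps = [0.0, 1.0, 2.0, 3.0, 4.0]      # single-particle energies (arbitrary units)
N, U = 1000.0, 1200.0                # U/N must lie between min(eps) and the infinite-T mean

def mean_energy(beta):
    """Average energy per particle: sum(eps*exp(-beta*eps)) / sum(exp(-beta*eps))."""
    z = sum(math.exp(-beta * e) for e in eps)
    return sum(e * math.exp(-beta * e) for e in eps) / z

# mean_energy(beta) decreases monotonically in beta, so bisection will do.
lo, hi = 1e-8, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > U / N:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
mu = math.log(N / sum(math.exp(-beta * e) for e in eps)) / beta
n = [math.exp(-beta * (e - mu)) for e in eps]

print("beta =", beta, "  mu =", mu)
print("sum n_i       =", sum(n), "  (should be N =", N, ")")
print("sum n_i eps_i =", sum(ni * e for ni, e in zip(n, eps)), "  (should be U =", U, ")")
```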

    In order to complete the calculation, we need to identify the summation over the index \(i\). This should cover all possible states of each particle. For a free particle, this would include all momenta and all possible positions. This means that we can replace the summation by an integration over \(d^3p\; d^3x\). Further, the single-particle energy is given by

    \[ \epsilon\;=\;\frac{p^2}{2m} \]

    Since

    \[ \int d^3x\; d^3p \exp \left( -\frac{\beta p^2}{2m} \right) \;=\; V\left( \frac{2 \pi m}{\beta} \right)^{\frac{3}{2}} \label{7.2.12}\]

    we find from Equation \ref{7.2.10}

    \[ \begin{align} \beta\;&=\; \frac{3N}{2U} \\ \beta \mu \;&=\; \log \left[ \frac{N}{V} \left( \frac{\beta}{2 \pi m} \right)^{\frac{3}{2}} \right] \;=\; \log \left[ \frac{N}{V} \left( \frac{3N}{4 \pi m U} \right)^{\frac{3}{2}} \right] \end{align}\]
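    The Gaussian integral behind Equation \ref{7.2.12}, one factor of \(\left(2\pi m/\beta\right)^{1/2}\) per momentum component, can be checked numerically; a minimal sketch in Python, with arbitrary sample values of \(\beta\) and \(m\):

```python
import math

# Check of the Gaussian integral behind Eq. (7.2.12): per momentum component,
#   int dp exp(-beta p^2 / 2m) = (2 pi m / beta)^{1/2},
# so the 3D momentum integral is this value cubed (times V from the d^3x part).
beta, m = 2.0, 3.0                      # arbitrary sample values
f = lambda p: math.exp(-beta * p * p / (2.0 * m))

a, b, n = -50.0, 50.0, 200_001          # range wide enough for the Gaussian to vanish
h = (b - a) / (n - 1)
numeric = h * (sum(f(a + i * h) for i in range(n)) - 0.5 * (f(a) + f(b)))   # trapezoid rule
exact = math.sqrt(2.0 * math.pi * m / beta)
print(numeric, "vs", exact)
```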

    The value of the entropy at the maximum can now be expressed as

    \[ S\;=\;k \left[ \frac{5}{2}N - N \log N + N \log V - \frac{3}{2}N \log \left( \frac{3N}{4 \pi m U} \right) \right] \label{7.2.14}\]

    From this, we find the relations

    \[ \begin{align} \left( \frac{\partial S}{\partial U} \right)_{V,N} &= k\, \frac{3N}{2U} = k\, \beta \\[4pt] \left( \frac{\partial S}{\partial N} \right)_{V,U} &= - k\,\log \left[ \frac{N}{V} \left( \frac{3N}{4 \pi m U} \right)^{\frac{3}{2}} \right] \;=\; -k\,\beta \mu \\[4pt] \left( \frac{\partial S}{\partial V} \right)_{U,N} &= k\, \frac{N}{V} \label{7.2.17} \end{align}\]
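    These derivatives are easy to check by finite differences, starting from Equation \ref{7.2.14}; the sketch below (Python, with \(k = m = 1\) and arbitrary sample values of \(U\), \(V\), \(N\)) does just that.

```python
import math

# Finite-difference check of the partial derivatives in Eq. (7.2.17), using the
# entropy of Eq. (7.2.14) with k = m = 1 and arbitrary sample values of U, V, N.
def S(U, V, N, m=1.0):
    return 2.5 * N - N * math.log(N) + N * math.log(V) \
           - 1.5 * N * math.log(3.0 * N / (4.0 * math.pi * m * U))

U, V, N = 7.0, 5.0, 3.0
h = 1e-6
dS_dU = (S(U + h, V, N) - S(U - h, V, N)) / (2 * h)
dS_dV = (S(U, V + h, N) - S(U, V - h, N)) / (2 * h)
dS_dN = (S(U, V, N + h) - S(U, V, N - h)) / (2 * h)

beta = 3.0 * N / (2.0 * U)
beta_mu = math.log((N / V) * (3.0 * N / (4.0 * math.pi * U)) ** 1.5)
print(dS_dU, "vs  k*beta    =", beta)
print(dS_dV, "vs  k*N/V     =", N / V)
print(dS_dN, "vs -k*beta*mu =", -beta_mu)
```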

    Comparing the relations in Equation \ref{7.2.17} with

    \[dU = T \,dS - p \,dV + \mu \,dN\]

    which is the same as Equation 5.1.8, we identify

    \[ \beta \;=\; \frac{1}{kT} \label{7.2.19}\]

    Further, \(\mu\) is the chemical potential and \(U\) is the internal energy. The last relation in Equation \ref{7.2.17} tells us that

    \[ p\;=\;\frac{N\;k\;T}{V} \]

    which is the ideal gas equation of state.
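    As a quick numerical illustration of the equation of state, one mole of an ideal gas at \(T = 273.15\) K confined to about \(22.4\) liters should exert a pressure of roughly one atmosphere:

```python
# p = N k T / V for one mole at T = 273.15 K in V = 22.414 L (SI units throughout).
k  = 1.380649e-23      # Boltzmann constant, J/K
NA = 6.02214076e23     # Avogadro number
N, T, V = NA, 273.15, 22.414e-3
print(N * k * T / V, "Pa   (1 atm = 101325 Pa)")
```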

    Once we have the identification from Equation \ref{7.2.19}, we can also express the chemical potential and internal energy as functions of the temperature:

    \[ \begin{align} \mu\;&=\;kT \log \left[ \frac{N}{V} \left( \frac{1}{2 \pi m kT} \right)^{\frac{3}{2}} \right] \\ U\;&=\;\frac{3}{2}NkT \end{align} \]

    The last relation also gives the specific heats for a monatomic ideal gas as

    \[ C_v=\frac{3}{2}Nk,\;\;\;\; C_p=\frac{5}{2}Nk \]
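    The step from \(U = \frac{3}{2}NkT\) to these values uses \(C_v = (\partial U/\partial T)_V\) and, for an ideal gas, \(C_p = C_v + Nk\), since the enthalpy is \(H = U + pV = U + NkT\). A small finite-difference sketch in Python, in units with \(Nk = 1\):

```python
# Finite-difference check: U = (3/2) N k T gives C_v = 3Nk/2, and the enthalpy
# H = U + pV = U + NkT gives C_p = 5Nk/2.  Units with N*k = 1 are assumed.
Nk = 1.0

def U(T):
    return 1.5 * Nk * T

def H(T):
    return U(T) + Nk * T

T, h = 300.0, 1e-3
Cv = (U(T + h) - U(T - h)) / (2 * h)
Cp = (H(T + h) - H(T - h)) / (2 * h)
print("C_v =", Cv, " (expect 1.5)   C_p =", Cp, " (expect 2.5)")
```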

    These specific heats do not vanish as \(T \rightarrow 0\), so clearly we are not consistent with the third law of thermodynamics. This is because of the classical statistics we have used. The third law is a consequence of quantum dynamics. So, apart from the third law, we see that with Boltzmann’s identification of entropy as \(S = k \log W\), we get all the expected thermodynamic relations and explicit formulae for the thermodynamic quantities.

    We have not addressed the normalization of the entropy carefully so far. There are two factors of importance. In arriving at Equation \ref{7.2.14}, we omitted the \(N!\) in \(W\), using \(\frac{W}{N!}\) in Boltzmann’s formula rather than \(W\) itself. This division by \(N!\) is called the Gibbs factor and helps to avoid a paradox about the entropy of mixing, as explained below. Further, the number of states cannot be just given by \(d^3x\; d^3p\) since this is, among other reasons, a dimensionful quantity. The correct prescription comes from quantum mechanics which gives the semiclassical formula for the number of states as

    \[ \text{Number of states} \;=\; \frac{d^3x\,d^3p}{(2 \pi \hbar)^3} \]

    where \(\hbar = h/2\pi\) is the reduced Planck constant, \(h\) being Planck’s constant. Including this factor, the entropy can be expressed as

    \[ S\;=\;N\,k\,\left[ \frac{5}{2} + \log \left( \frac{V}{N} \right) + \frac{3}{2}\log \left( \frac{U}{N} \right) + \frac{3}{2}\log \left( \frac{4 \pi m}{3(2 \pi \hbar)^2} \right) \right] \label{7.2.25}\]

    This is known as the Sackur-Tetrode formula for the entropy of a classical ideal gas.
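    As a check that Equation \ref{7.2.25} produces sensible numbers, the sketch below (Python) evaluates it for one mole of helium at \(T = 298.15\) K and \(P = 10^5\) Pa, taking \(U = \frac{3}{2}NkT\) and \(V = NkT/P\); the result is close to the tabulated standard entropy of helium gas, roughly \(126\) J/(mol K).

```python
import math

# Sackur-Tetrode entropy, Eq. (7.2.25), for one mole of helium treated as an
# ideal gas at T = 298.15 K and P = 1e5 Pa (so U = 3NkT/2 and V = NkT/P).
k    = 1.380649e-23                   # Boltzmann constant, J/K
hbar = 1.054571817e-34                # reduced Planck constant, J s
NA   = 6.02214076e23                  # Avogadro number
m    = 4.0026 * 1.66053907e-27        # mass of a helium atom, kg

T, P = 298.15, 1.0e5
N = NA
V = N * k * T / P
U = 1.5 * N * k * T

S = N * k * (2.5 + math.log(V / N) + 1.5 * math.log(U / N)
             + 1.5 * math.log(4.0 * math.pi * m / (3.0 * (2.0 * math.pi * hbar) ** 2)))
print(S, "J/K per mole")              # about 126 J/K, close to the measured value
```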

    Gibbs Paradox

    Our expression for the entropy has omitted the factor \(N!\). The original formula for the entropy in terms of \(W\) includes the factor of \(N!\) in \(W\); this corresponds to an additional term \(N \log N - N\) relative to the formula in Equation \ref{7.2.25}. The question of whether we should keep it or not was considered immaterial, since the entropy contains an undetermined additive term anyway. However, Gibbs pointed out a paradox that arises with such a result. Consider two ideal gases at the same temperature, originally with volumes \(V_1\) and \(V_2\) and numbers of particles \(N_1\) and \(N_2\). Suppose they are mixed together; this creates some additional entropy, which can be calculated as \(S - S_1 - S_2\). Since \(U = \frac{3}{2}N kT\), if we use the formula from Equation \ref{7.2.25} without the factor of \(\frac{1}{N!}\) (which means with an additional \(N \log N - N\)), we find

    \[S - S_1 - S_2 = k [N \log V - N_1 \log V_1 - N_2 \log V_2]\]

    (We have also ignored the constants depending on \(m\) for now.) This entropy of mixing can be tested experimentally and is indeed correct for monatomic, nearly ideal gases. The paradox arises when we imagine making the gases more and more similar, taking the limit in which they become identical. In this case, we should not get any entropy of mixing, but the above formula gives

    \[S - S_1 - S_2 = k \left[N_1 \log \left( \frac{V}{V_1} \right) + N_2 \log \left( \frac{V}{V_2} \right) \right]\]

    (In this limit, the constants depending on \(m\) are the same, which is why we did not have to worry about them in posing this question.) This is the paradox. Gibbs suggested dividing out the factor of \(N!\), which leads to the formula in Equation \ref{7.2.25}. If we use that formula, there is no change of entropy for identical gases, because the specific volume \(\frac{V}{N}\) is the same before and after mixing. For dissimilar gases, the same entropy of mixing is still obtained. The Gibbs factor of \(N!\) arises naturally in quantum statistics.
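    The bookkeeping behind the paradox is short enough to spell out numerically. In the sketch below (Python, arbitrary units with \(k = 1\) and only the volume-dependent terms kept), the two sub-systems have the same density, so they represent "identical" gases: with the \(1/N!\) included the mixing entropy vanishes, while without it a spurious positive mixing entropy appears.

```python
import math

# Volume-dependent part of the entropy, in units of k, for N particles in volume V.
def s_with_gibbs(N, V):
    return N * math.log(V / N) + N     # the +N comes from -log N! ~ -(N log N - N)

def s_without(N, V):
    return N * math.log(V)             # no division by N!

# Two samples of the *same* gas at the same density (arbitrary sample values).
N1, V1 = 1.0, 1.0
N2, V2 = 2.0, 2.0
N, V = N1 + N2, V1 + V2

print("with 1/N!:   ", s_with_gibbs(N, V) - s_with_gibbs(N1, V1) - s_with_gibbs(N2, V2))  # 0
print("without 1/N!:", s_without(N, V) - s_without(N1, V1) - s_without(N2, V2))           # > 0
```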

    Equipartition

    The formula for the energy of a single particle is given by

    \[ \epsilon = \frac{p^2}{2m} = \frac{p_1^2 + p_2^2+ p_3^2}{2m} \]

    If we consider the integral in Equation \ref{7.2.12} for each direction of \(p\), we have

    \[ \int dx\;dp\; \exp \left( -\frac{\beta p_1^2}{2m} \right) \;=\; L_1 \left( \frac{2 \pi m}{\beta} \right)^{\frac{1}{2}}\]

    The corresponding contribution to the internal energy is \(\frac{1}{2} kT\) per particle, since the average of \(\frac{p_1^2}{2m}\) computed with this Gaussian weight is \(\frac{1}{2\beta} = \frac{1}{2}kT\); the three translational degrees of freedom therefore give \(\frac{3}{2} kT\) per particle. We have considered the translational degrees of freedom, corresponding to the motion of the particle in 3-dimensional space. For more complicated molecules, one has to include rotational and vibrational degrees of freedom as well. In general, in classical statistical mechanics, we find that each degree of freedom (each quadratic term in the energy) contributes \(\frac{1}{2} kT\). This is known as the equipartition theorem. The specific heat is thus given by

    \[ C_v = \frac{1}{2}k \;\times\; \text{total number of degrees of freedom} \]
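    A direct way to see the \(\frac{1}{2}kT\) per quadratic degree of freedom is to sample a single momentum component from its Boltzmann weight, which is a Gaussian of variance \(mkT\), and average the kinetic energy; a short Monte Carlo sketch in Python, with \(m = kT = 1\) purely for illustration:

```python
import math
import random

# For one momentum component the Boltzmann weight exp(-p^2/2mkT) is a Gaussian
# with variance m*k*T, so <p^2/2m> should come out close to kT/2.
m, kT = 1.0, 1.0                      # illustrative units
random.seed(0)
samples = [random.gauss(0.0, math.sqrt(m * kT)) for _ in range(200_000)]
avg_kinetic = sum(p * p for p in samples) / (2.0 * m * len(samples))
print("<p^2/2m> =", avg_kinetic, "   expected kT/2 =", kT / 2)
```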

    Quantum mechanically, equipartition does not hold, at least in this simple form, which is as it should be, since we know the specific heats must go to zero as \(T \rightarrow 0\).


    This page titled 7.2: Maxwell-Boltzmann Statistics is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by V. Parameswaran Nair.