
7.4: The Gibbsian Ensembles


    The distribution obtained in Equation 7.17 gives the most probable number of particles with momentum \(p\) as \(n_p = \text{exp}(−β(\epsilon_p − µ))\). This was obtained by considering the number of ways in which free particles can be distributed among the possible momentum values, subject to the constraints of fixed total particle number and total energy. We now want to consider some generalizations. First of all, one can ask whether a similar formula holds if we have an external potential. The barometric formula (2.13) has a similar form, since \(mgh\) is the potential energy of a molecule or atom in that context. So, for external potentials, one can make a similar argument.

    Interatomic or intermolecular forces are not so straightforward. In principle, if we have intermolecular forces, single-particle energy values are not easily identified. Further, in some cases one may even have new molecules formed by combinations or bound states of the old ones. Should these be counted as one particle or as two or more? So, one needs to understand the distribution from a more general perspective. The idea is to consider the physical system of interest as part of a larger system, with exchange of energy with the larger system. This is certainly closer to what actually obtains in most situations. When we study or do experiments with a gas at some given temperature, it is maintained at this temperature by being part of a larger system with which it can exchange energy. Likewise, one could also consider a case where exchange of particles is possible. The important point is that, if equilibrium is maintained, the exchange of energy or particles with the larger system will not change the distribution in the system under study significantly. Imagine that high-energy particles are scattered into the volume of gas under study from the environment. This can raise the temperature slightly. But a roughly equal number of particles of similar energy will be scattered out of the volume under study as well. Thus, while we will have fluctuations in energy and particle number, these will be very small compared to the average values in the limit of large numbers of particles. So this approach should be a good way to analyze systems statistically.

    Arguing along these lines one can define three standard ensembles for statistical mechanics: the micro-canonical, the canonical and the grand canonical ensembles. The canonical ensemble is the case where we consider the system under study (of fixed volume \(V\)) as one of a large number of similar systems which are all in equilibrium with larger systems with free exchange of energy possible. For the grand canonical ensemble, we also allow free exchange of particles, so that only the average value of the number of particles in the system under study is fixed. The micro-canonical ensemble is the case where we consider a system with fixed energy and fixed number of particles. (One could also consider fixing the values of other conserved quantities, either at the average level (for grand canonical case) or as rigidly fixed values (for the micro-canonical case)).

    We still need a formula for the probability for a given distribution of particles in various states. In accordance with the assumption of equal a priori probabilities, we expect the probability to be proportional to the number of states \(\mathcal{N}\) available to the system subject to the constraints on the conserved quantities. In classical mechanics, the set of possible trajectories for a system of particles is given by the phase space since the latter constitutes the set of possible initial data. Thus the number of states for a system of \(N\) particles would be proportional to the volume of the subspace of the phase space defined by the conserved quantities. In quantum mechanics, the number of states would be given in terms of the dimension of the Hilbert space. The semiclassical formula for the counting of states is then

    \[ d \mathcal{N}=\prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3}\]

    In other words, a cell of volume \((2 \pi ħ)^{3N}\) in phase space corresponds to a state in the quantum theory. (This holds for large numbers of states; in other words, it is semiclassical.) This gives a more precise meaning to the counting of states via the phase volume. In the microcanonical ensemble, the total number of states with total energy between \(E\) and \(E + \delta E\) would be

    \[ \mathcal{N}= \int_{H=E}^{H=E+ \delta E} \prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \equiv W(E) \]

    where \(H(\{ x \}, \{ p \})\) is the Hamiltonian of the \(N\)-particle system. The entropy is then defined by Boltzmann’s formula as \(S(E) = k \log W(E)\). For a Hamiltonian \(H = \sum_i \frac{p_i^2}{2m}\), this can be explicitly calculated and leads to the formulae we have already obtained. However, as explained earlier, this is not easy to do explicitly when the particles are interacting. Nevertheless, the key idea is that the required phase volume is proportional to the exponential of the entropy,

    \[\text{Probability} \propto \text{exp} \left( \frac{S}{k} \right) \]

    This idea can be carried over to the canonical and grand canonical ensembles.
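    As a concrete illustration of Boltzmann's formula \(S(E) = k \log W(E)\) for the free-particle Hamiltonian \(H = \sum_i \frac{p_i^2}{2m}\), here is a minimal numerical sketch (in units \(ħ = m = k = 1\); the function name is ours, not from the text). The phase volume with \(H \leq E\) is \(V^N\) times the volume of a \(3N\)-dimensional ball of radius \(\sqrt{2mE}\); for large \(N\) this is dominated by the thin shell near \(E\), so its logarithm gives the entropy. A \(1/N!\) factor (the Gibbs factor discussed later in this section) is included so that the result is extensive.

```python
import math

def entropy_ideal_gas(N, V, E, m=1.0, hbar=1.0, k=1.0):
    """S(E) = k log W(E) for H = sum_i p_i^2/(2m).

    W(E) is approximated by the phase volume of the region H <= E,
    which for large N is dominated by the shell E .. E + dE:
      W = (1/N!) V^N [pi^{3N/2} (2mE)^{3N/2} / Gamma(3N/2 + 1)]
          / (2 pi hbar)^{3N}
    """
    logW = (N * math.log(V)
            + (3 * N / 2) * math.log(2 * math.pi * m * E)   # pi^{3N/2} (2mE)^{3N/2}
            - math.lgamma(3 * N / 2 + 1)                    # / Gamma(3N/2 + 1)
            - 3 * N * math.log(2 * math.pi * hbar)          # / (2 pi hbar)^{3N}
            - math.lgamma(N + 1))                           # Gibbs 1/N!
    return k * logW
```

    Doubling \(N\), \(V\), and \(E\) together doubles the entropy (up to subleading \(\log N\) terms), which is exactly the extensivity that the \(1/N!\) factor guarantees.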

    In the canonical ensemble, we consider the system of interest as part of a much larger system, with, say, \(N + M\) particles. The total number of available states is then

    \[ d \mathcal{N}=\prod_{i=1}^{N+M} \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3}\]

    The idea is then to integrate over the \(M\) particles to obtain the phase volume for the remaining particles, viewed as a subsystem. We refer to this subsystem of interest as system 1, while the \(M\) particles which are integrated out will be called system 2. If the total energy is \(E\), we take system 1 to have energy \(E_1\), with system 2 having energy \(E − E_1\). Of course, \(E_1\) is not fixed, but can vary, as there can be some exchange of energy between the two systems. Integrating out system 2 leads to

    \[ \delta \mathcal{N}=\prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} W(E-E_1)=\prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} e^{\frac{S(E-E_1)}{k}}\]

    We then expand \(S(E)\) as

    \[ \begin{equation}
    \begin{split}
    S(E − E_1) & = S(E) − E_1 \left( \frac{∂S}{∂E} \right)_{V,N} \;+\; ... \\[0.125in]
    & = S(E) − \frac{1}{T}E_1 \;+\; ... \\[0.125in]
    & = S(E) − \frac{H_N}{T} \;+\;...
    \end{split}
    \end{equation} \label{7.4.6}\]

    where we have used the thermodynamic formula for the temperature. The temperature is the same for system 1 and the larger system (system 1 + system 2) of which it is a part. In the last line, \(E_1\) has been replaced by \(H_N\), the Hamiltonian of the \(N\) particles in system 1. This shows that, as far as the system under study is concerned, we can take the probability as

    \[ \text{Probability} = C \prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \text{exp} \left( -\frac{H_N(x,p)}{kT} \right) \label{7.4.7} \]

    Here C is a proportionality factor which can be set by the normalization requirement that the total probability (after integration over all remaining variables) is 1. (The factor \(e^\frac{S(E)}{k}\) from Equation \ref{7.4.6} can be absorbed into the normalization as it is a constant independent of the phase space variables for the particles in system 1. Also, the subscript 1 referring to the system under study is now redundant and has been removed.)

    There are higher powers in the Taylor expansion in Equation \ref{7.4.6} which have been neglected. The idea is that these are very small, since \(E_1\) is small compared to the energy of the total system. In doing the integration over the remaining phase space variables, one could in principle have regions with \(H_N\) comparable to \(E\), where the neglect of terms of order \(E_1^2\) may not seem justified. However, Equation \ref{7.4.7}, viewed as a distribution for the energy, is sharply peaked around a certain average value with very small fluctuations, so that the regions with \(E_1\) comparable to \(E\) have exponentially vanishing probability. This is the ultimate justification for neglecting the higher terms in the expansion in Equation \ref{7.4.6}. We can verify this a posteriori by calculating the mean square fluctuation in the energy given by the probability distribution in Equation \ref{7.4.7}. This will be taken up shortly.
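    The smallness of the fluctuations can also be seen in a quick simulation. For the free-particle Hamiltonian, the canonical distribution makes each momentum component an independent Gaussian of variance \(mkT\); sampling total energies and comparing the spread to the mean shows the relative fluctuation falling off as \(1/\sqrt{N}\) (for this Hamiltonian, \(\sqrt{2/3N}\)). A minimal sketch in units \(m = kT = 1\); the function names are ours:

```python
import math
import random

def sample_energy(N, kT=1.0, m=1.0, rng=random):
    """Draw one total energy E = sum_i p_i^2/(2m) for N free particles,
    each momentum component Gaussian with variance m kT."""
    sigma = math.sqrt(m * kT)
    return sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3 * N)) / (2 * m)

def relative_fluctuation(N, samples=1000, kT=1.0, seed=0):
    """Monte Carlo estimate of sqrt(<(E - <E>)^2>) / <E>;
    theory for this Hamiltonian: sqrt(2 / (3 N))."""
    rng = random.Random(seed)
    es = [sample_energy(N, kT, rng=rng) for _ in range(samples)]
    mean = sum(es) / samples
    var = sum((e - mean) ** 2 for e in es) / samples
    return math.sqrt(var) / mean
```

    For \(N = 100\) the relative fluctuation is already below ten percent, and it shrinks further as \(N\) grows, consistent with the sharply peaked distribution invoked above.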

    Turning to the grand canonical case, when we allow exchange of particles as well, we get

    \[ \begin{equation}
    \begin{split}
    S(E − E_1, (N + M) − N) & = S(E, N+M) − E_1 \left( \frac{∂S}{∂E} \right)_{V,N} - N\left( \frac{∂S}{∂N} \right)_{E,V} \;+\; ... \\[0.125in]
    & = S(E, N+M) − \frac{1}{T}E_1 + \frac{\mu}{T}N \;+\; ... \\[0.125in]
    & = S(E, N+M) − \frac{1}{T}(H_N - \mu N) \;+\;...
    \end{split}
    \end{equation} \label{7.4.8}\]

    By a similar reasoning as in the case of the canonical ensemble, we find, for the grand canonical ensemble,

    \[ \text{Probability} \equiv dp_N = C \prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \text{exp} \left( -\frac{H(x,p)- \mu N}{kT} \right) \label{7.4.9} \]

    More generally, let us denote by \(\mathcal{O}_α\) an additively conserved quantum number or observable other than energy. The general formula for the probability distribution is then

    \[ dp_N = C \prod_{i=1}^N \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \text{exp} \left( -\frac{H(x,p)- \sum_{\alpha} \mu_\alpha \mathcal{O}_α}{kT} \right) \]

    Even though we write the expression for \(N\) particles, it should be kept in mind that averages involve a summation over \(N\) as well. Thus the average of some observable \(B(x, p)\) is given by

    \[ \langle B \rangle = \sum_{N=0}^{\infty} \int dp_N\;B_N(x,p) \]
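    As an illustration of such sums over \(N\): for free particles, integrating \(dp_N\) over the phase space variables leaves a probability for the particle number alone, \(P(N) = e^{-z} z^N/N!\), a Poisson distribution with \(z = e^{\mu/kT} V/\lambda^3\), where \(\lambda\) is the thermal wavelength and the \(1/N!\) Gibbs factor defined just below is included. A small sketch with illustrative function names of our own:

```python
import math

def P_of_N(N, z):
    """P(N) = z^N e^{-z} / N!  (Poisson), computed via logs for stability.
    z = exp(mu/kT) V / lam^3 plays the role of the mean particle number."""
    return math.exp(N * math.log(z) - z - math.lgamma(N + 1))

def grand_average(B, z, cutoff=200):
    """<B> = sum_N P(N) B(N): the sum over N that accompanies
    the phase-space integration in the grand canonical ensemble."""
    return sum(P_of_N(N, z) * B(N) for N in range(cutoff))
```

    For example, `grand_average(lambda N: N, z)` returns the mean particle number, which for a Poisson distribution is \(z\) itself.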

    Since the normalization factor \(C\) is fixed by the requirement that the total probability is 1, it is convenient to define the “partition function”. In the canonical case, it is given by

    \[Q_N = \int \frac{1}{N!} \prod_{i=1}^N \left[ \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \right] \text{exp} \left( -\frac{H(x,p)}{kT} \right) \label{7.4.12} \]

    We have introduced an extra factor of \(\frac{1}{N!}\). This is the Gibbs factor needed for resolving the Gibbs paradox; it is natural in the quantum counting of states. Effectively, because the particles are identical, permutation of particles should not be counted as a new configuration, so the phase volume must be divided by \(N!\) to get the “correct” counting of states. We will see that even this is not entirely adequate when full quantum effects are taken into account. In the grand canonical case, the partition function is defined by

    \[Z = \sum_N \int \frac{1}{N!} \prod_{i=1}^N \left[ \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \right] \text{exp} \left( -\frac{H(x,p)- \sum_{\alpha} \mu_\alpha \mathcal{O}_α}{kT} \right) \label{7.4.13} \]

    Using the partition functions in place of \(\frac{1}{C}\), and including the Gibbs factor, we find the probability of a given configuration as

    \[dp_N = \frac{1}{Q_N} \frac{1}{N!} \prod_{i=1}^N \left[ \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \right] \text{exp} \left( -\frac{H(x,p)}{kT} \right) \]

    while for the grand canonical case we have

    \[dp_N = \frac{1}{Z} \frac{1}{N!} \prod_{i=1}^N \left[ \frac{d^3x_i d^3p_i}{(2 \pi ħ)^3} \right] \text{exp} \left( -\frac{H(x,p)- \sum_{\alpha} \mu_\alpha \mathcal{O}_α}{kT} \right) \]

    The partition function contains information about the thermodynamic quantities. Notice that, in particular,

    \[ \frac{1}{\beta} \frac{\partial}{\partial \mu_{\alpha}} \log Z = \langle \mathcal{O}_{\alpha} \rangle \\ -\frac{\partial}{\partial \beta} \log Z = \langle H - \sum_{\alpha} \mu_{\alpha} \mathcal{O}_{\alpha} \rangle = U - \sum_{\alpha} \mu_{\alpha} \langle \mathcal{O}_{\alpha} \rangle \label{7.4.16}\]

    We can also define the average value of the entropy (not the entropy of the configuration corresponding to a particular way of distributing particles among states, but the average over the distribution) as

    \[\bar{S} = k \left[ \log Z + \beta (U - \sum_{\alpha} \mu_{\alpha} \langle \mathcal{O}_{\alpha} \rangle ) \right] \label{7.4.17} \]

    While the averages \(U = \langle H \rangle\) and \(\langle \mathcal{O}_{\alpha} \rangle\) do not depend on the factors of \(N!\) and \((2 \pi ħ)^{3N}\), the entropy does. This is why we chose the normalization factors in Equation \ref{7.4.12} to be what they are.
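    For the free-particle Hamiltonian, the momentum integrals in Equation \ref{7.4.12} are Gaussian and give \(Q_N = \frac{1}{N!}(V/\lambda^3)^N\), with thermal wavelength \(\lambda = \sqrt{2\pi ħ^2/mkT}\). A short numerical sketch (units \(ħ = m = 1\); function names ours) that recovers the ideal gas law via the standard relation \(p = -(\partial F/\partial V)_T\) with \(F = -kT \log Q_N\), a result not derived in this section:

```python
import math

def log_QN(N, V, kT, m=1.0, hbar=1.0):
    """log Q_N from Eq. 7.4.12 for H = sum_i p_i^2/(2m): the Gaussian
    momentum integrals give Q_N = (1/N!) (V / lam^3)^N,
    with lam = sqrt(2 pi hbar^2 / (m kT))."""
    lam = math.sqrt(2 * math.pi * hbar ** 2 / (m * kT))
    return N * math.log(V / lam ** 3) - math.lgamma(N + 1)

def pressure(N, V, kT, dV=1e-6):
    """p = kT d(log Q_N)/dV  (from p = -dF/dV, F = -kT log Q_N),
    evaluated by a central difference."""
    return kT * (log_QN(N, V + dV, kT) - log_QN(N, V - dV, kT)) / (2 * dV)
```

    `pressure(N, V, kT)` reproduces \(p = NkT/V\), since only the \(N \log V\) term in \(\log Q_N\) depends on the volume.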

    Consider the case when we have only one conserved quantity, the particle number, in addition to the energy. In this case, Equation \ref{7.4.17} can be written as

    \[µ N = U − T S + kT \log Z\]

    Comparing this with the definition of the Gibbs free energy in Equation 5.6 and its expression in terms of \(µ\) in Equation 5.16, we find that we can identify

    \[ p V = kT \log Z \label{7.4.19} \]

    This gives the equation of state in terms of the partition function.
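    For free particles this can be checked explicitly: summing Equation \ref{7.4.13} over \(N\) (with only particle number conserved) gives \(Z = \exp\left(e^{\mu/kT} V/\lambda^3\right)\), so \(\log Z = e^{\mu/kT} V/\lambda^3\), which by Equation \ref{7.4.16} is just \(\langle N \rangle\); Equation \ref{7.4.19} then reduces to \(pV = \langle N \rangle kT\). A minimal numerical check (units \(ħ = m = 1\); function names ours):

```python
import math

def log_Z(mu, V, kT, m=1.0, hbar=1.0):
    """log Z from Eq. 7.4.13 for free particles: summing (1/N!) z^N
    over N gives Z = exp(z), with z = exp(mu/kT) V / lam^3 and
    lam = sqrt(2 pi hbar^2 / (m kT))."""
    lam = math.sqrt(2 * math.pi * hbar ** 2 / (m * kT))
    return math.exp(mu / kT) * V / lam ** 3

def avg_N(mu, V, kT, dmu=1e-6):
    """<N> = (1/beta) d(log Z)/d mu  (Eq. 7.4.16), central difference."""
    return kT * (log_Z(mu + dmu, V, kT) - log_Z(mu - dmu, V, kT)) / (2 * dmu)
```

    Since \(\log Z = \langle N \rangle\) here, Equation \ref{7.4.19} gives \(pV = \langle N \rangle kT\): the ideal gas equation of state.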

    Equations \ref{7.4.13} and \ref{7.4.16}–\ref{7.4.19} are very powerful. Almost all of the thermodynamics we have discussed before is contained in them. Further, they can be used to calculate various quantities, including corrections due to interactions among particles. As an example, one can consider the calculation of corrections to the equation of state in terms of the intermolecular potential.


    This page titled 7.4: The Gibbsian Ensembles is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by V. Parameswaran Nair.
