
3.1: Ideal Classical Gas


    Direct interactions of typical atoms and molecules are well localized, i.e. rapidly decreasing with the distance \(r\) between them and becoming negligible at a certain distance \(r_0\). In a gas of \(N\) particles inside volume \(V\), the average distance \(r_{ave}\) between the particles is \((V/N)^{1/3}\). As a result, if the gas density \(n \equiv N/V = (r_{ave})^{-3}\) is much lower than \(r_0^{-3}\), i.e. if \(nr_0^3 << 1\), the chance for its particles to approach each other and interact is rather small. The model in which such direct interactions are completely ignored is called the ideal gas.

    Let us start with a classical ideal gas, which may be defined as the ideal gas in whose behavior the quantum effects are also negligible. As was discussed in Sec. 2.8, the condition of that is to have the average occupancy of each quantum state low:

    \[\langle N_k \rangle << 1. \label{1}\]

    It may seem that we have already found all properties of such a system, in particular the equilibrium occupancy of its states – see Equation (\(2.8.1\)):

    \[\langle N_k \rangle = const \times \text{exp} \left\{-\frac{\varepsilon_k}{T}\right\}.\label{2}\]

    In some sense this is true, but we still need, first, to see what exactly Equation (\ref{2}) means for the gas, a system with an essentially continuous energy spectrum, and, second, to show that, rather surprisingly, the particles’ indistinguishability affects some properties of even classical gases.

    The first of these tasks is evidently easiest for gas out of any external fields, and with no internal degrees of freedom.1 In this case, \(\varepsilon_k\) is just the kinetic energy of the particle, which is an isotropic and parabolic function of \(p\):

    \[\varepsilon_k = \frac{p^2}{2m} = \frac{p^2_x+p^2_y+p^2_z}{2m}.\label{3}\]

    Now we have to use two facts from other fields of physics, hopefully well known to the reader. First, in quantum mechanics, the linear momentum \(\mathbf{p}\) is associated with the wavevector \(\mathbf{k}\) of the de Broglie wave, \(\mathbf{p} = \hbar \mathbf{k}\). Second, the eigenvalues of \(\mathbf{k}\) for any waves (including the de Broglie waves) in free space are uniformly distributed in the momentum space, with a constant density of states, given by Equation (\(2.6.1\)):

    \[\frac{dN_{states}}{d^3k}=\frac{gV}{(2\pi )^3}, \quad \text{ i.e. } \frac{dN_{states}}{d^3p} = \frac{gV}{(2\pi \hbar)^3}, \label{4}\]

    where \(g\) is the degeneracy of the particle’s internal states (for example, for all spin-1/2 particles, the spin degeneracy \(g = 2s + 1 = 2\)). Even regardless of the exact proportionality coefficient between \(dN_{states}\) and \(d^3p\), the very fact that this coefficient does not depend on \(\mathbf{p}\) means that the probability \(dW\) of finding the particle in a small region \(d^3p = dp_1dp_2dp_3\) of the momentum space is proportional to the right-hand side of Equation (\ref{2}), with \(\varepsilon_k\) given by Equation (\ref{3}):

    Maxwell distribution:

    \[\boxed{dW = C \text{exp}\left\{-\frac{p^2}{2mT}\right\}d^3p = C\text{exp}\left\{-\frac{p_1^2+p_2^2+p_3^2}{2mT}\right\} dp_1 dp_2 dp_3. }\label{5}\]

    This is the famous Maxwell distribution.2 The normalization constant \(C\) may be readily found from the last form of Equation (\ref{5}), by requiring the integral of \(dW\) over all the momentum space to equal 1. Indeed, the integral is evidently a product of three similar 1D integrals over each Cartesian component \(p_j\) of the momentum \((j = 1, 2, 3)\), which may be readily reduced to the well-known dimensionless Gaussian integral,3 so that we get

    \[C=\left[\int_{-\infty}^{+\infty} \exp \left\{-\frac{p_{j}^{2}}{2 m T}\right\} d p_{j}\right]^{-3} \equiv\left[(2 m T)^{1 / 2} \int_{-\infty}^{+\infty} e^{-\xi^{2}} d \xi\right]^{-3}=(2 \pi m T)^{-3 / 2}.\label{6}\]
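As a quick numerical cross-check (not part of the original derivation), the 1D integral in Equation (\ref{6}) may be evaluated by simple quadrature; in units where \(m = T = 1\), it should equal \((2\pi)^{1/2} \approx 2.5066\). This is a minimal sketch, with the integration range and grid size chosen ad hoc:

```python
import math

def gauss_integral(m=1.0, T=1.0, pmax=30.0, n=200_001):
    # Midpoint quadrature of exp(-p^2/2mT) over a wide finite range;
    # the integrand is negligible beyond |p| of a few (mT)^{1/2}.
    dp = 2.0 * pmax / n
    return sum(math.exp(-(-pmax + (i + 0.5) * dp) ** 2 / (2.0 * m * T)) * dp
               for i in range(n))

I1 = gauss_integral()
print(I1, math.sqrt(2.0 * math.pi))  # both ≈ 2.5066, so C = (2*pi*m*T)**(-3/2)
```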

    As a sanity check, let us use the Maxwell distribution to calculate the average energy corresponding to each half-degree of freedom:

    \[\begin{align} \left\langle\frac{p_{j}^{2}}{2 m}\right\rangle &=\int \frac{p_{j}^{2}}{2 m} d W=\left[C^{1 / 3} \int_{-\infty}^{+\infty} \frac{p_{j}^{2}}{2 m} \exp \left\{-\frac{p_{j}^{2}}{2 m T}\right\} d p_{j}\right] \times\left[C^{1 / 3} \int_{-\infty}^{+\infty} \exp \left\{-\frac{p_{j^{\prime}}^{2}}{2 m T}\right\} d p_{j^{\prime}}\right]^{2} \nonumber \\&=\frac{T}{\pi^{1 / 2}} \int_{-\infty}^{+\infty} \xi^{2} e^{-\xi^{2}} d \xi .\label{7} \end{align}\]

    The last, dimensionless integral equals \(\pi^{1/2}/2\),4 so that, finally,

    \[\left\langle \frac{p^2_j}{2m}\right\rangle \equiv \left\langle \frac{mv^2_j}{2}\right\rangle = \frac{T}{2}.\label{8}\]
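The average (\ref{8}) may also be verified by direct numerical integration; in the sketch below (units with \(m = T = 1\), so the expected value is \(1/2\)), the numerator and denominator of the weighted average are accumulated together:

```python
import math

def avg_half_dof(m=1.0, T=1.0, pmax=30.0, n=200_001):
    # <p_j^2/2m> = ∫ (p^2/2m) exp(-p^2/2mT) dp / ∫ exp(-p^2/2mT) dp
    dp = 2.0 * pmax / n
    num = den = 0.0
    for i in range(n):
        p = -pmax + (i + 0.5) * dp
        w = math.exp(-p * p / (2.0 * m * T))
        num += (p * p / (2.0 * m)) * w * dp
        den += w * dp
    return num / den

print(avg_half_dof())  # ≈ 0.5, i.e. T/2 in the chosen units
```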

    This result is (fortunately :-) in agreement with the equipartition theorem (\(2.2.30\)). It also means that the r.m.s. velocity of each particle is

    \[\delta v \equiv \langle v^2 \rangle^{1/2} = \left\langle \sum^3_{j=1} v^2_j \right\rangle^{1/2} = \langle 3v^2_j\rangle^{1/2} = \left(3 \frac{T}{m}\right)^{1/2}. \label{9}\]

    For a typical gas (say, for \(\ce{N2}\), the air’s main component), with \(m \approx 28m_p \approx 4.7 \times 10^{-26}\) kg, this velocity at room temperature (\(T = k_BT_K \approx k_B \times 300\) K \(\approx 4.1 \times 10^{-21}\) J) is about 500 m/s, comparable with the sound velocity in the same gas – and with the muzzle velocity of a typical handgun bullet. Still, it is measurable even with the simple table-top equipment (say, a set of two concentric, rapidly rotating cylinders with a thin slit collimating an atomic beam emitted at the axis) that was available at the end of the \(19^{th}\) century. Experiments using such equipment gave convincing early confirmations of the Maxwell distribution.
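For the reader who wants to reproduce this estimate, here is the r.m.s. velocity (\ref{9}) for \(\ce{N2}\) at room temperature, evaluated with standard SI constants:

```python
import math

k_B = 1.380649e-23    # J/K (exact SI value)
m_p = 1.67262192e-27  # kg, proton mass (CODATA)
m = 28 * m_p          # N2 molecule, ≈ 4.7e-26 kg as in the text

T = k_B * 300.0       # room temperature in energy units, ≈ 4.1e-21 J
v_rms = math.sqrt(3.0 * T / m)
print(v_rms)          # ≈ 515 m/s, i.e. "about 500 m/s"
```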

    This is all very simple (isn’t it?), but actually the thermodynamic properties of a classical gas, especially its entropy, are more intricate. To show that, let us apply the Gibbs distribution to a gas portion consisting of \(N\) particles, rather than just one of them. If the particles are exactly similar, the eigenenergy spectrum \(\{\varepsilon_k\}\) of each of them is also exactly the same, and each value \(E_m\) of the total energy is just the sum of particular energies \(\varepsilon_{k(l)}\) of the particles, where \(k(l)\), with \(l = 1, 2, ... N\), is the number of the energy level on which the \(l^{th}\) particle resides. Moreover, since the gas is classical, \(\langle N_k \rangle << 1\), the probability of having two or more particles in any state may be ignored. As a result, we can use Equation (\(2.4.8\)) to write

    \[Z \equiv \sum_{m} \exp \left\{-\frac{E_{m}}{T}\right\}=\sum_{k(l)} \exp \left\{-\frac{1}{T} \sum_{l} \varepsilon_{k(l)}\right\}=\sum_{k(1)} \sum_{k(2)} \ldots \sum_{k(N)} \prod_{l} \exp \left\{-\frac{\varepsilon_{k(l)}}{T}\right\} , \label{10}\]

    where the summation has to be carried over all possible states of each particle. Since the summation over each set \(\{k(l)\}\) concerns only one of the operands of the product of exponents under the sum, it is tempting to complete the calculation as follows:

    \[Z \rightarrow Z_{\text {dist }}=\sum_{k(1)} \exp \left\{-\frac{\varepsilon_{k(1)}}{T}\right\} \cdot \sum_{k(2)} \exp \left\{-\frac{\varepsilon_{k(2)}}{T}\right\} \ldots \cdot \sum_{k(N)} \exp \left\{-\frac{\varepsilon_{k(N)}}{T}\right\}=\left(\sum_{k} \exp \left\{-\frac{\varepsilon_{k}}{T}\right\}\right)^{N}, \label{11}\]

    where the final summation is over all states of one particle. This formula is indeed valid for distinguishable particles.5 However, if the particles are indistinguishable (again, meaning that they are internally identical and free to move within the same spatial region), Equation (\ref{11}) has to be modified by what is called the correct Boltzmann counting:

    Correct Boltzmann counting:

    \[\boxed{Z = \frac{1}{N!} \left( \sum_k \text{exp}\left\{-\frac{\varepsilon_k}{T}\right\}\right)^N, } \label{12}\]

    which considers all quantum states that differ only by particle permutations as the same state. For the essentially continuous single-particle spectrum (\ref{3}), the sum over the states may be calculated by replacing it with an integral, using the density of states (\ref{4}):

    \[\sum_{k}(\ldots) \rightarrow \int(\ldots) d N_{\text {states }}=\frac{g V}{(2 \pi)^{3}} \int(\ldots) d^{3} k=\frac{g V}{(2 \pi \hbar)^{3}} \int(\ldots) d^{3} p. \label{13}\]

    In application to Equation (\ref{12}), this rule yields

    \[Z=\frac{1}{N !}\left(\frac{g V}{(2 \pi \hbar)^{3}}\left[\int_{-\infty}^{+\infty} \exp \left\{-\frac{p_{j}^{2}}{2 m T}\right\} d p_{j}\right]^{3}\right)^{N}.\label{14}\]

    The integral in the square brackets is the same one as in Equation (\ref{6}), i.e. is equal to \((2\pi mT)^{1/2}\), so that finally

    \[Z=\frac{1}{N !}\left(\frac{g V}{(2 \pi \hbar)^{3}}(2 \pi m T)^{3 / 2}\right)^{N} \equiv \frac{1}{N !}\left[g V\left(\frac{m T}{2 \pi \hbar^{2}}\right)^{3 / 2}\right]^{N}. \label{15}\]

    Now, assuming that \(N >> 1\),7 and applying the Stirling formula, we can calculate the gas’ free energy:

    \[F=T \ln \frac{1}{Z}=-N T \ln \frac{V}{N}+N f(T), \label{16a} \]

    with

    \[f(T) \equiv-T\left\{\ln \left[g\left(\frac{m T}{2 \pi \hbar^{2}} \right)^{3 / 2} \right]+ 1\right\} . \label{16b}\]

    The first of these relations exactly coincides with Equation (\(1.4.22\)), which was derived in Sec. 1.4 from the equation of state \(PV = NT\), using thermodynamic identities. At that stage, this equation of state was just postulated, but now we can derive it by calculating the pressure from the second of Eqs. (\(1.4.12\)), and Equation (\ref{16a}):

    \[P = -\left(\frac{\partial F}{\partial V} \right)_T = \frac{NT}{V}. \label{17}\]

    So, the equation of state of the ideal classical gas, with density \(n \equiv N/V\), is indeed given by Equation (\(1.4.21\)):

    \[P = \frac{NT}{V} \equiv nT. \label{18}\]
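The derivative in Equation (\ref{17}) is easy to confirm with a finite-difference sketch: dropping the \(V\)-independent part of the free energy (\ref{16a}) (it does not affect the derivative over \(V\)), numerical differentiation indeed recovers \(P = NT/V\). The particular values of \(N\), \(V\), and \(T\) below are arbitrary illustration choices in natural units:

```python
import math

N, T = 1.0e4, 1.5  # arbitrary illustration values (natural units)

def F(V):
    # Free energy (16a) with the V-independent term N*f(T) set to zero:
    # such constants drop out of the derivative over V anyway.
    return -N * T * math.log(V / N)

V, dV = 2.0, 1e-6
P_numeric = -(F(V + dV) - F(V - dV)) / (2.0 * dV)  # central difference
print(P_numeric, N * T / V)                        # both ≈ 7500.0
```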

    Hence we may use Eqs. (\(1.4.23\))-(\(1.4.28\)), derived from this equation of state, to calculate all other thermodynamic variables of the gas. For example, using Equation (\(1.4.24\)) with \(f(T)\) given by Equation (\ref{16b}), for the internal energy and the specific heat of the gas we immediately get

    \[E=N\left[f(T)-T \frac{d f(T)}{d T}\right]=\frac{3}{2} N T, \quad c_{V} \equiv \frac{C_{V}}{N}=\frac{1}{N}\left(\frac{\partial E}{\partial T}\right)_{V}=\frac{3}{2},\label{19}\]

    in full agreement with Equation (\ref{8}) and hence with the equipartition theorem.

    Much less trivial is the result for entropy, which may be obtained by combining Eqs. (\(1.4.23\)) and (\ref{16a}):

    \[S = - \left(\frac{\partial F}{\partial T}\right)_V = N \left[ \ln \frac{V}{N} - \frac{df(T)}{dT} \right]. \label{20}\]

    This formula,8 in particular, provides the means to resolve the following gas mixing paradox (sometimes called the “Gibbs paradox”). Consider two volumes, \(V_1\) and \(V_2\), separated by a partition, each filled with the same gas, with the same density \(n\), at the same temperature \(T\), and hence with the same pressure \(P\). Now let us remove the partition and let the gas portions mix; would the total entropy change? According to Equation (\ref{20}), it would not, because the ratio \(V/N = 1/n\), and hence the expression in the square brackets is the same in the initial and the final state, so that the entropy is additive, as any extensive variable should be. This makes full sense if the gas particles in both parts of the volume are truly identical, i.e. the partition’s removal does not change our information about the system. However, let us assume that all particles are distinguishable; then the entropy should clearly increase because the mixing would decrease our information about the system, i.e. increase its disorder. A quantitative description of this effect may be obtained using Equation (\ref{11}). Repeating for \(Z_{dist}\) the calculations made above for \(Z\), we readily get a different formula for entropy:

    \[S_{dist} = N\left[ \ln V - \frac{df_{dist}(T)}{dT} \right], \quad f_{dist} (T) \equiv - T \ln \left[g \left(\frac{mT}{2\pi \hbar^2} \right)^{3/2} \right]. \label{21}\]

    Please notice that in contrast to the \(S\) given by Equation (\ref{20}), this entropy includes the term \(\ln V\) instead of \(\ln (V/N)\), so that \(S_{dist}\) is not proportional to \(N\) (at fixed temperature \(T\) and density \(N/V\)). While for distinguishable particles this fact does not present any conceptual problem, for indistinguishable particles it would mean that entropy was not an extensive variable, i.e. would contradict the basic assumptions of thermodynamics. This fact emphasizes again the necessity of the correct Boltzmann counting in the latter case.

    Using Equation (\ref{21}), we can calculate the change of entropy due to mixing two gas portions, with \(N_1\) and \(N_2\) distinguishable particles, at a fixed temperature \(T\) (and hence at unchanged function \(f_{dist}\)):

    \[\begin{align} \Delta S_{\text {dist }} &=\left(N_{1}+N_{2}\right) \ln \left(V_{1}+V_{2}\right)-\left(N_{1} \ln V_{1}+N_{2} \ln V_{2}\right) \nonumber\\[4pt] &=N_{1} \ln \frac{V_{1}+V_{2}}{V_{1}}+N_{2} \ln \frac{V_{1}+V_{2}}{V_{2}} \nonumber\\[4pt] &>0. \label{22} \end{align}\]

    Note that for the particular case \(V_1 = V_2 = V/2\), Equation (\ref{22}) reduces to the simple result \(\Delta S_{dist} = (N_1 + N_2) \ln 2\), which may be readily understood in terms of information theory. Indeed, allowing each particle of the total number \(N = N_1 + N_2\) to spread into a twice larger volume, we lose one bit of information per particle, i.e. \(\Delta I = (N_1 + N_2)\) bits for the whole system. Let me leave it for the reader to show that Equation (\ref{22}) is also valid if particles in each sub-volume are indistinguishable from each other, but different from those in the other sub-volume, i.e. for mixing of two different gases.9 However, it is certainly not applicable to the system where all particles are identical, stressing again that the correct Boltzmann counting (\ref{12}) does indeed affect the gas entropy, even though it may not be as consequential as the Maxwell distribution (\ref{5}), the equation of state (\ref{18}), and the average energy (\ref{19}).
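The mixing entropy (\ref{22}) and its equal-volume limit \(\Delta S_{dist} = (N_1 + N_2)\ln 2\) are easy to check numerically; the particle numbers and the total volume below are arbitrary illustration values:

```python
import math

def delta_S_dist(N1, V1, N2, V2):
    # Mixing entropy (22) for distinguishable particles
    return N1 * math.log((V1 + V2) / V1) + N2 * math.log((V1 + V2) / V2)

N1, N2, V = 300.0, 500.0, 4.0            # arbitrary illustration values
dS = delta_S_dist(N1, V / 2, N2, V / 2)  # equal sub-volumes
print(dS, (N1 + N2) * math.log(2.0))     # both ≈ 554.5: one lost bit per particle
```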

    Now let us briefly discuss two generalizations of our results for ideal classical gases. First, let us consider such gas in an external field of potential forces. It may be described by replacing Equation (\ref{3}) with

    \[\varepsilon_k = \frac{p^2_k}{2m} + U(\mathbf{r}_k),\label{23}\]

    where \(\mathbf{r}_k\) is the position of the \(k^{th}\) particle, and \(U(\mathbf{r})\) is its potential energy. If the potential \(U(\mathbf{r})\) changes in space sufficiently slowly,13 Equation (\ref{4}) is still applicable, but only to small volumes, \(V \rightarrow dV = d^3r\), whose linear size is much smaller than the spatial scale of substantial variations of the function \(U(\mathbf{r})\). Hence, instead of Equation (\ref{5}), we may only write the probability \(dW\) of finding the particle in a small volume \(d^3rd^3p\) of the 6-dimensional phase space:

    \[d W=w(\mathbf{r}, \mathbf{p}) d^{3} r d^{3} p, \quad w(\mathbf{r}, \mathbf{p})=\text { const } \times \exp \left\{-\frac{p^{2}}{2 m T}-\frac{U(\mathbf{r})}{T}\right\}. \label{24}\]

    Hence, the Maxwell distribution of particle velocities is still valid at each point \(\mathbf{r}\), so that the equation of state (\ref{18}) is also valid locally. A new issue here is the spatial distribution of the total density,

    \[n(\mathbf{r}) \equiv N \int w(\mathbf{r},\mathbf{p})d^3p, \label{25}\]

    of all gas particles, regardless of their momentum/velocity. For this variable, Equation (\ref{24}) yields14

    \[ n(\mathbf{r})=n(0)\text{exp}\left\{-\frac{U(\mathbf{r})}{T}\right\}, \label{26}\]

    where the potential energy at the origin \((\mathbf{r} = 0)\) is used as the reference of \(U\), and the local gas pressure may be still calculated from the local form of Equation (\ref{18}):

    \[P(\mathbf{r}) = n(\mathbf{r})T = P(0) \text{exp} \left\{-\frac{U(\mathbf{r})}{T}\right\}.\label{27}\]

    The most important particular case is a uniform gravity field near the Earth’s surface, with \(U = mgh\), where \(h\) is the height over some reference level. For this case, Equation (\ref{27}) yields the famous barometric formula:

    \[P(h) = P(0) \text{exp} \left\{-\frac{h}{h_0}\right\}, \quad \text{ with } h_0 \equiv \frac{T}{mg} = \frac{k_B T_K}{mg}. \label{28}\]

    For the same \(\ce{N2}\), the main component of the atmosphere, at \(T_K = 300\) K, \(h_0 \approx 9\) km. This gives the correct order of magnitude of the atmosphere’s thickness, though the exact law of the pressure change differs somewhat from Equation (\ref{28}), because the flows of radiation from the Sun and the Earth cause a relatively small deviation of the atmospheric air from thermal equilibrium: a drop of its temperature \(T\) with height, with the so-called lapse rate of about 2% (\(\sim 6.5\) K) per km.
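Evaluating \(h_0\) from Equation (\ref{28}) for \(\ce{N2}\) at \(T_K = 300\) K (a sketch with standard SI constants; note that the real, non-isothermal atmosphere has a somewhat smaller effective scale height, and the Everest comparison is an added illustration, not from the text):

```python
import math

k_B = 1.380649e-23        # J/K
m = 28 * 1.67262192e-27   # kg, N2 molecule
g = 9.81                  # m/s^2, near the Earth's surface

h0 = k_B * 300.0 / (m * g)  # scale height h0 = T/mg
print(h0)                   # ≈ 9.0e3 m

# Relative pressure at h = 8848 m (Everest summit), isothermal assumption:
print(math.exp(-8848.0 / h0))  # ≈ 0.37
```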

    The second generalization I need to discuss is to particles with internal degrees of freedom. Now ignoring the potential energy \(U(\mathbf{r})\), we may describe them by replacing Equation (\ref{3}) with

    \[\varepsilon_k = \frac{p^2}{2m} + \varepsilon_k', \label{29}\]

    where \(\varepsilon_k’\) describes the internal energy spectrum of the \(k^{th}\) particle. If the particles are similar, we may repeat all the above calculations, and see that all their results (including the Maxwell distribution, and the equation of state) are still valid, with the only exception of Equation (\ref{16a}-\ref{16b}), which now becomes

    \[f(T)=-T\left\{\ln \left[g\left(\frac{m T}{2 \pi \hbar^{2}}\right)^{3 / 2}\right]+1+\ln \left[\sum_{\varepsilon_{k}^{\prime}} \exp \left\{-\frac{\varepsilon_{k}^{\prime}}{T}\right\}\right]\right\} \label{30}.\]

    As we already know from Eqs. (\(1.4.27\))-(\(1.4.28\)), this change may affect both specific heats of the ideal gas – though not their difference, \(c_P - c_V = 1\). They may be readily calculated for usual atoms and molecules, at not very high temperatures (say, at room temperature, \(T \sim 25\) meV), because in these conditions, \(\varepsilon_k’ >> T\) for most of their internal degrees of freedom, including the electronic and vibrational ones. (The typical energy of the lowest electronic excitations is of the order of a few eV, and that of the lowest vibrational excitations is only an order of magnitude lower.) As a result, these degrees of freedom are “frozen out”: they are in their ground states, so that their contributions \(\text{exp}\{-\varepsilon_k’/T\}\) to the sum in Equation (\ref{30}), and hence to the heat capacity, are negligible. In monoatomic gases, this is true for all degrees of freedom besides those of the translational motion, already taken into account by the first term in Equation (\ref{30}), i.e. by Equation (\ref{16b}), so that their specific heat is typically well described by Equation (\ref{19}).

    The most important exception is the rotational degrees of freedom of diatomic and polyatomic molecules. As quantum mechanics shows,15 the excitation energy of these degrees of freedom scales as \(\hbar^2/2I\), where \(I\) is the molecule’s relevant moment of inertia. In the most important molecules, this energy is rather low (e.g. for \(\ce{N2}\), it is close to 0.25 meV, i.e. \(\sim 1\)% of the room temperature), so that at usual conditions they are well excited and, moreover, behave virtually as classical degrees of freedom, each giving a quadratic contribution to the molecule’s energy, and hence obeying the equipartition theorem, i.e. giving an extra contribution of \(T/2\) to the energy, i.e. 1/2 to the specific heat.16 In polyatomic molecules, there are three such classical degrees of freedom (corresponding to their rotations about three principal axes17), but in diatomic molecules, only two.18 Hence, these contributions may be described by the following generalization of Equation (\ref{19}):

    \[c_{V}= \begin{cases}3 / 2, & \text { for monoatomic gases, } \\ 5 / 2, & \text { for gases of diatomic molecules, } \\ 3, & \text { for gases of polyatomic molecules. }\end{cases} \label{31}\]

    Please keep in mind, however, that as the above discussion shows, this simple result is invalid at very low and very high temperatures; its most notable violation is the thermal activation of the vibrational degrees of freedom of many important molecules at temperatures of a few thousand K.
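The “freezing out” of the high-energy modes discussed above may be made tangible by evaluating the Boltzmann factors \(\text{exp}\{-\varepsilon_k'/T\}\) at room temperature for the three kinds of excitations. In this sketch, the 0.2 eV vibrational energy is an assumed representative value (an order of magnitude below typical electronic excitations, as the text states); the other two numbers follow the text:

```python
import math

T_room = 0.0259  # k_B * 300 K, in eV

# Representative excitation energies in eV; the vibrational value is an
# assumed order-of-magnitude number, the others are taken from the text:
modes = {"electronic": 2.0, "vibrational": 0.2, "rotational": 0.25e-3}

for name, eps in modes.items():
    print(name, math.exp(-eps / T_room))
# The electronic and vibrational factors come out tiny ("frozen out"),
# while the rotational one is ≈ 0.99, i.e. this mode is fully excited.
```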


    This page titled 3.1: Ideal Classical Gas is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Konstantin K. Likharev via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.