
4.4: Energy Dispersion in the Canonical Ensemble


    The systems in the canonical ensemble are not restricted to having just one particular energy or falling within a given range of energies. Instead, systems with any energy from the ground state energy to infinity are present in the ensemble, but systems with higher energies are less probable. In this circumstance, it is important to ask not only for the mean energy, but also for the dispersion (uncertainty, fluctuation, spread, standard deviation) in energy.

    Terminology: “Uncertainty” suggests that there’s one correct value, but measurement errors prevent your knowing it. (For example: “You are 183.3 ± 0.3 cm tall.”) “Dispersion” suggests that there are several values, each one correct. (For example: “The mean height of people in this room is 172 cm, with a dispersion (as measured by the standard deviation) of 8 cm.”) “Fluctuation” is similar to “dispersion,” but suggests that the value changes with time. (For example: My height fluctuates between when I slouch and when I stretch.) This book will use the term “dispersion” or, when tradition dictates, “fluctuation.” Other books use the term “uncertainty.”

    The energy of an individual member of the ensemble we call H(x), whereas the average energy of the ensemble members we call E:

    \[ H(\mathrm{x})=\text { microscopic energy of an individual system }\]

    \[ E=\langle H(\mathrm{x})\rangle=\text { thermodynamic energy for the ensemble. }\]

    The dispersion in energy ∆E is defined through

    \[ \begin{aligned} \Delta E^{2} &=\left\langle(H(\mathbf{x})-E)^{2}\right\rangle \\ &=\left\langle H^{2}(\mathbf{x})-2 H(\mathbf{x}) E+E^{2}\right\rangle \\ &=\left\langle H^{2}(\mathbf{x})\right\rangle-2 E^{2}+E^{2} \\ &=\left\langle H^{2}(\mathbf{x})\right\rangle-E^{2} \\ &=\left\langle H^{2}(\mathbf{x})\right\rangle-\langle H(\mathbf{x})\rangle^{2}. \end{aligned} \]

    This relation, which holds for the dispersion of any quantity under any type of average, is worth memorizing. Furthermore it’s easy to memorize: The only thing that might trip you up is whether the result is \( \left\langle H^{2}\right\rangle-\langle H\rangle^{2}\) or \( \langle H\rangle^{2}-\left\langle H^{2}\right\rangle\), but the result must be positive (it is equal to \( \Delta E^{2}\)) and it’s easy to see that the average of the squares must exceed the square of the average (consider a list of data containing both positive and negative numbers).
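
    The identity is easy to check numerically. Here is a minimal sketch in Python with NumPy (the list of sample “energies” is invented purely for illustration, not taken from the text) showing that the mean squared deviation and \( \left\langle H^{2}\right\rangle-\langle H\rangle^{2}\) give the same positive number:

        import numpy as np

        # A made-up list of sample energies, including negative values (illustration only).
        H = np.array([-3.0, -1.0, 0.5, 2.0, 4.5])

        mean_H = H.mean()                            # <H>
        mean_H2 = (H ** 2).mean()                    # <H^2>

        # The dispersion computed two ways: from the definition, and from the identity.
        var_definition = ((H - mean_H) ** 2).mean()  # <(H - <H>)^2>
        var_identity = mean_H2 - mean_H ** 2         # <H^2> - <H>^2

        print(var_definition, var_identity)          # identical, and both positive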

    Now it remains to find \( \left\langle H^{2}(\mathrm{x})\right\rangle\). Recall that we evaluated \( \left\langle H(\mathrm{x})\right\rangle\) through a slick trick (“parametric differentiation”) involving the derivative

    \[ \left.\frac{\partial \ln Z}{\partial \beta}\right)_{\text {parameters}},\]

    namely

    \[ \frac{\partial \ln Z}{\partial \beta}=\frac{1}{Z} \frac{\partial Z}{\partial \beta}=\frac{1}{Z} \frac{\partial}{\partial \beta}\left(\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}\right)=-\frac{\sum_{\mathbf{x}} H(\mathbf{x}) e^{-\beta H(\mathbf{x})}}{\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}}=-E\]
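
    The trick is easy to spot-check numerically. The sketch below (Python with NumPy) uses a hypothetical list of microstate energies and an arbitrary value of β, both chosen only for illustration with \(k_B = 1\), and confirms that \( -\partial \ln Z / \partial \beta\), evaluated by a centered finite difference, matches the Boltzmann average computed directly:

        import numpy as np

        # Hypothetical toy spectrum and inverse temperature (arbitrary values, k_B = 1).
        levels = np.array([0.0, 1.0, 2.5, 4.0])
        beta = 0.7

        def lnZ(b):
            """ln of the partition function for the toy spectrum."""
            return np.log(np.sum(np.exp(-b * levels)))

        # Direct canonical average <H(x)> from the Boltzmann probabilities.
        p = np.exp(-beta * levels)
        p /= p.sum()
        E_direct = np.sum(p * levels)

        # Parametric differentiation: E = -d(ln Z)/d(beta), via a centered difference.
        h = 1e-5
        E_from_lnZ = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

        print(E_direct, E_from_lnZ)                  # the two agree closely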

    The essential part of the trick was that the derivative with respect to β pulls down an H(x) from the exponent in the Boltzmann factor. In order to pull down two factors of H(x), we will need to take two derivatives. Thus the average \( \left\langle H^{2}(\mathrm{x})\right\rangle\) must be related to the second-order derivative

    \[ \left.\frac{\partial^{2} \ln Z}{\partial \beta^{2}}\right)_{\text {parameters}}.\]

    To see precisely how this works out, we take

    \[ \begin{aligned} \frac{\partial^{2} \ln Z}{\partial \beta^{2}} &=-\frac{\left(\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}\right)\left(-\sum_{\mathbf{x}} H^{2}(\mathbf{x}) e^{-\beta H(\mathbf{x})}\right)-\left(\sum_{\mathbf{x}} H(\mathbf{x}) e^{-\beta H(\mathbf{x})}\right)\left(-\sum_{\mathbf{x}} H(\mathbf{x}) e^{-\beta H(\mathbf{x})}\right)}{\left(\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}\right)^{2}} \\ &=\frac{\sum_{\mathbf{x}} H^{2}(\mathbf{x}) e^{-\beta H(\mathbf{x})}}{\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}}-\left(\frac{\sum_{\mathbf{x}} H(\mathbf{x}) e^{-\beta H(\mathbf{x})}}{\sum_{\mathbf{x}} e^{-\beta H(\mathbf{x})}}\right)^{2} \\ &=\left\langle H^{2}(\mathbf{x})\right\rangle-\langle H(\mathbf{x})\rangle^{2}. \end{aligned} \]
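
    The algebra can be spot-checked numerically as well: in the sketch below (same hypothetical toy spectrum as before, purely illustrative), a centered second difference of ln Z with respect to β reproduces the variance computed directly from the Boltzmann probabilities.

        import numpy as np

        # Same hypothetical toy spectrum as before (arbitrary values, k_B = 1).
        levels = np.array([0.0, 1.0, 2.5, 4.0])
        beta, h = 0.7, 1e-4

        def lnZ(b):
            """ln of the partition function for the toy spectrum."""
            return np.log(np.sum(np.exp(-b * levels)))

        # Direct variance <H^2> - <H>^2 from the Boltzmann probabilities.
        p = np.exp(-beta * levels)
        p /= p.sum()
        var_direct = np.sum(p * levels ** 2) - np.sum(p * levels) ** 2

        # Second derivative of ln Z via a centered second difference.
        var_from_lnZ = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h ** 2

        print(var_direct, var_from_lnZ)              # the two agree closely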

    For our purposes, this result is better than we could ever possibly have hoped. It tells us that

    \[ \Delta E^{2}=\frac{\partial^{2} \ln Z}{\partial \beta^{2}}=-\frac{\partial E}{\partial \beta}=-\frac{\partial E}{\partial T} \frac{\partial T}{\partial \beta}.\]

    To simplify the rightmost expression, note that

    \[ \frac{\partial E}{\partial T}=C_{V} \quad \text { and } \quad \frac{\partial \beta}{\partial T}=\frac{\partial\left(1 / k_{B} T\right)}{\partial T}=-\frac{1}{k_{B} T^{2}}, \quad \text { so that } \quad \frac{\partial T}{\partial \beta}=-k_{B} T^{2},\]

    whence

    \[ \Delta E^{2}=k_{B} T^{2} C_{V}\]

    or

    \[ \Delta E=T \sqrt{k_{B} C_{V}}.\]

    This result is called a “fluctuation-susceptibility theorem”.
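
    The theorem can be tested on a toy model. The sketch below (Python with NumPy) uses a hypothetical two-level system, not one treated in this chapter, in units with \(k_B = 1\): it computes \( \Delta E^{2}\) directly from the Boltzmann probabilities and compares it with \(k_B T^2 C_V\), where \(C_V\) is obtained as a finite-difference derivative ∂E/∂T.

        import numpy as np

        # Hypothetical two-level system with energies 0 and eps (k_B = 1 units).
        eps, T = 1.0, 0.8
        levels = np.array([0.0, eps])

        def E_mean(temp):
            """Canonical average energy at temperature temp."""
            p = np.exp(-levels / temp)
            p /= p.sum()
            return np.sum(p * levels)

        def E_var(temp):
            """Canonical energy variance <H^2> - <H>^2 at temperature temp."""
            p = np.exp(-levels / temp)
            p /= p.sum()
            return np.sum(p * levels ** 2) - np.sum(p * levels) ** 2

        # Heat capacity C_V = dE/dT via a centered finite difference.
        h = 1e-5
        C_V = (E_mean(T + h) - E_mean(T - h)) / (2 * h)

        print(E_var(T), T ** 2 * C_V)                # Delta E^2  versus  k_B T^2 C_V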

    Analogy: “susceptibility” means “herd instinct”. A herd of goats is highly susceptible to external influence, because a small influence (such as a waving handkerchief) will cause the whole herd to stampede. On the other hand a herd of cows is insusceptible to external influence. . . indeed, cowherds often talk about how hard it is to get the herd into motion. A politician is susceptible if he or she is readily swayed by the winds of public opinion. The opposite of “susceptible” is “steadfast” or “stalwart”. If a herd (or politician) is highly susceptible, you expect to see large fluctuations. A herd of goats runs all over its pasture, whereas a herd of cows stays pretty much in the same place.

    How does ∆E behave for large systems, that is, “in the thermodynamic limit”? Because \(C_V\) is extensive (it grows in proportion to N), ∆E goes up like \(\sqrt{N}\) as the thermodynamic limit is approached. But of course, we expect things to go to infinity in the thermodynamic limit! There will be infinite number, volume, energy, entropy, free energy, etc. The question is what happens to the relative dispersion in energy, ∆E/E. Because E grows like N while ∆E grows only like \(\sqrt{N}\), this quantity falls off like \(1/\sqrt{N}\) and goes to zero in the thermodynamic limit.
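
    A toy calculation makes the scaling vivid. For N independent copies of a hypothetical two-level system (an assumption made only for this illustration; for independent subsystems the energies and the variances simply add), the total energy grows like N, the dispersion like \(\sqrt{N}\), and ∆E/E falls off like \(1/\sqrt{N}\):

        import numpy as np

        # N independent copies of a hypothetical two-level system (k_B = 1, eps = 1).
        eps, T = 1.0, 0.8
        levels = np.array([0.0, eps])
        p = np.exp(-levels / T)
        p /= p.sum()

        e1 = np.sum(p * levels)                      # mean energy of one subsystem
        var1 = np.sum(p * levels ** 2) - e1 ** 2     # energy variance of one subsystem

        for N in [100, 10_000, 1_000_000]:
            E = N * e1                               # total energy grows like N
            dE = np.sqrt(N * var1)                   # dispersion grows like sqrt(N)
            print(N, dE / E)                         # relative dispersion falls like 1/sqrt(N)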

    This resolves the question raised at the end of the previous section. A system in a canonical ensemble is allowed to have any energy from ground state to infinity. But most of the systems will not make use of that option: they will in fact have energies falling within a very narrow band about the mean energy. For larger and larger systems, this energy band becomes more and more narrow. This is why the canonical entropy is the same as the microcanonical entropy.


    This page titled 4.4: Energy Dispersion in the Canonical Ensemble is shared under a CC BY-SA license and was authored, remixed, and/or curated by Daniel F. Styer.
