Physics LibreTexts

7.6: Model Examples

The Problem

    We know how to compute things like average energy per particle once we have the function for the occupation number and the density of states function, so what we will examine here is how to find these two functions. The occupation number formula requires that we find the \(B\) in the denominator for the specific physical model (or equivalently, for fermions, the fermi energy). To get that, we need the density of states function, as we will see. Getting the density of states function requires that we know how the energy changes as the system moves through its states, as well as the degeneracy of the states. We'll take a moment to look at this sometimes-daunting task of computing the density of states before moving on to our examples.

    Deriving the Density of States Function

    We already know something about the prescription for determining \(D\left(E\right)\), crudely expressed in Equation 7.4.10 – divide the degeneracy (expressed as a function of energy) by the rate at which the energy is changing with respect to the state change. Typically we have states expressed in terms of a single integer \(n\), which (according to our prescription of approximating sums with integrals) becomes a continuous variable. In Section 7.4, we saw that (for no degeneracy) this density is just the inverse of the widths of the rectangles, but with \(n\) now a continuous variable, these "rectangles" become infinitesimally-small, and their relative widths change continuously from one to the next. Assuming we do express the energy as a function of a single variable \(n\), the denominator of \(D\left(E\right)\) that is the rate at which the energy changes with level is simply the derivative of \(E\) with respect to \(n\). The degeneracy of states for a given value of \(E\) still needs to be determined, but the crude description of Equation 7.4.10 has now become at least a little less crude:

    \[D\left(E\right) = \dfrac{deg\left(E\right)}{\dfrac{dE}{dn}} \]

    Digression: Momentum Space

    Because of the direction we have taken to discuss density of states, it makes sense to define them as above. The reader should be aware, however, that use of the variable \(n\) in this calculation is not standard in the literature. The index \(n\) marks the energy level of the state, but this can be marked by other variables, and what is typically used is wave number, \(k\). For example, the energy spectrum for a particle in a one-dimensional box can be written as:

    \[E_n=\dfrac{n^2\pi^2\hbar^2}{2mL^2} \;\;\;\;\;\; or \;\;\;\;\;\; E\left(k\right)=\dfrac{\hbar^2 k^2}{2m},\;\;\; k \equiv \dfrac{n\pi}{L} \nonumber\]

    One advantage to this is cosmetic – it doesn't look as funny to take a derivative of \(E\) with respect to \(k\) as it does with respect to \(n\), which we think of as taking on only integer values. Indeed, if we are taking a discrete spectrum and treating it as if it were continuous (which is what we do when we approximate sums with integrals), then changing from a variable we reserve for discrete counting to one that is genuinely continuous makes sense.

    Another advantage is that it helps us use our intuition, particularly when we deal with 3 dimensions (as we will below). In this case we are "counting" states by measuring a volume in a three-dimensional space, and when we use \(k\) rather than \(n\), the idea of a volume in momentum space is a bit more intuitive than a volume in "n-space."

    One-Dimensional Harmonic Oscillator with Spin

    We will now extend the model we examined previously in two ways: First, we'll increase the number of particles to a large number, requiring us to use the formulas for the occupation numbers for the various distributions, rather than counting them by hand as we did with four particles. And second, we will introduce a source of degeneracy – the particles are now allowed to have spin.

    We begin with the density of states. As this is a harmonic oscillator, the energy levels are evenly-spaced, so the rate at which the energy changes with respect to changes in the state is easy:

    \[ \dfrac{dE}{dn} = \dfrac{d}{dn}\left(n\hbar\omega_c\right) = \hbar\omega_c \]

    So all we need is the degeneracy as a function of energy. This is a one-dimensional well, so there is no degeneracy due to spatial symmetries (so no energy-dependent degeneracy), but there is a degeneracy due to spin. For a particle with spin of \(s\), there are \(2s+1\) distinct \(m_s\) states. Spin-0 has one state, spin-\(\frac{1}{2}\) has two states (spin up and spin down), spin-1 has three states (\(m_s = 0, \;\pm 1\)), and so on. So our density of states function is complete:

    \[ D(E) = \dfrac{2s+1}{\hbar\omega_c} \]

    Next we set out to determine \(B\) for the various distributions. We do this by first constructing the integral that computes the total number of particles:

    \[N=\int\limits_0^\infty \mathcal N\left(E\right)\; D\left(E\right)\;dE \]

    Now we choose the type of distribution we have and plug in the proper functional form for \(\mathcal N\left(E\right)\). Here is each of the three cases, in turn:

    Distinguishable (Boltzmann Distribution)

    The quantity \(B\) depends upon particle number and temperature, but not the particular energy level of a particle in the collection, so it plays no role in the integral:

    \[ N=\int\limits_0^\infty \left[\dfrac{1}{B\;e^{E/k_BT}}\right]\left[\dfrac{2s+1}{\hbar\omega_c}\right]dE \;\;\; \Rightarrow \;\;\; B=\dfrac{2s+1}{N\hbar\omega_c} \int\limits_0^\infty e^{-E/k_BT}dE =\left(\dfrac{2s+1}{N\hbar\omega_c}\right) k_BT\]

    For brevity (as this same quantity arises in the other distributions), we define:

    \[ \mathcal E \equiv \dfrac{N\hbar\omega_c}{2s+1} \]

    Putting it all together, we have for the occupation number:

    \[ \mathcal N\left(E\right) = \dfrac{1}{B}\; e^{-E/k_BT} = \dfrac{\mathcal E}{k_BT} \;e^{-E/k_BT} \]

    This, together with the density of states given in Equation 7.6.3, allows us to compute (for example) the average energy per particle for this physical model. It turns out that the \(2s+1\) in the density of states cancels with the same factor in \(B\), so it plays no role in the average energy when the particles are distinguishable, giving us the same result that we found already in Equation 7.4.15.
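    We can confirm this result numerically. The sketch below (in arbitrary units, with \(\hbar\omega_c\), \(k_BT\), \(N\), and \(s\) set to illustrative values that are not from the text) integrates the occupation number against the density of states and recovers both the particle number \(N\) and the average energy per particle \(k_BT\):

```python
import math

# Illustrative parameters (assumed), in units where hbar*omega_c = 1 and k_B = 1
hw, kT, N, s = 1.0, 5.0, 1000.0, 0.5

calE = N * hw / (2 * s + 1)      # the script-E defined in the text
B = kT / calE                    # the Boltzmann result derived above

def occ(E):
    """Occupation number N(E) = (1/B) e^(-E/k_B T)."""
    return math.exp(-E / kT) / B

D = (2 * s + 1) / hw             # density of states (constant for the oscillator)

# midpoint-rule integration over a range many k_B T wide
dE, Emax = 0.01, 200.0
mids = [(i + 0.5) * dE for i in range(int(Emax / dE))]
Ntot = sum(occ(E) * D for E in mids) * dE              # should recover N
Eavg = sum(E * occ(E) * D for E in mids) * dE / Ntot   # should be ~ k_B T

print(Ntot, Eavg)
```

    The \(2s+1\) factors cancel in the product \(\mathcal N\left(E\right)D\left(E\right)\), which is why the average energy comes out independent of spin here.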

    Indistinguishable Bosons (Bose-Einstein Distribution)

    The only difference between this case and the one above is the integral we have to perform:

    \[ N=\int\limits_0^\infty \left[\dfrac{1}{B\;e^{E/k_BT}-1}\right]\left[\dfrac{2s+1}{\hbar\omega_c}\right]dE \;\;\; \Rightarrow \;\;\; \mathcal E = \int\limits_0^\infty \left[\dfrac{1}{B\;e^{E/k_BT}-1}\right]dE\]

    Only a little trickery is needed to solve this integral, if you aren't able to find it in a table of integrals. Define \(u\):

    \[ u\equiv Be^{E/k_BT} \;\;\; \Rightarrow \;\;\; du = \frac{1}{k_BT}\;u\;dE\]

    Plugging this in and changing the limits for the new variable gives us:

    \[ \mathcal E = k_BT\int\limits_B^\infty \dfrac{du}{u\left(u-1\right)} = k_BT\left[\ln\left(\dfrac{u-1}{u}\right)\right]_B^\infty = -k_BT\;\ln\left(1-\frac{1}{B}\right) \]

    Finally, we have \(B\):

    \[ B=\dfrac{1}{1-e^{-\mathcal E/k_BT}}\]

    Once again, this can be plugged back into the formula for the occupation number, giving all the pieces necessary to compute average energy per particle, etc.
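    As a quick numerical check (parameters in arbitrary units, chosen only for illustration, with a temperature high enough that \(B\) sits comfortably above 1), plugging this \(B\) back into the particle-number integral should recover \(N\):

```python
import math

# Illustrative spin-0 boson parameters (assumed), in units where hbar*omega_c = k_B = 1
hw, kT, N, s = 1.0, 2000.0, 1000.0, 0.0

calE = N * hw / (2 * s + 1)
B = 1.0 / (1.0 - math.exp(-calE / kT))   # the Bose-Einstein result derived above

D = (2 * s + 1) / hw                     # density of states for the oscillator
dE, Emax = 0.5, 80000.0                  # integrate out to 40 k_B T

# midpoint-rule evaluation of N = integral of N(E) D(E) dE
Ntot = sum(D / (B * math.exp((i + 0.5) * dE / kT) - 1.0)
           for i in range(int(Emax / dE))) * dE

print(B, Ntot)   # Ntot should land close to N = 1000
```

    Note that \(B>1\) is essential here: as \(B\to 1\) the integrand blows up like \(1/E\) near \(E=0\), which is the mathematical seed of Bose-Einstein condensation.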

    Indistinguishable Fermions (Fermi-Dirac Distribution)

    This calculation follows the previous one very closely, with only a few sign differences, so there is no need to detail the math here. The result is:

    \[ B=\dfrac{1}{e^{\mathcal E/k_BT}-1}\]

    We can compare this result with our definition of the fermi energy given in Equation 7.5.9 (for spin-\(\frac{1}{2}\) particles):

    \[ B=\dfrac{1}{e^{\mathcal E/k_BT}-1}=e^{-E_f/k_BT} \;\;\; \Rightarrow \;\;\; e^{\mathcal E/k_BT}-1 = e^{E_f/k_BT}\]

    For low temperatures (where the fermi energy is meaningful), the exponential overwhelms the constant 1, and we see that \(E_f \approx \mathcal E\). This actually makes sense, when we recall the definition of \(\mathcal E\) from Equation 7.6.6. The fermi energy at \(T=0\) is just the top energy level, with two spin-\(\frac{1}{2}\) fermions in each level leading up to it, and this is the energy \(\mathcal E\) for \(N\) particles with spin \(s=\frac{1}{2}\).
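    We can make this low-temperature claim concrete numerically. Solving the relation above for the fermi energy gives \(E_f = k_BT\ln\left(e^{\mathcal E/k_BT}-1\right)\); the sketch below (with \(\mathcal E\) set to 1 in arbitrary units, an assumed value) shows \(E_f\to\mathcal E\) as the temperature drops:

```python
import math

calE = 1.0                        # arbitrary energy unit (illustrative)

def fermi_energy(kT):
    # From e^(calE/kT) - 1 = e^(Ef/kT), as derived in the text
    return kT * math.log(math.exp(calE / kT) - 1.0)

for kT in (0.5, 0.1, 0.02):
    print(kT, fermi_energy(kT))   # approaches calE = 1 as kT -> 0
```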

    Finally, it's always a good idea to check that the classical limit works out correctly. This limit is reached when the temperature gets very large: at high temperatures the particles occupy many more of the higher-energy states, and are far enough separated quantum-mechanically that quantum statistics don't play an important role. In this limit the exponents in Equation 7.6.11 and Equation 7.6.12 become very small, allowing us to approximate the exponential function with the first two terms of its power series expansion. Simplifying the \(B\)'s for the two quantum distributions with this approximation turns them into the \(B\) for the Boltzmann distribution, confirming convergence to the classical limit:

    \[e^x = 1 + \frac{1}{1!}x + \frac{1}{2!}x^2 + \dots \approx 1+x \;\;\; \Rightarrow \;\;\; \left\{ \begin{array}{l} e^{-\mathcal E/k_BT} \approx 1-\dfrac{\mathcal E}{k_BT} \;\;\; \Rightarrow \;\;\; B_{BE} \approx \dfrac{k_BT}{\mathcal E} \\ e^{\mathcal E/k_BT} \approx 1+\dfrac{\mathcal E}{k_BT} \;\;\; \Rightarrow \;\;\; B_{FD} \approx \dfrac{k_BT}{\mathcal E}\end{array} \right. \]
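    The same convergence can be seen numerically (again with \(\mathcal E=1\) in arbitrary units, an assumed value): as \(k_BT\) grows, both quantum \(B\)'s close in on the Boltzmann value \(k_BT/\mathcal E\), one from above and one from below:

```python
import math

calE = 1.0   # arbitrary energy unit (illustrative)

for kT in (1.0, 10.0, 100.0, 1000.0):
    B_boltz = kT / calE
    B_BE = 1.0 / (1.0 - math.exp(-calE / kT))   # Bose-Einstein
    B_FD = 1.0 / (math.exp(calE / kT) - 1.0)    # Fermi-Dirac
    print(kT, B_BE / B_boltz, B_FD / B_boltz)   # both ratios approach 1
```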

    Three-Dimensional Quantum Gas

    We can't remain in one dimension forever, so now we take on a system of non-interacting ("gas") particles confined within a well-defined volume. We know that this will pose some problems, because moving to three dimensions introduces degeneracies, which play a role in the density of states.

    If we assume that this system is confined to a space that is cubical (we will relax this constraint later), then since the particles are free to move around in the space provided, but cannot escape the walls, this is essentially a symmetric 3-dimensional infinite square well. We know the energy spectrum for this already – Equation 4.2.12 with all three sides of equal length gives:

    \[E_{n_xn_yn_z} = \left(n_x^2+n_y^2+n_z^2\right)\dfrac{\pi^2\hbar^2}{2mL^2}\;,\;\;\;\;\; n_x,\;n_y,\;n_z=1,\;2,\;\dots \]

    While these energy levels begin with all the \(n\)'s equal to 1, given the enormous number of particles and occupied energy levels, there is no harm in using the same spectrum with the lowest state at \(n=0\). We are once again shifting the \(n\) variables from integers to continuous parameters. In three-dimensional "\(n\)-space," there are entire surfaces of equal energy, which are described by:

    \[ n_x^2+n_y^2+n_z^2=constant\]

    But this is the equation for a spherical surface, so if we call the radius of the sphere simply "\(n\)," the energy levels can be expressed in terms of that single parameter:

    \[E\left(n\right) = \dfrac{\pi^2\hbar^2}{2mL^2} \; n^2 \]

    This gives us the denominator of Equation 7.6.1 for the density of states:

    \[\dfrac{dE}{dn} = 2\dfrac{\pi^2\hbar^2}{2mL^2}\;n = 2\sqrt{\dfrac{\pi^2\hbar^2}{2mL^2}}E^{\frac{1}{2}} \]

    Now for the degeneracy. We know that every point on the spherical surface represents a state at the same energy, so the surface area must be the degeneracy. There is one hitch: The values of \(n_x\), \(n_y\), and \(n_z\) are not allowed to be negative, so while the surface is a sphere, only the portion of the sphere where these values are positive counts. Therefore the degeneracy due to spatial symmetry is the surface area of one-eighth of a sphere of radius \(n\), which we then need to write in terms of the energy of states on that spherical surface:

    \[ spatial\;deg\left(E\right) =\frac{1}{8}\left[4\pi n^2\right] =\dfrac{mL^2}{\pi\hbar^2}\;E \]

    These particles in general have spin, so including a factor of \(2s+1\) completes the numerator:

    \[ deg\left(E\right) = spin\;deg\left(E\right) \cdot spatial\;deg\left(E\right) = \left(2s+1\right)\dfrac{mL^2}{\pi\hbar^2}\;E\]

    Putting this together with the denominator, we get at last the density of states.

    \[D\left(E\right) = \dfrac{deg\left(E\right)}{\dfrac{dE}{dn}}=\dfrac{\left(2s+1\right)\dfrac{mL^2}{\pi\hbar^2}\;E}{2\sqrt{\dfrac{\pi^2\hbar^2}{2mL^2}}E^{\frac{1}{2}}}=\dfrac{\left(2s+1\right)m^{\frac{3}{2}}V}{\sqrt{2}\;\pi^2\hbar^3}E^{\frac{1}{2}}\]

    Note that in the final equality, we have written \(L^3\) as the volume of the space, \(V\).
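    The octant-of-a-sphere counting behind this result can be checked directly: the number of positive-integer triples \(\left(n_x, n_y, n_z\right)\) satisfying \(n_x^2+n_y^2+n_z^2 \le n^2\) should approach one-eighth of the sphere's volume, \(\frac{1}{8}\cdot\frac{4}{3}\pi n^3\), as \(n\) grows. A short counting sketch (the cutoff \(n=60\) is an arbitrary choice):

```python
import math

def count_states(n):
    """Count positive-integer triples with nx^2 + ny^2 + nz^2 <= n^2."""
    total = 0
    for nx in range(1, n + 1):
        for ny in range(1, n + 1):
            rem = n * n - nx * nx - ny * ny
            if rem >= 1:
                total += math.isqrt(rem)   # number of allowed nz values
    return total

n = 60
exact = count_states(n)
octant = (1 / 8) * (4 / 3) * math.pi * n**3   # one octant of the sphere's volume
print(exact, octant, exact / octant)          # ratio approaches 1 as n grows
```

    The ratio falls a few percent short of 1 at this modest cutoff because of surface effects at the octant's boundary planes, and it creeps toward 1 as \(n\) increases.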

    With this, one can compute the \(B\) values for the various distributions (or equivalently for fermions, the fermi energy), and with the occupation number function thus derived, one can compute the average energy per particle, as we have done above.

    Conduction Electrons

    Electrons in metals are more-or-less non-interacting, and are confined to a three-dimensional space, so they behave very much like a quantum gas. They are fermions with spin-\(\frac{1}{2}\), so they obey Fermi-Dirac statistics. We can use the quantum gas density of states to write the fermi energy in simple terms by looking at the system at \(T=0\). At this temperature, the occupation number is just a step function – it equals 1 when \(E<E_f\), and 0 when \(E>E_f\) (and \(\frac{1}{2}\) when \(E=E_f\), by definition). Therefore the upper limit of an integral involving this function can be cut off at \(E_f\), and the particle number at \(T=0\) is:

    \[ N=\int\limits_0^\infty \mathcal N_{FD}\left(E\right)\;D\left(E\right)\;dE = \int\limits_0^{E_f}\left[\dfrac{\left(2s+1\right)m^{\frac{3}{2}}V}{\sqrt{2}\;\pi^2\hbar^3}E^{\frac{1}{2}} \right] \;dE=\dfrac{\left(2s+1\right)m^{\frac{3}{2}}V}{\sqrt{2}\;\pi^2\hbar^3}\left[\frac{2}{3}E_f^{\frac{3}{2}}\right] \]

    Solving for the fermi energy, we get:

    \[E_f = \left[\dfrac{9\pi^4}{2\left(2s+1\right)^2}\right]^{\frac{1}{3}}\dfrac{\hbar^2}{m}\left(\dfrac{N}{V}\right)^{\frac{2}{3}} \]

    The ratio \(\frac{N}{V}\) is the particle density within the confined space. For a metal that has one valence electron available per atom, this is the same as the atomic particle density of the metal. Plugging in some numbers for metals typically gives results of a few \(eV\) for the fermi energy. At room temperature (around \(300K\)), the value of \(k_BT\) is about \(\frac{1}{40}eV\), which makes \(k_BT\) roughly a factor of 100 smaller than the fermi energy. This means that the metal would need to be heated enormously to get the electrons to significantly populate the higher states. At 100 times the temperature (where \(k_BT\) is comparable to \(E_F\)), or \(30,000K\), the metal has long since vaporized, so we cannot heat the metal to turn conduction electrons into particles that behave classically.
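    To make these numbers concrete, the sketch below evaluates the fermi energy formula for electrons (\(s=\frac{1}{2}\)) at a particle density typical of copper, about \(8.5\times10^{28}\;m^{-3}\) (one valence electron per atom); the assumed density is the only input not taken from the text:

```python
import math

hbar = 1.0546e-34    # J s
m_e  = 9.109e-31     # kg (electron mass)
k_B  = 1.381e-23     # J/K
eV   = 1.602e-19     # J per eV

s = 0.5
n = 8.5e28           # electrons per m^3, roughly copper's valence-electron density

# fermi energy from the formula derived above
Ef = (9 * math.pi**4 / (2 * (2 * s + 1)**2))**(1 / 3) * hbar**2 / m_e * n**(2 / 3)

kT_room = k_B * 300.0
print(Ef / eV)         # a few eV (~7 eV for copper)
print(kT_room / eV)    # about 1/40 eV
print(Ef / kT_room)    # a couple hundred
```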

    Work Functions of Metals

    We have been aware of the photoelectric effect since way back in Physics 9HC, and know that electrons can be knocked off the surface of a metal by impinging photons of sufficient energy. Now we can see what the work function represents in terms of particles in a finite square well. The electrons at the top of the stack of pairs in the metal's energy levels are at the fermi energy (because, as we said, conductors in solid form are quite "cold" compared to the fermi energy), and to free an electron, the incoming photon must have enough energy to raise one of those top electrons to the top of the well. We will talk later about what the full depth of the well is, but for now we'll call it \(V_o\). The work function is therefore:

    \[ \phi = V_o - E_f \]
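    As a small worked example of what this relation implies for the photoelectric effect: a photon must carry at least \(\phi\) of energy to free an electron, so the longest usable wavelength is \(\lambda_{max}=hc/\phi\). The numbers below are illustrative only – the well depth \(V_o\) and fermi energy are assumed values of the right general size, not measured ones:

```python
hc = 1240.0    # eV*nm, handy photon-energy constant

V_o = 11.7     # eV, assumed total well depth (illustrative)
E_f = 7.0      # eV, assumed fermi energy (illustrative, copper-like)

phi = V_o - E_f       # work function, from the relation above
lam_max = hc / phi    # longest wavelength that can eject an electron

print(phi)            # 4.7 eV
print(lam_max)        # ~264 nm, in the ultraviolet
```

    A work function of a few \(eV\) puts the photoelectric threshold in the ultraviolet, which is why visible light fails to eject electrons from most metals.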