
5.4 Entropy As a Microscopic Quantity

5.4.1 A microscopic view of entropy

To understand why the second law of thermodynamics is always true, we need to see what entropy really means at the microscopic level. An example that is easy to visualize is the free expansion of a monoatomic gas. Figure a/1 shows a box in which all the atoms of the gas are confined on one side. We very quickly remove the barrier between the two sides, a/2, and some time later, the system has reached an equilibrium, a/3. Each snapshot shows both the positions and the momenta of the atoms, which is enough information to allow us in theory to extrapolate the behavior of the system into the future, or the past. However, with a realistic number of atoms, rather than just six, this would be beyond the computational power of any computer.2

Figure a: A gas expands freely, doubling its volume.

 

But suppose we show figure a/2 to a friend without any further information, and ask her what she can say about the system's behavior in the future. She doesn't know how the system was prepared. Perhaps, she thinks, it was just a strange coincidence that all the atoms happened to be in the right half of the box at this particular moment. In any case, she knows that this unusual situation won't last for long. She can predict that after the passage of any significant amount of time, a surprise inspection is likely to show roughly half the atoms on each side. The same is true if you ask her to say what happened in the past. She doesn't know about the barrier, so as far as she's concerned, extrapolation into the past is exactly the same kind of problem as extrapolation into the future. We just have to imagine reversing all the momentum vectors, and then all our reasoning works equally well for backwards extrapolation. She would conclude, then, that the gas in the box underwent an unusual fluctuation, b, and she knows that the fluctuation is very unlikely to exist very far into the future, or to have existed very far into the past.

 

Figure b: An unusual fluctuation in the distribution of the atoms between the two sides of the box. There has been no external manipulation as in figure a/1.

 

What does this have to do with entropy? Well, state a/3 has a greater entropy than state a/2. It would be easy to extract mechanical work from a/2, for instance by letting the gas expand while pressing on a piston rather than simply releasing it suddenly into the void. There is no way to extract mechanical work from state a/3. Roughly speaking, our microscopic description of entropy relates to the number of possible states. There are a lot more states like a/3 than there are states like a/2. Over long enough periods of time --- long enough for equilibration to occur --- the system gets mixed up, and is about equally likely to be in any of its possible states, regardless of what state it was initially in. We define some number that describes an interesting property of the whole system, say the number of atoms in the right half of the box, \(R\). A high-entropy value of \(R\) is one like \(R=3\), which allows many possible states. We are far more likely to encounter \(R=3\) than a low-entropy value like \(R=0\) or \(R=6\).
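If you'd like to check this counting for yourself, a few lines of Python are enough to enumerate all \(2^6=64\) left-right configurations of six atoms and tally them by \(R\):

```python
from itertools import product
from collections import Counter

# Each atom is independently in the left (0) or right (1) half of the box.
counts = Counter(sum(config) for config in product((0, 1), repeat=6))

for R in sorted(counts):
    print(f"R = {R}: {counts[R]} states")
# R = 3 corresponds to 20 of the 64 states, while R = 0 and R = 6
# correspond to only 1 state each, so a snapshot of the equilibrated
# gas is 20 times more likely to show R = 3 than R = 0.
```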

Figure c: Earth orbit is becoming cluttered with space junk, and the pieces can be thought of as the “molecules” comprising an exotic kind of gas. These images show the evolution of a cloud of debris arising from a 2007 Chinese test of an anti-satellite rocket. Panels 1-4 show the cloud five minutes, one hour, one day, and one month after the impact. The entropy seems to have maximized by panel 4.

5.4.2 Phase space

There is a problem with making this description of entropy into a mathematical definition. The problem is that it refers to the number of possible states, but that number is theoretically infinite. To get around the problem, we coarsen our description of the system. For the atoms in figure a, we don't really care exactly where each atom is. We only care whether it is in the right side or the left side. If a particular atom's left-right position is described by a coordinate \(x\), then the set of all possible values of \(x\) is a line segment along the \(x\) axis, containing an infinite number of points. We break this line segment down into two halves, each of width \(\Delta x\), and we consider two different values of \(x\) to be variations on the same state if they both lie in the same half. For our present purposes, we can also ignore completely the \(y\) and \(z\) coordinates, and all three momentum components, \(p_x\), \(p_y\), and \(p_z\).

 

Figure d: The phase space for two atoms in a box.

 

Now let's do a real calculation. Suppose there are only two atoms in the box, with coordinates \(x_1\) and \(x_2\). We can give all the relevant information about the state of the system by specifying one of the cells in the grid shown in figure d. This grid is known as the phase space of the system. The lower right cell, for instance, describes a state in which atom number 1 is in the right side of the box and atom number 2 in the left. Since there are two possible states with \(R=1\) and only one state with \(R=2\), we are twice as likely to observe \(R=1\), and \(R=1\) has higher entropy than \(R=2\).

 

Figure e: The phase space for three atoms in a box.

 

Figure e shows a corresponding calculation for three atoms, which makes the phase space three-dimensional. Here, the \(R=1\) and 2 states are three times more likely than \(R=0\) and 3. Four atoms would require a four-dimensional phase space, which exceeds our ability to visualize. Although our present example doesn't require it, a phase space can describe momentum as well as position, as shown in figure f. In general, a phase space for a monoatomic gas has six dimensions per atom (one for each coordinate and one for each momentum component).

Figure f: A phase space for a single atom in one dimension, taking momentum into account.

5.4.3 Microscopic definitions of entropy and temperature

Two more issues need to be resolved in order to make a microscopic definition of entropy.

First, if we defined entropy as the number of possible states, it would be a multiplicative quantity, not an additive one: if an ice cube in a glass of water has \(M_1\) states available to it, and the number of states available to the water is \(M_2\), then the number of possible states of the whole system is the product \(M_1 M_2\). To get around this problem, we take the natural logarithm of the number of states, which makes the entropy additive because of the property of the logarithm \(\ln (M_1 M_2) = \ln M_1 + \ln M_2\).

The second issue is a more trivial one. The concept of entropy was originally invented as a purely macroscopic quantity, and the macroscopic definition \(\Delta S = Q/T\), which has units of J/K, has a different calibration than would result from defining \(S=\ln M\). The calibration constant we need turns out to be simply the Boltzmann constant, \(k\).

Microscopic definition of entropy: The entropy of a system is \(S = k \ln M\), where \(M\) is the number of available states.3
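As a quick numerical illustration of why the logarithm makes entropy additive, here is a short Python check using made-up state counts \(M_1\) and \(M_2\):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K

M1 = 20                   # hypothetical number of states available to the ice cube
M2 = 35                   # hypothetical number of states available to the water

S_combined = k * math.log(M1 * M2)           # entropy of the whole system
S_sum = k * math.log(M1) + k * math.log(M2)  # sum of the two separate entropies

print(S_combined, S_sum)  # identical, because ln(M1*M2) = ln(M1) + ln(M2)
```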

 

Figure g: Ludwig Boltzmann's tomb, inscribed with his equation for entropy.

 

This also leads to a more fundamental definition of temperature. Two systems are in thermal equilibrium when they have maximized their combined entropy through the exchange of energy. Here the energy possessed by one part of the system, \(E_1\) or \(E_2\), plays the same role as the variable \(R\) in the examples of free expansion above. A maximum of a function occurs when the derivative is zero, so the maximum entropy occurs when

\[\begin{equation*} \frac{d\left(S_1+S_2\right)}{dE_1} = 0 . \end{equation*}\]

We assume the systems are only able to exchange heat energy with each other, \(dE_1=-dE_2\), so

\[\begin{equation*} \frac{dS_1}{dE_1} = \frac{dS_2}{dE_2} , \end{equation*}\]

and since the energy is being exchanged in the form of heat we can make the equations look more familiar if we write \(dQ\) for an amount of heat to be transferred into either system:

\[\begin{equation*} \frac{dS_1}{dQ_1} = \frac{dS_2}{dQ_2} . \end{equation*}\]

In terms of our previous definition of entropy, this is equivalent to \(1/T_1=1/T_2\), which makes perfect sense since the systems are in thermal equilibrium. According to our new approach, entropy has already been defined in a fundamental manner, so we can take this as a definition of temperature:

\[\begin{equation*} \frac{1}{T} = \frac{dS}{dQ} , \end{equation*}\]

where \(dS\) represents the increase in the system's entropy from adding heat \(dQ\) to it.
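To see this maximization happen concretely, here is a small numerical sketch. It assumes entropies of the form \(S_i = \frac{3}{2}n_i k\ln E_i\) (the monoatomic-ideal-gas form derived later in this section) and made-up numbers of atoms, and confirms that the combined entropy peaks exactly where the two derivatives \(dS/dE\), i.e., the two values of \(1/T\), agree:

```python
import numpy as np

k = 1.0                            # work in units where k = 1
n1, n2 = 4.0, 10.0                 # hypothetical numbers of atoms in the two subsystems
E = 1.0                            # total energy to be shared (arbitrary units)

E1 = np.linspace(1e-4, E - 1e-4, 100001)
S_total = 1.5 * n1 * k * np.log(E1) + 1.5 * n2 * k * np.log(E - E1)

i = np.argmax(S_total)             # energy split that maximizes the combined entropy
print("E1 at the maximum:", E1[i])                # about E*n1/(n1 + n2)
print("dS1/dE1:", 1.5 * n1 * k / E1[i])           # these two derivatives, i.e. 1/T1 and 1/T2,
print("dS2/dE2:", 1.5 * n2 * k / (E - E1[i]))     # agree at the maximum
```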

Examples with small numbers of atoms

Let's see how this applies to an ideal, monoatomic gas with a small number of atoms. To start with, consider the phase space available to one atom. Since we assume the atoms in an ideal gas are noninteracting, their positions relative to each other are really irrelevant. We can therefore enumerate the number of states available to each atom just by considering the number of momentum vectors it can have, without considering its possible locations. The relationship between momentum and kinetic energy is \(E=(p_x^2+p_y^2+p_z^2)/2m\), so if, for a fixed value of its energy, we arrange all of an atom's possible momentum vectors with their tails at the origin, their tips all lie on the surface of a sphere in phase space with radius \(|\mathbf{p}|=\sqrt{2mE}\). The number of possible states for that atom is proportional to the sphere's surface area, which in turn is proportional to the square of the sphere's radius, \(|\mathbf{p}|^2=2mE\).

Now consider two atoms. For any given way of sharing the energy between the atoms, \(E=E_1+E_2\), the number of possible combinations of states is proportional to \(E_1E_2\). The result is shown in figure h. The greatest number of combinations occurs when we divide the energy equally, so an equal division gives maximum entropy.

Figure h: A two-atom system has the highest number of available states when the energy is equally divided. Equal energy division is therefore the most likely possibility at any given moment in time.

 

By increasing the number of atoms, we get a graph whose peak is narrower, i. With more than one atom in each system, the total energy is \(E=(p_{x,1}^2+p_{y,1}^2+p_{z,1}^2+p_{x,2}^2+p_{y,2}^2+p_{z,2}^2+...)/2m\). With \(n\) atoms, a total of \(3n\) momentum coordinates are needed in order to specify their state, and such a set of numbers is like a single point in a \(3n\)-dimensional space (which is impossible to visualize). For a given total energy \(E\), the possible states are like the surface of a \(3n\)-dimensional sphere, with a surface area proportional to \(p^{3n-1}\), or \(E^{(3n-1)/2}\). The graph in figure i, for example, was calculated according to the formula \(E_1^{29/2}E_2^{29/2}=E_1^{29/2}(E-E_1)^{29/2}\).

Figure i: When two systems of 10 atoms each interact, the graph of the number of possible states is narrower than with only one atom in each system.

 

Since graph i is narrower than graph h, the fluctuations in energy sharing are smaller. If we inspect the system at a random moment in time, the energy sharing is very unlikely to be more lopsided than a 40-60 split. Now suppose that, instead of 10 atoms interacting with 10 atoms, we had \(10^{23}\) atoms interacting with \(10^{23}\) atoms. The graph would be extremely narrow, and it would be a statistical certainty that the energy sharing would be nearly perfectly equal. This is why we never observe a cold glass of water to change itself into an ice cube sitting in some warm water!
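The narrowing of the peak is easy to check numerically. The sketch below computes the relative number of states \(E_1^{(3n-1)/2}(E-E_1)^{(3n-1)/2}\) for \(n\) atoms on each side and measures the typical size of the fluctuation in the energy split:

```python
import numpy as np

E = 1.0                                    # total energy, arbitrary units
E1 = np.linspace(1e-6, E - 1e-6, 200001)   # possible energies of the first system

for n in (1, 10, 100):
    expo = (3 * n - 1) / 2                 # sphere surface area goes like E^((3n-1)/2)
    w = E1**expo * (E - E1)**expo          # relative number of states for each split
    p = w / w.sum()                        # normalize to a probability distribution
    mean = (p * E1).sum()
    std = np.sqrt((p * (E1 - mean)**2).sum())
    print(f"{n:>3} atoms per side: typical fluctuation in E1/E is about {std:.3f}")
# The fluctuations shrink as the systems get bigger; for 10^23 atoms per side
# they would be utterly negligible.
```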

By the way, note that although we've redefined temperature, these examples show that things are coming out consistent with the old definition, since we saw that the old definition of temperature could be described in terms of the average energy per atom, and here we're finding that equilibration results in each subset of the atoms having an equal share of the energy.

Entropy of a monoatomic ideal gas

Let's calculate the entropy of a monoatomic ideal gas of \(n\) atoms. This is an important example because it allows us to show that our present microscopic treatment of thermodynamics is consistent with our previous macroscopic approach, in which temperature was defined in terms of an ideal gas thermometer.

The number of possible locations for each atom is \(V/\Delta x^3\), where \(\Delta x\) is the size of the space cells in phase space. The number of possible combinations of locations for the atoms is therefore \((V/\Delta x^3)^n\).

The possible momenta cover the surface of a \(3n\)-dimensional sphere, whose radius is \(\sqrt{2mE}\), and whose surface area is therefore proportional to \(E^{(3n-1)/2}\). In terms of phase-space cells, this area corresponds to \(E^{(3n-1)/2} / \Delta p^{3n-1}\) possible combinations of momenta, multiplied by some constant of proportionality which depends on \(m\), the atomic mass, and \(n\), the number of atoms. To avoid having to calculate this constant of proportionality, we limit ourselves to calculating the part of the entropy that does not depend on \(n\), so the resulting formula will not be useful for comparing entropies of ideal gas samples with different numbers of atoms.

The final result for the number of available states is

\[\begin{equation*} M = \left(\frac{V}{\Delta x^3}\right)^n\:\frac{E^{(3n-1)/2}}{\Delta p^{3n-1}}\:\times\:(\text{function of $n$}) , \end{equation*}\]

so the entropy is

\[\begin{equation*} S = nk \ln V + \frac{3}{2}nk\ln E + \text{(function of $\Delta x$, $\Delta p$, and $n$)} , \end{equation*}\]

where the distinction between \(n\) and \(n-1\) has been ignored. Using \(PV=nkT\) and \(E=(3/2)nkT\), we can also rewrite this as

\[\begin{equation*} S = \frac{5}{2} nk \ln T - nk \ln P + ... , \text{[entropy of a monoatomic ideal gas]} \end{equation*}\]

where “\(...\)” indicates terms that may depend on \(\Delta x\), \(\Delta p\), \(m\), and \(n\), but that have no effect on comparisons of gas samples with the same number of atoms.
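Since these are just two ways of writing the same entropy (up to the omitted constant terms), any entropy difference between two states of a fixed sample has to come out the same either way. Here is a quick numerical check with made-up values:

```python
import numpy as np

k = 1.380649e-23          # Boltzmann constant, J/K
n = 1e22                  # number of atoms (the same in both states, as required)

def state(V, E):
    T = E / (1.5 * n * k)             # E = (3/2) n k T
    P = n * k * T / V                 # ideal gas law, P V = n k T
    return T, P

V1, E1 = 1e-3, 150.0                  # hypothetical initial volume (m^3) and energy (J)
V2, E2 = 4e-3, 450.0                  # hypothetical final state
T1, P1 = state(V1, E1)
T2, P2 = state(V2, E2)

dS_VE = n*k*np.log(V2/V1) + 1.5*n*k*np.log(E2/E1)   # from S = nk ln V + (3/2) nk ln E + ...
dS_TP = 2.5*n*k*np.log(T2/T1) - n*k*np.log(P2/P1)   # from S = (5/2) nk ln T - nk ln P + ...
print(dS_VE, dS_TP)                                  # identical (about 0.42 J/K here)
```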

self-check:

Why does it make sense that the temperature term has a positive sign in the above example, while the pressure term is negative? Why does it make sense that the whole thing is proportional to \(n\)?

(answer in the back of the PDF version of the book)

To show consistency with the macroscopic approach to thermodynamics, we need to show that these results are consistent with the behavior of an ideal-gas thermometer. Using the new definition \(1/T=dS/dQ\), we have \(1/T=dS/dE\), since transferring an amount of heat \(dQ\) into the gas increases its energy by a corresponding amount. Evaluating the derivative, we find \(1/T=(3/2)nk/E\), or \(E=(3/2)nkT\), which is the correct relation for a monoatomic ideal gas.
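The same check can be done numerically by approximating \(dS/dE\) with a small finite difference; the numbers below are invented just for the test:

```python
import numpy as np

k = 1.380649e-23          # Boltzmann constant, J/K
n = 1e22                  # hypothetical number of atoms
V = 1e-3                  # volume in m^3 (drops out of dS/dE anyway)

def S(E):
    # the E- and V-dependent part of the entropy derived above
    return n * k * np.log(V) + 1.5 * n * k * np.log(E)

E = 200.0                 # thermal energy in joules
dE = 1e-6 * E             # a small amount of heat added at constant volume
T_from_dS = dE / (S(E + dE) - S(E))   # 1/T = dS/dE, so T = dE/dS
T_from_E = E / (1.5 * n * k)          # E = (3/2) n k T rearranged

print(T_from_dS, T_from_E)            # both about 966 K
```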

 

Example 20: A mixture of molecules

\(\triangleright\) Suppose we have a mixture of two different monoatomic gases, say helium and argon. How would we find the entropy of such a mixture (say, in terms of \(V\) and \(E\))? How would the energy be shared between the two types of molecules, i.e., would a more massive argon atom have more energy on the average than a less massive helium atom, the same, or less?

\(\triangleright\) Since entropy is additive, we simply need to add the entropies of the two types of atom. However, the expression derived above for the entropy omitted the dependence on the mass \(m\) of the atom, which is different for the two constituents of the gas, so we need to go back and figure out how to put that \(m\)-dependence back in. The only place where we threw away \(m\)'s was when we identified the radius of the sphere in momentum space with \(\sqrt{2mE}\), but then threw away the constant factor of \(m\). In other words, the final result can be generalized merely by replacing \(E\) everywhere with the product \(mE\). Since the log of a product is the sum of the logs, the dependence of the final result on \(m\) and \(E\) can be broken apart into two different terms, and we find

\[\begin{equation*} S=nk \ln V +\frac{3}{2}nk\ln m+\frac{3}{2}nk\ln E+... \end{equation*}\]

The total entropy of the mixture can then be written as

\[\begin{multline*} S =n_1k\ln V +n_2k \ln V +\frac{3}{2}n_1k\ln m_1+\frac{3}{2}n_2k\ln m_2 \\ +\frac{3}{2}n_1k\ln E_1+\frac{3}{2}n_2k\ln E_2+... \end{multline*}\]

Now what about the energy sharing? If the total energy is \(E=E_1+E_2\), then the most overwhelmingly probable sharing of energy will be the one that maximizes the entropy. Notice that the dependence of the entropy on the masses \(m_1\) and \(m_2\) occurs in terms that are entirely separate from the energy terms. If we want to maximize \(S\) with respect to \(E_1\) (with \(E_2=E-E_1\) by conservation of energy), then we differentiate \(S\) with respect to \(E_1\) and set it equal to zero. The terms that contain the masses don't have any dependence on \(E_1\), so their derivatives are zero, and we find that the molecular masses can have no effect on the energy sharing. Setting the derivative equal to zero, we have

\[\begin{align*} 0 &= \frac{\partial}{\partial E_1} \left(n_1k\ln V +n_2k \ln V +\frac{3}{2}n_1k\ln m_1+\frac{3}{2}n_2k\ln m_2\right. \\ & +\left.\frac{3}{2}n_1k\ln E_1+\frac{3}{2}n_2k\ln (E-E_1)+...\right) \\ &= \frac{3}{2}k \left( \frac{n_1}{E_1} - \frac{n_2}{E-E_1} \right) \\ 0 &= \frac{n_1}{E_1} - \frac{n_2}{E-E_1} \\ \frac{n_1}{E_1} &= \frac{n_2}{E_2} . \end{align*}\]

In other words, each gas gets a share of the energy in proportion to the number of its atoms, and therefore every atom gets, on average, the same amount of energy, regardless of its mass. The result for the average energy per atom is exactly the same as for an unmixed gas, \(\bar{K}=(3/2)kT\).
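If you have a computer algebra system handy, the maximization in this example can be reproduced in a few lines. The sketch below uses the Python library sympy:

```python
import sympy as sp

n1, n2, m1, m2, V, E, E1, k = sp.symbols('n1 n2 m1 m2 V E E1 k', positive=True)

# Total entropy of the mixture, with E2 = E - E1 imposed by conservation of energy.
S = (n1*k*sp.log(V) + n2*k*sp.log(V)
     + sp.Rational(3, 2)*n1*k*sp.log(m1) + sp.Rational(3, 2)*n2*k*sp.log(m2)
     + sp.Rational(3, 2)*n1*k*sp.log(E1) + sp.Rational(3, 2)*n2*k*sp.log(E - E1))

# Maximize over the energy split: set dS/dE1 = 0 and solve for E1.
E1_best = sp.solve(sp.Eq(sp.diff(S, E1), 0), E1)[0]
print(sp.simplify(E1_best))   # E*n1/(n1 + n2): energy shared in proportion to the
                              # number of atoms, with no dependence on m1 or m2
```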

Equipartition

Example 20 is a special case of a more general statement called the equipartition theorem. Suppose we have only one argon atom, named Alice, and one helium atom, named Harry. Their total kinetic energy is \(E=p_x^2/2m+p_y^2/2m+p_z^2/2m+{p'}_x^2/2m'+{p'}_y^2/2m'+{p'}_z^2/2m'\), where the primes indicate Harry. We have six terms that all look alike. The only difference among them is that the constant factors attached to the squares of the momenta have different values, but we've just proved that those differences don't matter. In other words, if we have any system at all whose energy is of the form \(E=(...)p_1^2+(...)p_2^2+...\), with any number of terms, then each term holds, on average, the same amount of energy, \(\frac{1}{2}kT\). We say that the system consisting of Alice and Harry has six degrees of freedom.

It doesn't even matter whether the things being squared are momenta: if you look back over the logical steps that went into the argument, you'll see that none of them depended on that. In a solid, for example, the atoms aren't free to wander around, but they can vibrate from side to side. If an atom moves away from its equilibrium position at \(x=0\) to some other value of \(x\), then its electrical energy is \((1/2)\kappa x^2\), where \(\kappa\) is the spring constant (written as the Greek letter kappa to distinguish it from the Boltzmann constant \(k\)). We can conclude that each atom in the solid, on average, has \(\frac{1}{2}kT\) of electrical energy due to its displacement along the \(x\) axis, and equal amounts for \(y\) and \(z\). This is known as equipartition, meaning equal partitioning, or equal sharing. The equipartition theorem says that if the expression for the energy looks like a sum of squared variables, then each degree of freedom has an average energy of \(\frac{1}{2}kT\). Thus, very generally, we can interpret the temperature, multiplied by \(k/2\), as the average energy per degree of freedom.
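The theorem is easy to test by simulation. In thermal equilibrium, a coordinate \(q\) whose energy is \(cq^2\) is distributed according to the Boltzmann factor \(e^{-cq^2/kT}\), which is a Gaussian. The sketch below draws samples for several wildly different values of the constant \(c\) (standing in for \(1/2m\), \(1/2m'\), or \(\kappa/2\)) and finds the same average energy every time:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                        # work in units where kT = 1
N = 1_000_000                   # number of samples per degree of freedom

# Three quadratic degrees of freedom E = c*q^2 with very different constants c.
for c in (120.0, 12.0, 0.3):
    # The Boltzmann distribution exp(-c*q^2/kT) is a Gaussian with variance kT/(2c).
    q = rng.normal(0.0, np.sqrt(kT / (2 * c)), N)
    avg = np.mean(c * q**2)
    print(f"c = {c:6.1f}: average energy = {avg:.3f} kT   (equipartition predicts 0.500)")
```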

An unexpected glimpse of the microcosm

You may have the feeling at this point that of course Boltzmann was right about the literal existence of atoms, but only very sophisticated experiments could vindicate him definitively. After all, the microscopic and macroscopic definitions of entropy are equivalent, so it might seem as though there was no real advantage to the microscopic approach. Surprisingly, very simple experiments are capable of revealing a picture of the microscopic world, and there is no possible macroscopic explanation for their results.

Figure j: An experiment for determining the shapes of molecules.

 

In 1819, before Boltzmann was born, Clément and Desormes did an experiment like the one shown in figure j. The gas in the flask is pressurized using the syringe. This heats it slightly, so it is then allowed to cool back down to room temperature. Its pressure is measured using the manometer. The stopper on the flask is popped and then immediately reinserted. Its pressure is now equalized with that in the room, and the gas's expansion has cooled it a little, because it did mechanical work on its way out of the flask, causing it to lose some of its internal energy \(E\). The expansion is carried out quickly enough so that there is not enough time for any significant amount of heat to flow in through the walls of the flask before the stopper is reinserted. The gas is now allowed to come back up to room temperature (which takes a much longer time), and as a result regains a fraction \(b\) of its original overpressure. During this constant-volume reheating, we have \(PV=nkT\), so the amount of pressure regained is a direct indication of how much the gas cooled down when it lost an amount of energy \(\Delta E\).

Figure k: The differing shapes of a helium atom (1), a nitrogen molecule (2), and a difluoroethane molecule (3) have surprising macroscopic effects.

 

If the gas is monoatomic, then we know what to expect for this relationship between energy and temperature: \(\Delta E=(3/2)nk\Delta T\), where the factor of 3 came ultimately from the fact that the gas was in a three-dimensional space, k/1. Moving in this space, each molecule can have momentum in the x, y, and z directions. It has three degrees of freedom. What if the gas is not monoatomic? Air, for example, is made of diatomic molecules, k/2. There is a subtle difference between the two cases. An individual atom of a monoatomic gas is a perfect sphere, so it is exactly the same no matter how it is oriented. Because of this perfect symmetry, there is no way to tell whether it is spinning or not, and in fact we find that it can't rotate. The diatomic gas, on the other hand, can rotate end over end about the x or y axis, but cannot rotate about the z axis, which is its axis of symmetry. It has a total of five degrees of freedom. A polyatomic molecule with a more complicated, asymmetric shape, k/3, can rotate about all three axes, so it has a total of six degrees of freedom.

Because a polyatomic molecule has more degrees of freedom than a monoatomic one, it has more possible states for a given amount of energy. That is, its entropy is higher for the same energy. From the definition of temperature, \(1/T=dS/dE\), we conclude that it has a lower temperature for the same energy. In other words, it is more difficult to heat \(n\) molecules of difluoroethane than it is to heat \(n\) atoms of helium. When the Clément-Desormes experiment is carried out, the result \(b\) therefore depends on the shape of the molecule! Who would have dreamed that such simple observations, correctly interpreted, could give us this kind of glimpse of the microcosm?

Let's go ahead and calculate how this works. Suppose a gas is allowed to expand without being able to exchange heat with the rest of the universe. The loss of thermal energy from the gas equals the work it does as it expands, and using the result of homework problem 2 on page 335, the work done in an infinitesimal expansion equals \(PdV\), so

\[\begin{equation*} dE + P dV = 0 . \end{equation*}\]

(If the gas had not been insulated, then there would have been a third term for the heat gained or lost by heat conduction.)

From section 5.2 we have \(E=(3/2)PV\) for a monoatomic ideal gas. More generally, the equipartition theorem tells us that the 3 simply needs to be replaced with the number of degrees of freedom \(\alpha\), so \(dE=(\alpha/2)PdV+(\alpha/2)VdP\), and the equation above becomes

\[\begin{equation*} 0 = \frac{\alpha+2}{2}PdV+\frac{\alpha}{2}VdP . \end{equation*}\]

Rearranging, we have

\[\begin{equation*} (\alpha+2)\frac{dV}{V} = -\alpha\frac{dP}{P} . \end{equation*}\]

Integrating both sides gives

\[\begin{equation*} (\alpha+2) \ln V = -\alpha \ln P + \text{constant} , \end{equation*}\]

and taking exponentials on both sides yields

\[\begin{equation*} V^{\alpha+2} \propto P^{-\alpha} . \end{equation*}\]

 

We now wish to reexpress this in terms of pressure and temperature. Using \(V\propto T/P\) to eliminate \(V\) gives

\[\begin{equation*} T \propto P^b , \end{equation*}\]

where \(b=2/(\alpha+2)\) is equal to 2/5, 2/7, or 1/4, respectively, for a monoatomic, diatomic, or polyatomic gas.
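As a small sketch of what this means in practice, the following lines tabulate \(b\) for the three cases and show how much each kind of gas cools when an insulated expansion halves its pressure, which is why the Clément-Desormes result depends on the shape of the molecule:

```python
from fractions import Fraction

# b = 2/(alpha + 2), where alpha is the number of degrees of freedom per molecule
for kind, alpha in [("monoatomic", 3), ("diatomic", 5), ("polyatomic", 6)]:
    b = Fraction(2, alpha + 2)
    # T is proportional to P**b, so halving P multiplies T by 2**(-b)
    cooling = 2.0 ** (-float(b))
    print(f"{kind:>11}: alpha = {alpha}, b = {b}, "
          f"T falls to {cooling:.3f} of its value when P is halved")
```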

 

Example 21: Efficiency of the Carnot engine

As an application, we now prove the result claimed earlier for the efficiency of a Carnot engine. First consider the work done during the constant-temperature strokes. Integrating the equation \(dW=PdV\), we have \(W = \int P dV\). Since the thermal energy of an ideal gas depends only on its temperature, there is no change in the thermal energy of the gas during this constant-temperature process. Conservation of energy therefore tells us that work done by the gas must be exactly balanced by the amount of heat transferred in from the reservoir.

\[\begin{align*} Q &= W \\ &= \int P dV \end{align*}\]

For our proof of the efficiency of the Carnot engine, we need only the ratio of \(Q_H\) to \(Q_L\), so we neglect constants of proportionality, and simply substitute \(P\propto T/V\), giving

\[\begin{equation*} Q \propto \int \frac{T}{V} dV \propto T \ln \frac{V_2}{V_1} \propto T \ln \frac{P_1}{P_2} . \end{equation*}\]

The efficiency of a heat engine is

\[\begin{equation*} \text{efficiency} = 1 - \frac{Q_L}{Q_H} . \end{equation*}\]

Making use of the result from the previous proof for a Carnot engine with a monoatomic ideal gas as its working gas, we have

\[\begin{equation*} \text{efficiency} = 1-\frac{T_L\:\ln(P_4/P_3)}{T_H\:\ln(P_1/P_2)} , \end{equation*}\]

where the subscripts 1, 2, 3, and 4 refer to figures d--g on page 311. We have shown above that the temperature is proportional to \(P^b\) on the insulated strokes 2-3 and 4-1, so the pressures must be related by \(P_2/P_3=P_1/P_4\), which can be rearranged as \(P_4/P_3=P_1/P_2\), and we therefore have

\[\begin{equation*} \text{efficiency} = 1 - \frac{T_L}{T_H} . \end{equation*}\]
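As a sanity check, we can pick arbitrary reservoir temperatures and hot-stroke pressures, generate \(P_3\) and \(P_4\) from the adiabatic relation \(T\propto P^b\), and compare the two expressions for the efficiency numerically; the numbers below are invented for the test:

```python
import numpy as np

T_H, T_L = 600.0, 300.0      # hypothetical reservoir temperatures, K
P1, P2 = 5.0, 2.0            # pressures at the ends of the hot isothermal stroke (arbitrary units)
b = 2 / 5                    # monoatomic working gas, alpha = 3

# The insulated strokes obey T proportional to P**b, which fixes the remaining pressures.
P3 = P2 * (T_L / T_H) ** (1 / b)
P4 = P1 * (T_L / T_H) ** (1 / b)

Q_H = T_H * np.log(P1 / P2)  # heat exchanged on the isothermal strokes, up to a common constant
Q_L = T_L * np.log(P4 / P3)

print(1 - Q_L / Q_H, 1 - T_L / T_H)   # both give 0.5
```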

5.4.4 The arrow of time, or “this way to the Big Bang”

Now that we have a microscopic understanding of entropy, what does that tell us about the second law of thermodynamics? The second law defines a forward direction to time, “time's arrow.” The microscopic treatment of entropy, however, seems to have mysteriously sidestepped that whole issue. A graph like figure b on page 316, showing a fluctuation away from equilibrium, would look just as natural if we flipped it over to reverse the direction of time. After all, the basic laws of physics are conservation laws, which don't distinguish between past and future. Our present picture of entropy suggests that we restate the second law of thermodynamics as follows: low-entropy states are short-lived. An ice cube can't exist forever in warm water. We no longer have to distinguish past from future.

But how do we reconcile this with our strong psychological sense of the direction of time, including our ability to remember the past but not the future? Why do we observe ice cubes melting in water, but not the time-reversed version of the same process?

The answer is that there is no past-future asymmetry in the laws of physics, but there is a past-future asymmetry in the universe. The universe started out with the Big Bang. (Some of the evidence for the Big Bang theory is given on page 356.) The early universe had a very low entropy, and low-entropy states are short-lived. What does “short-lived” mean here, however? Hot coffee left in a paper cup will equilibrate with the air within ten minutes or so. Hot coffee in a thermos bottle maintains its low-entropy state for much longer, because the coffee is insulated by a vacuum between the inner and outer walls of the thermos. The universe has been mostly vacuum for a long time, so it's well insulated. Also, it takes billions of years for a low-entropy normal star like our sun to evolve into the high-entropy cinder known as a white dwarf.

The universe, then, is still in the process of equilibrating, and all the ways we have of telling the past from the future are really just ways of determining which direction in time points toward the Big Bang, i.e., which direction points to lower entropy. The psychological arrow of time, for instance, is ultimately based on the thermodynamic arrow. In some general sense, your brain is like a computer, and computation has thermodynamic effects. In even the most efficient possible computer, for example, erasing one bit of memory decreases its entropy from \(k \ln 2\) (two possible states) to \(k \ln 1\) (one state), for a drop of about \(10^{-23}\) J/K. One way of determining the direction of the psychological arrow of time is that forward in psychological time is the direction in which, billions of years from now, all consciousness will have ceased; if consciousness were to exist forever in the universe, then there would have to be a never-ending decrease in the universe's entropy. This can't happen, because low-entropy states are short-lived.

Relating the direction of the thermodynamic arrow of time to the existence of the Big Bang is a satisfying way to avoid the paradox of how the second law can come from basic laws of physics that don't distinguish past from future. There is a remaining mystery, however: why did our universe have a Big Bang that was low in entropy? It could just as easily have been a maximum-entropy state, and in fact the number of possible high-entropy Big Bangs is vastly greater than the number of possible low-entropy ones. The question, however, is probably not one that can be answered using the methods of science. All we can say is that if the universe had started with a maximum-entropy Big Bang, then we wouldn't be here to wonder about it. A longer, less mathematical discussion of these concepts, along with some speculative ideas, is given in “The Cosmic Origins of Time's Arrow,” Sean M. Carroll, Scientific American, June 2008, p. 48.

5.4.5 Quantum mechanics and zero entropy

The previous discussion would seem to imply that absolute entropies are never well defined, since any calculation of entropy will always end up having terms that depend on \(\Delta p\) and \(\Delta x\). For instance, we might think that cooling an ideal gas to absolute zero would give zero entropy, since there is then only one available momentum state, but there would still be many possible position states. We'll see later in this book, however, that the quantum mechanical uncertainty principle makes it impossible to know the position and momentum of a particle simultaneously with perfect accuracy. The best we can do is to determine them with an accuracy such that the product \(\Delta p\Delta x\) is equal to a constant called Planck's constant. According to quantum physics, then, there is a natural minimum size for rectangles in phase space, and entropy can be defined in absolute terms. Another way of looking at it is that according to quantum physics, the gas as a whole has some well-defined ground state, which is its state of minimum energy. When the gas is cooled to absolute zero, the scene is not at all like what we would picture in classical physics, with a lot of atoms lying around motionless. It might, for instance, be a strange quantum-mechanical state called the Bose-Einstein condensate, which was achieved for the first time recently with macroscopic amounts of atoms. Classically, the gas has many possible states available to it at zero temperature, since the positions of the atoms can be chosen in a variety of ways. The classical picture is a bad approximation under these circumstances, however. Quantum mechanically there is only one ground state, in which each atom is spread out over the available volume in a cloud of probability. The entropy is therefore zero at zero temperature. This fact, which cannot be understood in terms of classical physics, is known as the third law of thermodynamics.

5.4.6 Summary of the laws of thermodynamics

Here is a summary of the laws of thermodynamics:

  • The zeroth law of thermodynamics (page 303) If object A is at the same temperature as object B, and B is at the same temperature as C, then A is at the same temperature as C.
  • The first law of thermodynamics (page 298) Energy is conserved.4
  • The second law of thermodynamics (page 314) The entropy of a closed system always increases, or at best stays the same: \(\Delta S\ge0\).
  • The third law of thermodynamics (page 327) The entropy of a system approaches zero as its temperature approaches absolute zero.

From a modern point of view, only the first law deserves to be called a fundamental law of physics. Once Boltzmann discovered the microscopic nature of entropy, the zeroth and second laws could be understood as statements about probability: a system containing a large number of particles is overwhelmingly likely to do a certain thing, simply because the number of possible ways to do it is extremely large compared to the other possibilities. The third law is also now understood to be a consequence of more basic physical principles, but to explain the third law, it's not sufficient simply to know that matter is made of atoms: we also need to understand the quantum-mechanical nature of those atoms, discussed in chapter 13. Historically, however, the laws of thermodynamics were discovered in the nineteenth century, when the atomic theory of matter was generally considered to be a hypothesis that couldn't be tested experimentally. Ideally, with the publication of Boltzmann's work on entropy in 1877, the zeroth and second laws would have been immediately demoted from the status of physical laws, and likewise the development of quantum mechanics in the 1920's would have done the same for the third law.

Figure l: The Otto cycle. 1. In the exhaust stroke, the piston expels the burned air-gas mixture left over from the preceding cycle. 2. In the intake stroke, the piston sucks in fresh air-gas mixture. 3. In the compression stroke, the piston compresses the mixture, and heats it. 4. At the beginning of the power stroke, the spark plug fires, causing the air-gas mixture to burn explosively and heat up much more. The heated mixture expands, and does a large amount of positive mechanical work on the piston. An animated version can be viewed in the Wikipedia article “Four-stroke engine.”
