13.4: The Second Law and Entropy


    The second law of thermodynamics is really little more than a formal statement of the observation that heat always flows spontaneously from a warmer to a colder object, and never in reverse.

    More precisely, consider two systems, at different temperatures, that can exchange heat with each other but are otherwise isolated from the rest of the world. The second law states that under those conditions the heat will only flow from the warmer to the colder one.

    The closure of the system—its isolation from any sources of energy—is important in the above statement. It is certainly possible to build devices that will remove heat from a relatively cold place (like the inside of your house on a hot summer day) and exhaust it to a warmer environment. These devices are called refrigerators or heat pumps, and the main thing about them is that they need to be plugged in to operate: that is, they require an external energy source.

    If you have an energy source, then, you can move heat from a colder to a warmer object. To avoid unnecessary complications and loopholes (what if the energy source is a battery that is physically inside your “closed” system?) an alternative formulation of the basic principle, due to Clausius, goes as follows:

    No process is possible whose sole result is the transfer of heat from a cooler to a hotter body.

    The words “sole result” are meant to imply that in order to accomplish this “unnatural” transfer of heat you must draw energy from some source, and so you must be, in some way, depleting that source (the battery, for instance). On the other hand, for the reverse, “spontaneous” process—the flow from hotter to cooler—no such energy source is necessary.

    A mathematical way to formulate the second law would be as follows. Consider two systems, in thermal equilibrium at temperatures \(T_1\) and \(T_2\), that you place in contact so they can exchange heat. For simplicity, assume that exchange of heat is all that happens; no work is done either by the systems or on them, and no heat is transferred to or from the outside world either. Then, if \(Q_1\) and \(Q_2\) are the amounts of heat gained by each system, we must have, by the conservation of energy, \(Q_2 = −Q_1\), so one of these is positive and the other one is negative, and, by the second law, the system with the positive \(Q\) (the one that gains thermal energy) must be the colder one. This is ensured by the following inequality:

    \[ Q_{1}\left(T_{2}-T_{1}\right) \geq 0 \label{eq:13.9} .\]

    So, if \(T_2 > T_1\), \(Q_1\) must be positive, and if \(T_1 > T_2\), \(Q_1\) must be negative. (The equal sign is there to allow for the case in which \(T_1 = T_2\), in which case the two systems are initially in thermal equilibrium already, and no heat transfer takes place.)

    Equation (\ref{eq:13.9}) is valid regardless of the temperature scale. If we use the Kelvin scale, in which all the temperatures are positive³, we can rewrite it by dividing both sides by the product \(T_1T_2\), and using \(Q_2 = -Q_1\), as

    \[ \frac{Q_{1}}{T_{1}}+\frac{Q_{2}}{T_{2}} \geq 0 \label{eq:13.10} .\]
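    Spelled out, the algebra behind this step is just the following (the division preserves the direction of the inequality because, on the Kelvin scale, the product \(T_1T_2\) is positive):

    \[ Q_{1}\left(T_{2}-T_{1}\right) \geq 0 \quad \Rightarrow \quad \frac{Q_{1}}{T_{1}}-\frac{Q_{1}}{T_{2}} \geq 0 \quad \Rightarrow \quad \frac{Q_{1}}{T_{1}}+\frac{Q_{2}}{T_{2}} \geq 0 \]

    where the last step uses \(Q_2 = -Q_1\).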

    This more symmetric statement of the second law is a good starting point from which to introduce the concept of entropy, which I will proceed to do next.


    ³For a while in the 1970s some people were very excited by the concept of negative absolute temperatures, but that is mostly an artificial contrivance used to describe systems that are not really in thermal equilibrium anyway.


    Entropy

    In Equations (\ref{eq:13.9}) and (\ref{eq:13.10}), we have taken \(T_1\) and \(T_2\) to be the initial temperatures of the two systems, but in general, of course, these temperatures will change during the heat transfer process. It is useful to consider an “infinitesimal” heat transfer, \(dQ\), so small that it leads to a negligible temperature change, and then define the change in the system’s entropy by

    \[ d S=\frac{d Q}{T} \label{eq:13.11} .\]

    Here, \(S\) denotes a new system variable, the entropy, which is implicitly defined by Equation (\ref{eq:13.11}). That is to say, suppose you take a system from one initial state to another by adding or removing a series of infinitesimal amounts of heat. We take the change in entropy over the whole process to be

    \[ \Delta S=S_{f}-S_{i}=\int_{i}^{f} \frac{d Q}{T} \label{eq:13.12} .\]
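    As a quick numerical illustration (with numbers made up purely for the example): if a very large reservoir at an essentially constant temperature of 300 K absorbs 150 J of heat, the constant \(1/T\) comes out of the integral in (\ref{eq:13.12}), and its entropy increases by

    \[ \Delta S=\frac{Q}{T}=\frac{150 \: \mathrm{J}}{300 \: \mathrm{K}}=0.5 \: \mathrm{J} / \mathrm{K} .\]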

    Starting from an arbitrary state, we could use this to find the entropy for any other state, at least up to a (probably) unimportant constant (a little like what happens with the energy: the absolute value of the energy does not typically matter; it is only the energy differences that are meaningful). This may be easier said than done, though; there is no a priori guarantee that any two arbitrary states of a system could be connected by a process for which (\ref{eq:13.12}) could be calculated, and, on the other hand, two states might be connected by several different processes, with the integral in (\ref{eq:13.12}) taking a different value for each of them. In other words, there is no guarantee that the entropy thus defined will be a true state function—something that is uniquely determined by the other variables that characterize a system’s state in thermal equilibrium.

    Nevertheless, it turns out that it is possible to show that the integral (\ref{eq:13.12}) is indeed independent of the “path” connecting the initial and final states, at least as long as the physical processes considered are “reversible” (a constraint that basically amounts to the requirement that heat be exchanged, and work done, only in small increments at a time, so that the system never departs too far from a state of thermal equilibrium). I will not attempt the proof here, but merely note that this provides the following, alternative formulation of the second law of thermodynamics:

    For every system in thermal equilibrium, there is a state function, the entropy, with the property that it can never decrease for a closed system.

    You can see how this covers the case considered in the previous section, of two objects, 1 and 2, in thermal contact with each other but isolated from the rest of the world. If object 1 absorbs some heat \(dQ_1\) while at temperature \(T_1\) its change in entropy will be \(dS_1 = dQ_1/T_1\), and similarly for object 2. The total change in the entropy of the closed system formed by the two objects will then be

    \[ d S_{\text {total }}=d S_{1}+d S_{2}=\frac{d Q_{1}}{T_{1}}+\frac{d Q_{2}}{T_{2}} \label{eq:13.13} \]

    and the requirement that this cannot be negative (that is, \(S_{total}\) must not decrease) is just the same as Equation (\ref{eq:13.10}), in differential form.

    Once again, this simply means that the hotter object gives off the heat and the colder one absorbs it, but when you look at it in terms of entropy it is a bit more interesting than that. You can see that the entropy of the hotter object decreases (negative \(dQ\)), and that of the colder one increases (positive \(dQ\)), but by a different amount: in fact, it increases so much that it makes the total change in entropy for the system positive. This shows that entropy is rather different from energy (which is simply conserved in the process). You can always make it increase just by letting a process “take its normal course”—in this case, just letting the heat flow from the warmer to the colder object until they reach thermal equilibrium with each other (at which point, of course, the entropy will stop increasing, since it is a function of the state and the state will no longer change).
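    To put some illustrative numbers on this (chosen only for the example, and assuming the objects are large enough that their temperatures barely change): if 1 J of heat passes from an object at 300 K to one at 280 K, the total entropy change is

    \[ \Delta S_{\text {total }}=\frac{-1 \: \mathrm{J}}{300 \: \mathrm{K}}+\frac{1 \: \mathrm{J}}{280 \: \mathrm{K}} \simeq-0.00333 \: \mathrm{J} / \mathrm{K}+0.00357 \: \mathrm{J} / \mathrm{K} \simeq 2.4 \times 10^{-4} \: \mathrm{J} / \mathrm{K} \]

    so the colder object gains a little more entropy than the hotter one loses, and the total entropy of the pair indeed increases.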

    Although not immediately obvious from the above, the absolute (or Kelvin) temperature scale plays an essential role in the definition of the entropy, in the sense that only in such a scale (or another scale linearly proportional to it) is the entropy, as defined by Equation (\ref{eq:13.12}), a state variable; that is, only when using such a temperature scale is the integral (\ref{eq:13.12}) path-independent. The proof of this (which is much too complicated to even sketch here) relies essentially on the Carnot principle, to be discussed next.

    The Efficiency of Heat Engines

    By the beginning of the 19th century, an industrial revolution was underway in England, due primarily to the improvements in the efficiency of steam engines that had taken place a few decades earlier. It was natural to ask how much this efficiency could ultimately be increased, and in 1824, a French engineer, Nicolas Sadi Carnot, wrote a monograph that provided an answer to this question.

    Carnot modeled a “heat engine” as an abstract machine that worked in a cycle. In the course of each cycle, the engine would take in an amount of heat \(Q_h\) from a “hot reservoir,” give off (or “exhaust”) an amount of heat |\(Q_c\)| to a “cold reservoir,” and produce an amount of work |\(W\)|. (I am using absolute value bars here because, from the point of view of the engine, \(Q_c\) and \(W\) must be negative quantities.) At the end of the cycle, the engine should be back to its initial state, so \(\Delta E_{engine} = 0\). The hot and cold reservoirs were supposed to be systems with very large heat capacities, so that the change in their temperatures as they took in or gave off the heat from or to the engine would be negligible.

    If \(\Delta E_{engine} = 0\), we must have

    \[ \Delta E_{\text {engine }}=Q_{h}+Q_{c}+W=Q_{h}-\left|Q_{c}\right|-|W|=0 \label{eq:13.14} \]

    that is, the work produced by the engine must be

    \[ |W|=Q_{h}-\left|Q_{c}\right| \label{eq:13.15} .\]

    The energy input to the engine is \(Q_h\), so it is natural to define the efficiency as \(\epsilon=|W| / Q_{h}\); that is to say, the Joules of work done per Joule of heat taken in. A value of \(\epsilon = 1\) would mean an efficiency of 100%, that is, the complete conversion of thermal energy into macroscopic work. By Equation (\ref{eq:13.15}), we have

    \[ \epsilon=\frac{|W|}{Q_{h}}=\frac{Q_{h}-\left|Q_{c}\right|}{Q_{h}}=1-\frac{\left|Q_{c}\right|}{Q_{h}} \label{eq:13.16} \]

    which shows that \(\epsilon\) will always be less than 1 as long as the heat exhausted to the cold reservoir, \(Q_c\), is nonzero. This is necessarily the case for steam engines: the steam needs to be cooled off at the end of the cycle so that a new cycle can start.
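    For instance (with numbers invented purely for illustration), an engine that takes in \(Q_h\) = 1000 J per cycle and exhausts \(|Q_c|\) = 700 J must produce

    \[ |W|=Q_{h}-\left|Q_{c}\right|=1000 \: \mathrm{J}-700 \: \mathrm{J}=300 \: \mathrm{J}, \qquad \epsilon=\frac{|W|}{Q_{h}}=\frac{300 \: \mathrm{J}}{1000 \: \mathrm{J}}=0.3 \]

    that is, an efficiency of 30%.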

    Carnot considered a hypothetical “reversible” engine (sometimes called a Carnot machine), which could be run backwards while interacting with the same two reservoirs. In backwards mode, the machine would work as a refrigerator or heat pump. It would take in an amount of work \(W\) per cycle (from some external source) and use that to absorb the amount of heat |\(Q_c\)| from the cold reservoir and dump the amount \(Q_h\) to the hot reservoir. Carnot argued that no heat engine could have a greater efficiency than a reversible one working between the same heat reservoirs, and, consequently, that all reversible engines, regardless of their composition, would have the same efficiency when working between the same temperatures. His argument was based on the observation that a hypothetical engine with a greater efficiency than the reversible one could be used to drive a reversible one in refrigerator mode, to produce as the sole result the transfer of some net amount of heat from the cold to the hot reservoir⁴, something that, as we argued at the beginning of this section, should be impossible.

    What makes this result more than a theoretical curiosity is the fact that an ideal gas would, in fact, provide a suitable working substance for a Carnot machine, if put through the following cycle (the so-called “Carnot cycle”): an isothermal expansion, followed by an adiabatic expansion, then an isothermal compression, and finally an adiabatic compression. What makes this ideally reversible is the fact that the heat is exchanged with each reservoir only when the gas is at (nearly) the same temperature as the reservoir itself, so by just “nudging” the temperature up or down a little bit you can get the exchange to go either way. When the ideal gas laws are used to calculate the efficiency of such a machine, the result (the Carnot efficiency) is

    \[ \epsilon_{C}=1-\frac{T_{c}}{T_{h}} \label{eq:13.17} \]

    where the temperatures must be measured on the Kelvin scale, the natural temperature scale for an ideal gas.
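    For reference, here is a compressed version of that calculation, using two standard ideal-gas results that are quoted here without proof: in an isothermal step the heat exchanged by \(n\) moles of gas is \(Q = nRT\ln(V_f/V_i)\), and in a quasi-static adiabatic step the quantity \(TV^{\gamma-1}\) stays constant. Labeling the corners of the cycle A\(\to\)B (isothermal expansion at \(T_h\)), B\(\to\)C (adiabatic expansion), C\(\to\)D (isothermal compression at \(T_c\)), and D\(\to\)A (adiabatic compression), one finds

    \[ Q_{h}=n R T_{h} \ln \frac{V_{B}}{V_{A}}, \qquad\left|Q_{c}\right|=n R T_{c} \ln \frac{V_{C}}{V_{D}}, \qquad \frac{V_{B}}{V_{A}}=\frac{V_{C}}{V_{D}} \]

    where the last equality follows from combining the two adiabatic conditions. Hence \(|Q_c|/Q_h = T_c/T_h\), and Equation (\ref{eq:13.17}) follows from (\ref{eq:13.16}). As a numerical illustration (temperatures made up for the example), \(T_h\) = 500 K and \(T_c\) = 300 K give \(\epsilon_C = 1 - 300/500 = 0.4\), or 40%.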

    It is actually easy to see the connection between this result and the entropic formulation of the second law presented above. Suppose for a moment that Carnot’s principle does not hold, that is to say, that we can build an engine with \(\epsilon > \epsilon_C = 1 − T_c/T_h\). Since (\ref{eq:13.16}) must hold in any case (because of conservation of energy), we find that this would imply

    \[ 1-\frac{\left|Q_{c}\right|}{Q_{h}}>1-\frac{T_{c}}{T_{h}} \label{eq:13.18} \]

    and then some very simple algebra shows that

    \[ -\frac{Q_{h}}{T_{h}}+\frac{\left|Q_{c}\right|}{T_{c}}<0 \label{eq:13.19} .\]
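    For completeness, the “very simple algebra” goes like this: canceling the 1’s in (\ref{eq:13.18}) gives \(|Q_c|/Q_h < T_c/T_h\), and multiplying both sides by the positive quantity \(Q_h/T_c\) gives

    \[ \frac{\left|Q_{c}\right|}{T_{c}}<\frac{Q_{h}}{T_{h}} \]

    which is just (\ref{eq:13.19}) rearranged.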

    But now consider the total entropy of the system formed by the engine and the two reservoirs. The engine’s entropy does not change (because it works in a cycle); the entropy of the hot reservoir goes down by an amount \(−Q_h/T_h\); and the entropy of the cold reservoir goes up by an amount \(|Q_c|/T_c\). So the left-hand side of Equation (\ref{eq:13.19}) actually equals the total change in entropy, and Equation (\ref{eq:13.19}) is telling us that this change is negative (the total entropy goes down) during the operation of this hypothetical heat engine whose efficiency is greater than the Carnot limit (\ref{eq:13.17}). Since this is impossible (the total entropy of a closed system can never decrease), we conclude that the Carnot limit must always hold.

    As you can see, the seemingly trivial observation with which I started this section (namely, that heat always flows spontaneously from a hotter object to a colder object, and never in reverse) turns out to have profound consequences. In particular, it means that the complete conversion of thermal energy into macroscopic work is essentially impossible⁵, which is why we treat mechanical energy as “lost” once it is converted to thermal energy. By Carnot’s theorem, to convert some of that thermal energy back to work we would need to introduce a colder reservoir (and take advantage, so to speak, of the natural flow of heat from hotter to colder), and then we would only get a relatively small conversion efficiency, unless the cold reservoir is really at a very low Kelvin temperature (and to create such a cold reservoir would typically require refrigeration, which again consumes energy). It is easy to see that Carnot efficiencies for reservoirs close to room temperature are rather pitiful. For instance, if \(T_h\) = 300 K and \(T_c\) = 273 K, the best conversion efficiency you could get would be 0.09, or 9%.


    ⁴The higher-efficiency engine could produce the same amount of work as the reversible one while absorbing less heat from the hot reservoir and dumping less heat to the cold one. If all the work output of this engine were used to drive the reversible one in refrigerator mode, the result would therefore be a net flow of heat out of the cold reservoir and a net flow of heat into the hot one.

    ⁵At least it is impossible to do using a device that runs in a cycle. For a one-use-only device, you might do something like pump heat into a gas and allow it to expand, doing work as it does so, but eventually you will run out of room to expand into...


    But What IS Entropy, Anyway?

    The existence of this quantity, the entropy, which can be measured or computed (up to an arbitrary reference constant) for any system in thermal equilibrium, is one of the great discoveries of 19th century physics. There are tables of entropies that can be put to many uses (for instance, in chemistry, to figure out which reactions will happen spontaneously and which ones will not), and one could certainly take the point of view that those tables, plus the basic insight that the total entropy can never decrease for a closed system, are all one needs to know about it. From this perspective, entropy is just a convenient number that we can assign to any equilibrium state of any system, which gives us some idea of which way it is likely to go if the equilibrium is perturbed.

    Nonetheless, it is natural for a physicist to ask what, exactly, this number corresponds to. What property of the equilibrium state is actually captured by this quantity? The question is especially natural in the context of a microscopic description, since that is, by and large, how physicists have always tried to explain things: by breaking them up into little pieces and figuring out what the pieces are doing. What are the molecules or atoms of a system doing in a state of high entropy that is different from a state of low entropy?

    The answer to this question is provided by the branch of physics known as Statistical Mechanics, which today is mostly quantum-based (since you need quantum mechanics to describe most of what atoms or molecules do, anyway), but which started in the context of pure classical mechanics in the mid-to-late 1800s and, despite this handicap, was actually able to make surprising headway for a while.

    From this microscopic, but still classical, perspective (which applies, for instance, moderately well to an ideal gas), the entropy can be seen as a measure of the spread in the velocities and positions of the molecules that make up the system. If you think of a probability distribution, it has a mean value and a standard deviation. In statistical mechanics, the molecules making up the system are described statistically, by giving the probability that they might have a certain velocity or be at some point or another. These probability distributions may be very narrow (small standard deviation), if you are pretty certain of the positions or the velocities, or very broad, if you are not very certain at all, or rather expect the actual velocities and positions to be spread over a considerable range of values. A state of large entropy corresponds to a broad distribution, and a state of small entropy to a narrow one.

    For an ideal gas, the temperature determines both the average molecular speed and the spread of the velocity distribution. This is because the average velocity is zero (since it is just as likely to be positive or negative), so the only way to make the average speed (or root-mean-square speed) large is to have a broad velocity distribution, which makes large speeds comparatively more likely. Then, as the temperature increases, so does the range of velocities available to the molecules, and correspondingly the entropy. Similarly (but more simply), for a given temperature, a gas that occupies a smaller volume will have a smaller entropy, since the range of positions available to the molecules will be smaller.
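    The volume dependence can be made quantitative using Equation (\ref{eq:13.12}) together with a standard ideal-gas result, quoted here without derivation: for \(n\) moles of ideal gas expanding reversibly at constant temperature from a volume \(V_i\) to a volume \(V_f\), the heat absorbed is \(Q = nRT\ln(V_f/V_i)\), so

    \[ \Delta S=\frac{Q}{T}=n R \ln \frac{V_{f}}{V_{i}} \]

    which is positive when the volume (and hence the range of positions available to the molecules) increases, and negative when it decreases.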

    These considerations may help us understand an important property of entropy, which is that it increases in all irreversible processes. To begin with, note that this makes sense, since, by definition, these are processes that do not “reverse” spontaneously. If a process involves an increase in the total entropy of a closed system, then the reverse process will not happen, because it would require a spontaneous decrease in entropy, which the second law forbids. But, moreover, we can see the increase in entropy directly in many of the irreversible processes we have considered this semester, such as the ones involving friction. As I just pointed out above, in general, we may expect that increasing the temperature of an object will increase its entropy (other things being equal), regardless of how the increase in temperature comes about. Now, when mechanical energy is lost due to friction, the temperature of both of the objects (surfaces) involved increases, so the total entropy will increase as well. That marks the process as irreversible.

    Another example of an irreversible process might be the mixing of two gases (or of two liquids, like cream and coffee). Start with all the “brown” molecules to the left of a partition, and all the “white” molecules to the right. After you remove the partition, the system will reach an equilibrium state in which the range of positions available to both the brown and white molecules has increased substantially—and this is, according to our microscopic picture, a state of higher entropy (other things, such as the average molecular speeds, being equal⁶).
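    The entropy increase in the gas-mixing example can also be quantified, under assumptions chosen to keep the arithmetic simple (equal amounts, same temperature and pressure on both sides, and two genuinely different kinds of molecules): removing the partition lets each species double its available volume, so by the ideal-gas formula quoted above the total entropy change is \(\Delta S = 2nR\ln 2\), where \(n\) is the number of moles of each species. Note that this applies only because the two gases are actually different; “mixing” a gas with itself changes nothing.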

    For quantum mechanical systems, where the position and velocity are not simultaneously well defined variables, one uses the more abstract concept of “state” to describe what each molecule is doing. The entropy of a system in thermal equilibrium is then defined as a measure of the total number of states available to its microscopic components, compatible with the constraints that determine the macroscopic state (such as, again, total energy, number of particles, and volume).
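    The standard quantitative version of this statement, quoted here without proof, is Boltzmann’s formula: if \(\Omega\) denotes the number of microscopic states compatible with the macroscopic constraints, then

    \[ S=k_{B} \ln \Omega \]

    where \(k_B\) is Boltzmann’s constant. A larger number of available microscopic states means a larger entropy, in agreement with the qualitative picture sketched above.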


    ⁶In the case of cream and coffee, the average molecular speeds will not be equal—the cream will be cold and the coffee hot—but the resulting exchange of heat is just the kind of process I described at the beginning of the chapter, and we have seen that it, too, results in an increase in the total entropy.


    This page titled 13.4: The Second Law and Entropy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Julio Gea-Banacloche (University of Arkansas Libraries) via source content that was edited to the style and standards of the LibreTexts platform.