# 4.5: Statistical Model of Thermodynamics

#### New Construct Definitions

**Microstates of a Physical System and Accessible Microstates of a Physical System**

Before we can make the connection of the state function entropy to concepts of probability, we need to introduce the notion of *accessible microstates* of a system. To do this, we will use an analogy. The story that we will tell is not about entropy. It is about how things become disordered, which is not quite the same as increasing entropy. It is useful, however, because it will help us to get a better idea of what we mean by states and microstates and how probabilities enter the picture. Pay particular attention to how the words *“state”* and *“microstate”* are used. They mean very different things.

The story of the boy and his toys goes like this. Once there was a little boy who had some toys. Mostly, the toys stayed in the boy’s room. Every now and then his mother would clean up his room and put the toys away where “they belonged.” But as soon as she finished picking up and left the little boy alone in his room, he began to interact with the toys. Soon they were again scattered all over the room. And no matter how much the mother wished it would be so, it never happened that after the boy and the toys interacted, they ended up back “where they belonged.” That is, after the boy and the toys “came to equilibrium,” the toys were always scattered all over the room.

An interesting question arises here: “How long does it take for the boy and the toys to ‘come to equilibrium’?” It depends on the details of the interaction: how vigorously the boy plays with his toys, his attention span, etc. Such questions require much *more detailed information* to answer. This will also be true in thermodynamics. The “how long” type of questions are generally not answerable from a focus on the initial and final states of a system.

Now we are going to simplify the analogy so the math won’t be too complicated, but the basic ideas are all still here. The toys can be in many different locations; the toy box, which is where “they belong,” is one of the possible locations. For the other locations, let’s divide the floor up into one-foot squares. Suppose the child’s room is 12 ft by 12 ft, so there is a total of 144 sq ft of floor space. Further, suppose the bed, dresser, etc., take up 45 sq ft, so there are 99 sq ft of bare floor on which toys can be scattered. A particular toy can be on any of these 99 one-foot squares. A toy can also be in the toy box. Thus, there are 100 actual places a particular toy can be located.

Let’s assume we have two toys, a red and a green wooden block. A particular *microstate* of this system might be the red block in the toy box and the green block on the floor at square number 57. This is one of many possible microstates. After the mother has picked up the room, both toys will be in the toy box. This is another *one* of the many possible microstates of the system.

How many *accessible microstates* are there in this system? By accessible, we mean microstates that could actually occur consistent with any *constraints* imposed on the system. For example, if the mother puts the blocks in the toy box and closes the lid and sits on it, the microstates consisting of blocks on particular squares of the floor are not accessible. But suppose the mother leaves the room. Now the boy can open the box. From the *imposed* constraints (the toys must be in the room in one of the allowed places) the toys must be either in the toy box or on the floor on one of the squares. Note that we have imposed the constraint that the toys cannot be under or on the bed, on the dresser, etc. The red block could be in one of 100 positions. For any position of the red block, the green block could also be in any of 100 positions. The total number of *accessible microstates* is 100 x 100 = 10,000. Note that the number of accessible microstates is determined by the constraints that are imposed on the system. For example, suppose we removed the constraint that the toys had to stay in the room. If the window were opened, the little boy might throw one of the toys outside. We could divide up the ground into one-foot squares just like we did the floor. Now we would have more than the 10,000 accessible microstates we just counted. The point is this: there will always be constraints that limit the number of accessible microstates of the system. Interesting things happen when a constraint is removed, thereby increasing the number of accessible states.
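The counting argument above can be sketched in a few lines of code. This is only an illustration of the counting, with invented labels for the places:

```python
from itertools import product

# Each toy can be in the toy box or on one of the 99 bare floor squares.
locations = ["toy_box"] + [f"square_{n}" for n in range(1, 100)]  # 100 places

# An accessible microstate is one (red, green) pair of placements.
microstates = list(product(locations, repeat=2))
print(len(microstates))  # 100 x 100 = 10,000 accessible microstates

# Imposing a constraint (lid closed: both toys must stay in the box)
# shrinks the count of accessible microstates.
constrained = [m for m in microstates if all(loc == "toy_box" for loc in m)]
print(len(constrained))  # 1
```

Removing a constraint (opening the window, say) would simply enlarge the `locations` list, and the microstate count would grow accordingly.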

**Number of microstates in a particular state (not necessarily in an equilibrium state)**

Let’s go back to our analogy briefly. Suppose the mother picks up all the toys and puts them in the toy box. The boy starts to get the toys out, but simply dumps them all on the floor right in front of the toy box. He is intending to dig through them, find the ones he wants to play with, take them away from the toy box, etc. But then his friend comes over and they go outside to play. The toys are NOT in their equilibrium state (defined to be scattered all over the room). They are in a *state* (still pretty much bunched up right at the toy box). There will be many fewer microstates in this non-equilibrium state than in the true equilibrium state, once the boy and the toys have had a chance to “come to equilibrium.” The point here is that we have specified a state that is not an equilibrium state. And as we shall see shortly, we can ask questions about the probabilities of finding the toys in this non-equilibrium state.

**Total number of (accessible) microstates of a physical system, Ω**

There will always be some constraints limiting the number of microstates of a physical system composed of many particles. These are frequently things like the available volume and the total energy. Of course, either of these constraints could be relaxed, by increasing the volume, for example, or adding more energy to the system. But the point is, given whatever constraints actually exist, there is a total number of microstates that the system could find itself in. This total number of accessible microstates is usually represented with the uppercase Greek Omega, Ω.

#### Meaning of the Model Relationships

1) Given an isolated (thermodynamic) system in equilibrium, it will be found with equal probability in each of its accessible microstates.

Now we make a fundamental assumption about a system. **Once a system has reached equilibrium, it is equally likely to be in any of its accessible microstates.** In our analogy, we assume that after the little boy has been in the room for a while, the red block and the green block have equal probability of being at any location. So the system is equally likely to be in any of its 10,000 accessible microstates. Having both toys in the toy box is one of the 10,000 accessible microstates. The probability of finding a system in one particular microstate is simply one divided by the total number of microstates. So the probability of finding both toys in the toy box is 1/10,000 = 10^{-4}. Likewise, the probability of finding the toys in any particular one of the 10,000 microstates, e.g., the red block on square 27 and the green block on square 76, is also 10^{-4}.
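One way to see what this equal-likelihood assumption implies is to sample microstates at random and count how often both toys land in the toy box. A Monte Carlo sketch (the place labels are invented for the example):

```python
import random

random.seed(0)      # make the sampling reproducible
N_PLACES = 100      # toy box + 99 floor squares; place 0 is the toy box
N_TRIALS = 1_000_000

hits = 0
for _ in range(N_TRIALS):
    # Sample a microstate uniformly: each toy independently lands anywhere.
    red, green = random.randrange(N_PLACES), random.randrange(N_PLACES)
    if red == 0 and green == 0:   # both toys in the toy box
        hits += 1

print(hits / N_TRIALS)  # should come out close to 1/10,000 = 1e-4
```

The estimate fluctuates from run to run, but with a million trials it stays close to the exact probability of one in ten thousand.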

2) The probability of finding a physical system in a particular state (not necessarily an equilibrium state) is equal to the ratio of the number of microstates in that state to the total number of microstates accessible to the physical system, Ω.

In our analogy, suppose we now ask the question, “What is the probability of finding both toys on the side of the room where the toy box is located?” The probability of this **state** is given by taking the ratio of the number of microstates in this state to the total number of accessible microstates. The total number of microstates in the state we specified (both toys on one side of the room) is 50 x 50 = 2,500. So the probability of this state occurring is

2,500/10,000 = 1/4
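This ratio can be checked by brute-force counting. In the sketch below the 100 places are numbered 0–99, and places 0–49 are taken to be the toy-box side of the room; that labeling is an assumption made for the example:

```python
from itertools import product

places = list(range(100))        # place 0 = toy box, 1-99 = floor squares
toy_box_side = set(range(50))    # assume 50 of the places lie on the box's side

omega = len(places) ** 2         # total accessible microstates: 10,000

# Count microstates belonging to the state "both toys on the toy-box side".
in_state = sum(
    1 for red, green in product(places, repeat=2)
    if red in toy_box_side and green in toy_box_side
)
print(in_state, in_state / omega)  # 2500, probability 0.25
```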

3) The total number of accessible microstates, Ω, in an isolated physical system is related to the thermodynamic state function entropy, S. The entropy, S, is directly proportional to the natural logarithm of the number of microstates, Ω.

#### Algebraic Representations

**Relationship (2)**

\( Prob(\text{state i}) = \text{(# of microstates in state i)} / \Omega \)

**Relationship (3)**

\( S = k_{B} \ln \Omega \), where \( k_{B} \) is Boltzmann’s constant

\( k_{B} = 1.381 \times 10^{-23} \text{ J/K} \)
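Relationship (3) applied to the toy system gives a concrete, if tiny, entropy. A sketch:

```python
import math

K_B = 1.381e-23  # Boltzmann's constant, J/K

def entropy(omega: int) -> float:
    """Entropy S = k_B * ln(Omega) for Omega accessible microstates."""
    return K_B * math.log(omega)

# The toy system with both blocks free: Omega = 10,000.
print(entropy(10_000))  # about 1.27e-22 J/K, since ln(10,000) is roughly 9.2
```

The logarithm does the taming promised below: even an astronomically large Ω produces a modest number once we take its log and multiply by the very small \( k_{B} \).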

#### Why Entropy Increases from the Statistical Point of View

Now we will take our analogy a little further and imagine that the toys, i.e., the blocks, can interact. The interaction consists of the blocks being stacked and being unstacked. Often, the little boy stacks the blocks, one on top of the other, when he finds them near each other. When they are stacked, the blocks can be moved as a unit from one square to another. (Think of them as being locked together.)

Now we ask the question: How do the number of accessible microstates compare in the two situations of being stacked and being unstacked? When stacked, there are only 100 accessible microstates. But when not-stacked, there are 10,000 accessible microstates. Which way is the interaction most likely to proceed? Once the blocks become unstacked, they are very likely to become widely separated. In the configuration of being widely separated, there are many more accessible microstates than the number of microstates corresponding to being close to each other. So simply by probabilities, it is most likely that the interaction proceeds in a direction that causes the blocks to become unstacked. That is, it proceeds in the direction that increases the number of microstates accessible to the system.
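A sketch of this comparison, using the counts from the analogy:

```python
N_PLACES = 100  # toy box + 99 floor squares

# Stacked: the pair moves as one locked unit, so a single place
# determines the microstate.
omega_stacked = N_PLACES            # 100 accessible microstates

# Unstacked: each block independently occupies any of the places.
omega_unstacked = N_PLACES ** 2     # 10,000 accessible microstates

# If every microstate is equally likely, unstacked configurations are
# favored over stacked ones by this factor.
print(omega_unstacked / omega_stacked)  # 100.0
```

By chance alone, the interaction is a hundred times more likely to leave the blocks unstacked than stacked, which is the direction of increasing Ω.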

The reaction

blocks stacked → blocks not stacked

spontaneously occurs. The opposite reaction does not. The toys are found in states that have the largest probabilities of occurring. The reaction proceeds in a direction that ensures that the number of accessible microstates, Ω, increases.

How could we state the “law of nature” that is illustrated by this analogy? The “law of nature” that explains how systems which are composed of many objects behave when a constraint is removed might be stated in this way:

*When a constraint is removed or when systems interact, they evolve in such a way that Ω always increases.*

This is an interesting “law of nature.” It depends only on notions of the probabilities of certain configurations occurring. Does it really apply to physical systems? The answer is a definite yes. However, when we apply it to real systems, composed not just of two things (two blocks in our story) but of \( 10^{20} \) or more particles, the number of accessible microstates, Ω, gets unmanageably large. Question: How do you make a large number small? Answer: Take the logarithm of it. After doing that, you can also multiply it by a very small number. If we do this to the number of accessible microstates and make the small number Boltzmann’s constant, \( k_{B} \), we get the thermodynamic state function, entropy, S.

**Entropy**

Entropy is a measure of the number of microscopic configurations corresponding to a thermodynamic system in a certain macroscopic state.

\[ S = k_{B} \ln \Omega \]

where Ω is the total number of accessible microstates and \( k_{B} \) is Boltzmann’s constant.

Note that taking the log of Ω to get S is consistent with S being an *extensive* state property like energy. Doubling the size of the system, which doubles the number of particles, squares Ω: the total number of accessible microstates when two systems are brought together is the product of the separate numbers of accessible microstates. Taking logs makes the entropy increase as the sum of the separate entropies, which is what we want an extensive state variable to do. That is,

\( \text{Total Number of Microstates of Sys. 1 and Sys. 2} = \Omega_{1} \cdot \Omega_{2} \)

\( S_{tot} = k_{B} \ln( \Omega_{1} \Omega_{2} ) = k_{B} \ln \Omega_{1} + k_{B} \ln \Omega_{2} \)

so, since \( S = k_{B} \ln \Omega \),

\( S_{tot} = S_{1} + S_{2} \)
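This additivity is just the product rule for logarithms and is easy to confirm numerically. A sketch (the microstate counts are arbitrary illustrative values):

```python
import math

K_B = 1.381e-23  # Boltzmann's constant, J/K

def S(omega: float) -> float:
    """Entropy of a system with omega accessible microstates."""
    return K_B * math.log(omega)

omega_1, omega_2 = 10_000, 250_000   # arbitrary microstate counts

# Entropy of the combined system equals the sum of the separate entropies.
combined = S(omega_1 * omega_2)
separate = S(omega_1) + S(omega_2)
print(math.isclose(combined, separate))  # True
```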

We can state our new law of nature in terms of entropy instead of in terms of accessible microstates.

**Second Law of Thermodynamics**

When systems interact, the interaction always proceeds in a way such that the *total* entropy increases or at best stays constant.

\[ \Delta S \geq 0 \]

This holds for any isolated system.

As an example, consider what happens when two chemicals are brought together (a positional constraint has been removed). The reaction proceeds in a direction such that the total number of accessible microstates increases. That is, the total entropy, S_{total}, will increase, if possible. It might stay the same, but in no case will it decrease. What “drives” the entropy to do this? Answer: the laws of chance!

To make this more clear, let’s consider a reaction in which two reactants, A and B in solution, form the compound AB. When substance A is initially allowed to come into contact with substance B, some A atoms will randomly bond with B atoms. Once molecules of AB have been formed, some AB molecules will randomly break apart.

Now we consider how the number of microstates changes as AB molecules are formed. Does the total number of accessible microstates increase or decrease? The total system will, simply by chance, find itself in a state (particular numbers of molecules, separate ions, etc.) that is for all practical purposes the *most likely* state to occur. If the total number of microstates happens to increase as AB molecules form, then the most likely configurations will be those that have more AB molecules present.

If on the other hand the total number of accessible microstates increases when AB molecules dissociate, then the most likely states will be those that have fewer AB molecules. So the system would then evolve so that fewer AB molecules exist.

The essential point is that when constraints are removed and systems are allowed to interact, they will evolve, obeying the laws of chance, in the direction in which the total number of accessible microstates (of all systems involved in the interaction, including the environment) is maximized.

**What about energy considerations?**

The discussion we have had so far has not considered the effect of energy and how that also drives the direction of a reaction. Now we add that part in as well.

We focus our attention on a system that interacts with its environment (other systems). We know from experience that a process tends to occur spontaneously if it is exothermic. That is, it occurs if it transfers energy to thermal systems. The system in question decreases its non-thermal energy by increasing thermal energy somewhere. Apparently, in exothermic spontaneous processes, nature acts to *minimize non-thermal energy*. But we also know that the 2nd law of thermodynamics requires that the *total entropy increase* in the process. Note that the 2nd law does not say that the entropy of the system we are focusing on must increase, but only that the combined entropy of the system and the systems it interacts with must increase. In exothermic reactions, the entropy of the environment increases because heat has been transferred to it. This increase more than makes up for any decrease in entropy that might have occurred in the system of interest.

Can an endothermic process be spontaneous? Suppose the entropy of a system actually increases during an interaction. The second law could then be satisfied even though the reaction is endothermic. The thermal energy can decrease, which decreases the entropy associated with the thermal energy, provided the entropy of the system increases even more in non-thermal ways. In this case, the non-thermal energy of the system increases, while the thermal energy decreases.

#### Further Discussion of Gibbs Energy

To make quantitative the question of whether energy or entropy “wins out” in determining whether something spontaneously happens, recall that the Gibbs function, G, is defined such that at constant temperature,

dG = dH - TdS, or

∆G = ∆H - T∆S.

Changes in the Gibbs energy reflect changes in the enthalpy, ∆H, as well as changes in the entropy through the term T∆S.

In the past, the term “Gibbs free energy” or simply “free energy” has been used for this function, but the preferred term is Gibbs energy.

Whether a particular interaction occurs spontaneously depends on whether the resulting change in the Gibbs energy is negative or positive. If the Gibbs energy decreases, i.e., ∆G < 0, then the interaction occurs without the addition of energy. It proceeds spontaneously.

∆G < 0 spontaneous

∆G > 0 not spontaneous

∆G = 0 equilibrium reached
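These three cases can be collected into a small helper function. The function name and the numerical values in the example are our own, chosen only for illustration; the units follow the usual convention (∆H and ∆G in J/mol, ∆S in J/(mol·K), T in K):

```python
def gibbs_change(delta_H: float, delta_S: float, T: float) -> tuple[float, str]:
    """Return (delta_G, verdict) for a process at constant T and pressure.

    delta_H in J/mol, delta_S in J/(mol K), T in K.
    """
    delta_G = delta_H - T * delta_S
    if delta_G < 0:
        verdict = "spontaneous"
    elif delta_G > 0:
        verdict = "not spontaneous"
    else:
        verdict = "equilibrium"
    return delta_G, verdict

# An endothermic process (delta_H > 0) can still be spontaneous if T*delta_S
# is large enough -- illustrative numbers, not measured data:
print(gibbs_change(delta_H=20_000.0, delta_S=120.0, T=298.0))
# (-15760.0, 'spontaneous')
```

Note how the same ∆H and ∆S can give either verdict depending on T: at low enough temperature the T∆S term no longer outweighs a positive ∆H.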

Why does the Gibbs energy work this way? We can understand this using the basic ideas we have been discussing. There are two sources of the total change in entropy. One is the change of entropy of the system itself, expressed as ∆S. The other is the change in the entropy of the environment when the system transfers or absorbs heat, which is equal to -∆H/T. For a process to occur, the total change in entropy must be positive.

∆(entropy of system) + ∆(entropy of environment) > 0

∆S - ∆H/T > 0, or, multiplying through by T (which is positive), T∆S - ∆H > 0

switching the signs, we get

-T∆S + ∆H < 0 for a process to occur spontaneously.

Re-arranging gives

∆H - T∆S < 0 for a process to occur spontaneously.

So, requiring the change in Gibbs energy to be negative for a process to occur spontaneously is simply a way of applying the constraint of the 2nd law of thermodynamics.

What about the first law? Does it still apply? Definitely. In any interaction, conservation of energy must always be satisfied: energy conservation governs which states are accessible, and the systems must always be in configurations that satisfy it. Putting both energy and entropy considerations together is precisely what the Gibbs function does. Minimization of G is the “best” trade-off between lowering the energy of the system and maximizing the *total* entropy.

So how do we answer the question, “Why do endothermic reactions occur spontaneously?” For example, the reaction that occurs in a “cold pack” is endothermic. Barium hydroxide and ammonium nitrate spontaneously combine to produce barium nitrate, water and aqueous ammonia. In this reaction, the change in Gibbs energy is negative. The entropy of the environment decreases, as heat moves from the environment to the reaction products, but the entropy of the system increases even more. So, although ∆H is positive, T∆S is even larger than ∆H.