# 9.2: Nuclear Reactions


# Binding Energy

It is standard practice to define "free particles" as being in states where their total energy is greater than zero, while particles in a bound state have a total energy of less than zero – it is no different with nucleons. We define the *binding energy* as the total energy needed to tear the nucleons apart and make them all free with zero kinetic energy.

**Alert**

*The energy of a bound state is negative, but the binding energy is defined as the positive energy that must be added to the state to free all the particles. This is sometimes misinterpreted as meaning that there is positive energy stored in the bonds. This confusion is compounded (in the case of chemical reactions) by the fact that an eventual release of energy often begins with a breaking of bonds. Don't fall into this trap – bonds always represent negative total energy, and if breaking them eventually results in energy being extracted, it's because new bonds that are later formed represent an amount of energy that is even more negative.*

Note that unstable nuclei actually have *negative* binding energy – they sit in the well with a positive total energy. This is not to say that the particles are not bound, but if we do the energy accounting before and after freeing them, we find that whatever energy we add to separate them is less than the kinetic energy they have after being separated.

The accounting that we do for binding energy is a bit different from what we have done in the past for things like the hydrogen atom. For one thing, although excited states of nuclei do exist, we will not concern ourselves with them – we'll just be comparing the bound state with the free state. Another difference is the sheer scale of the energies involved. While it requires \(13.6eV\) to ionize a hydrogen atom from its ground state, separating the proton from the neutron in a deuteron requires an addition of more than 2 *million* \(eV\).

With numbers this big, it is convenient to measure these energy changes in terms of changes of mass of the particles involved. There is nothing special about nuclei in this regard – we could have done it with hydrogen also. The sum of the mass of the free proton and free electron is a bit more than the mass of a ground-state hydrogen atom (since there is \(13.6 eV\) less total energy in the hydrogen atom system than when the particles are free), but the mass difference is minuscule – the fractional mass change is on the order of \(10^{-8}\). For a deuteron the fractional change is much larger: \(10^{-3}\).

So when it comes to computing binding energy, all we need is the mass of the nucleus, and the masses of its constituent parts. Suppose we are given an isotope of an atom and are asked to compute the binding energy. Consider what numbers are typically given:

- We know the atomic number of the atom (\(Z\)).
- We are given the isotope, so we know the number of neutrons (\(N\)). We get this by subtracting the atomic number from the mass number given by the isotope.

With these, we can easily look up the atomic weight of the atom \(M\), which gives a measure of the total energy in the bound system when multiplied by \(c^2\). The binding energy is this quantity subtracted from the sum of the energies of the constituent parts: the energy of all the protons and electrons (which we can get by multiplying the energy of a hydrogen atom by \(Z\)), plus the energy of all the neutrons (which we can get by multiplying the energy of a single neutron by \(N\)). That is:

\[ binding\;energy = Z\left(m_Hc^2\right) + N\left(m_nc^2\right) - Mc^2\]

The most convenient way to compute the binding energy is to have just a few constants at your fingertips:

\[\begin{array}{lrclcrcl}\text{the energy equivalence of one atomic mass unit:} & \left(1u\right)c^2 &= &931.5MeV \\ \text{the mass of a hydrogen atom in atomic mass units:} & m_H &= &1.007825u & \Rightarrow & m_H c^2 & = & 938.8MeV \\ \text{the mass of a neutron in atomic mass units:} & m_n &=& 1.008665u & \Rightarrow & m_n c^2 & = & 939.6MeV \end{array}\]
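As a quick numerical check of this formula, we can compute the binding energy of helium-4. This sketch assumes the standard tabulated helium-4 atomic mass of \(4.002602u\), which is not given in this section:

```python
# Binding energy from atomic masses, using the constants listed above.
U_TO_MEV = 931.5      # energy equivalence of 1 u, in MeV
M_H = 1.007825        # hydrogen atom mass (u)
M_N = 1.008665        # neutron mass (u)

def binding_energy(Z, N, M_atom):
    """Binding energy in MeV: Z*(m_H c^2) + N*(m_n c^2) - M c^2."""
    return (Z * M_H + N * M_N - M_atom) * U_TO_MEV

# Helium-4: Z = 2, N = 2, atomic mass 4.002602 u (assumed tabulated value)
print(round(binding_energy(2, 2, 4.002602), 1))  # about 28.3 MeV
```

This agrees with the well-known value of roughly \(28.3MeV\) (about \(7MeV\) per nucleon) for helium-4.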

# Fusion

There are a couple of nuclear reactions that revolve around the idea of binding energy. The simplest to understand is called *nuclear fusion*. As is implied by the name, this consists of the "fusing" of nucleons, which is another way of saying "putting free nucleons into a bound state." We know that once they are bound together, they are in a lower energy state than when free, so the extra energy they started with must go somewhere, such as the kinetic energy of the newly-fused nucleus.

By far the most common form of nuclear fusion (and the one which powers our sun) is one which ultimately fuses hydrogen (the most abundant element in the universe) into helium-4. This obviously takes some effort, given that there are 4 protons available to start, and two protons and two neutrons in the final product. In fact, this process occurs in three steps, each resulting in a final state with a higher binding energy than the state before (thereby providing excess energy output). It should be noted that this process occurs at very high temperatures, which means that the matter is in plasma form (the electrons are all dissociated from the atoms), so “hydrogen-1” is simply a proton, hydrogen-2 a deuteron, etc.

- Two hydrogen nuclei form a deuteron. This requires one of the protons to undergo inverse beta decay. The amount of energy this provides is relatively small (less than \(0.5MeV\)). We write the transition this way:

\[_1^1H \;+\; _1^1H \;\rightarrow \; _1^2H \;+ \;\beta^+ \;+ \;\nu_e \]

- The deuteron fuses with another free proton, to form a helium-3 nucleus. This is difficult to do, because of the proton-proton repulsion (more on this shortly). But the three-way interactions of the three nucleons deepen the strong force well a great deal, releasing an energy equal to about \(5.5MeV\):

\[_1^2H \;+\; _1^1H \;\rightarrow \; _2^3He \]

- Two helium-3 nuclei interact by each shedding a proton and fusing the two deuterons into helium-4. This yields nearly \(13MeV\):

\[_2^3He \;+\; _2^3He \;\rightarrow \; _2^4He \;+\; _1^1H \;+\; _1^1H \]
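The net effect of the three steps is to turn four hydrogen atoms into one helium-4 atom, so the total energy released can be estimated from the atomic mass difference. This sketch again assumes the tabulated helium-4 atomic mass of \(4.002602u\); the atomic masses include the electron masses, which approximately accounts for the positron-annihilation energy, though a small fraction of the total is carried off by the neutrinos:

```python
# Net energy of the chain above: 4 protons ultimately become helium-4.
U_TO_MEV = 931.5
M_H = 1.007825    # hydrogen-1 atomic mass (u)
M_HE4 = 4.002602  # helium-4 atomic mass (u, assumed tabulated value)

q = (4 * M_H - M_HE4) * U_TO_MEV
print(round(q, 1))  # about 26.7 MeV released per helium-4 produced
```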

Let's take a closer look at step 2. This step requires getting two positively-charged particles close enough to each other for the strong force to take over. Let's see what that takes...

The two particles have to get to within about one femtometer of each other. The kinetic energy they must have to accomplish this is easy to calculate – it is equal to the potential energy gained when two charges get this close:

\[V\left(r\right) = \dfrac{e^2}{4\pi\epsilon_o r} \;\;\; \Rightarrow \;\;\; V\left(r=1fm\right)= \dfrac{\left(1.6\times10^{-19}C\right)^2}{4\pi\left(8.85\times10^{-12}CV^{-1}m^{-1}\right) \left(10^{-15}m\right)} = 2.3\times10^{-13}J = 1.44MeV\]

This is a lot of kinetic energy, but it's pretty hot in the sun, so maybe the protons/deuterons are this energetic. We can use this as the average energy per particle to determine the temperature. Treating this as a classical 3-dimensional gas, we get:

\[KE = \frac{3}{2}k_BT=1.44MeV \;\;\; \Rightarrow \;\;\; T=\dfrac{2}{3}\cdot\dfrac{1.44\times10^6eV}{8.62\times10^{-5}eV\cdot K^{-1}}= 1.11\times10^{10}K \]
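The two calculations above can be checked numerically in a few lines:

```python
from math import pi

# Coulomb barrier at r = 1 fm, and the classical temperature it implies.
e = 1.6e-19       # elementary charge (C)
eps0 = 8.85e-12   # vacuum permittivity (C V^-1 m^-1)
r = 1e-15         # separation (m)
kB = 8.62e-5      # Boltzmann constant (eV/K)

U_eV = e / (4 * pi * eps0 * r)   # e^2/(4 pi eps0 r), expressed in eV
T = 2 * U_eV / (3 * kB)          # from (3/2) kB T = U

print(round(U_eV / 1e6, 2))      # about 1.44 MeV
print(f"{T:.2e}")                # about 1.1e10 K
```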

The sun is hot, but it's not *this* hot! Its temperature is on the order of \(10^7K\). So how does it liberate energy from this fusion process? Persistence and luck, otherwise known as tunneling. Here is a diagram of the potential function involved:

**Figure 9.2.1 – Potential Well for Deuteron-Proton Fusion**

The particles tunnel through the barrier that is located around \(r=1fm\) to get into the promised land of the deep well created by the strong nuclear force.

# Radioactive Decay

Perhaps it occurred to the reader that if nucleons can tunnel into their wells to fuse, then larger nuclei should be able to tunnel out of their wells to split apart (called *fission*). In fact, spontaneous fission can occur on small scales, but it turns out that there are much easier (i.e. more common) ways for large nuclei to go to lower energy states. Rather than two large chunks separating, large nuclei will eject much smaller particles. A very common form is called *alpha decay*. An alpha particle is a helium-4 nucleus (2 protons and 2 neutrons, designated by \(\alpha\) or \(_2^4He\)), whose nucleons are extremely tightly bound to each other. As with the case of beta particles, this name came along before an accounting of subatomic structure had begun. That this combination of nucleons was discovered so early shows how likely it is to find nucleons in this configuration, which is a tribute to how tightly-bound they are. Indeed, in our discussion of fusion, the end product was an alpha particle, and a great deal of binding energy was the result.

Suppose we have a large collection of identical nuclei. At any given instant in time, all of these nuclei have the same probability of decaying by ejecting one or more nucleons. Once a nucleus has done so, it is now a new nucleus, and the probability of doing it again changes (typically to a much lower probability, as the nucleus heads for greater stability). If we watch how fast this collection is decaying (i.e. how many are decaying per unit time), it is not surprising that we see more of them decaying per second when we have more of them around, since they all have the same probability. This gives us a simple relation between the decay rate and the population of nuclei able to decay:

\[ \dfrac{dN}{dt} = -\lambda\;N \]

The constant \(\lambda\) is the constant of proportionality that relates the rate of decay to the number of nuclei, and the minus sign comes in because the number of nuclei is decreasing with every decay. This is a simple differential equation to solve, yielding:

\[ N\left(t\right) = N_o\;e^{-\lambda \; t} \]

The constant \(N_o\) is of course the number of nuclei present at \(t=0\). The constant \(\lambda\) has a say in how fast the collection of nuclei decays. The half-life of this nucleus is the amount of time required for half of the collection of nuclei to decay. Writing this quantity in terms of \(\lambda\) is easy: Set the final number equal to one half the starting number, and solve for \(t\):

\[ \frac{1}{2} N_o = N_o\;e^{-\lambda \; t_{1/2}} \;\;\; \Rightarrow \;\;\; t_{1/2}=\dfrac{\ln 2}{\lambda} \]

Note that it doesn’t matter how many nuclei there are initially. If you start with \(N\) and wait for the half-life, then \(\frac{N}{2}\) nuclei remain intact. If you wait that long again, the remainder do *not* decay – only half of them do, leaving one quarter of the nuclei intact.
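The halving behavior described above is easy to see by stepping the decay law forward one half-life at a time. Here the carbon-14 half-life of 5730 years (used later in this section) stands in as a concrete example:

```python
from math import log, exp

# Step the decay law N(t) = N0 * exp(-lambda * t) forward by half-lives.
t_half = 5730.0          # half-life in years (carbon-14)
lam = log(2) / t_half    # decay constant, per year

N0 = 1_000_000
for n in range(1, 4):
    remaining = N0 * exp(-lam * n * t_half)
    print(n, round(remaining))  # halves each time: 500000, 250000, 125000
```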

One other thing to note about radioactive decay. You may be wondering what determines the \(\lambda\) constant – the quantity that essentially reflects how easy it is for a nucleus to decay at any given instant. For this we go back to our discussion of tunneling. The transmission probability is determined by the thickness and height of the barrier. So very large nuclei, which tend to be higher in the well than smaller nuclei, typically have a higher probability of decay, and therefore a shorter half-life.

# Carbon Dating

A very nice application of radioactive decay is *carbon dating*, and it works like this...

Our atmosphere is mostly nitrogen-14, a stable isotope. But the earth is constantly bombarded by cosmic rays, and the energy of these rays will occasionally induce inverse beta decay in these nitrogen atoms, turning one of the nitrogen protons into a neutron. With the same number of nucleons but now 6 protons, this has now become a carbon-14 isotope, and like any isotope, it behaves chemically exactly like its stable cousin, carbon-12. It is radioactive (not stable), but has a relatively long half-life (~5730 years). Over the billions of years that this has been happening, it’s clear that the earth’s population of carbon atoms must reach a steady-state ratio of these two isotopes of carbon, with the decay rate matching the production rate.

Carbon is of course ubiquitous in the cycle of life on earth, which means that every plant or animal participating in the carbon cycle should possess the same ratio of these two isotopes as the planet. But if we take organic matter out of this replacement cycle (like a bone of a long-dead animal, or a piece of parchment made from plant fibers), then there is no way for new carbon-14 to get into it. Those carbon-14 atoms decay into nitrogen-14, while the stable carbon-12 atoms remain intact. If we can measure the ratio of carbon-14 atoms to the total amount of carbon, we can use the radioactive decay law to compute how long that object has been out of the carbon cycle, thereby “dating” it. The uncertainty in this dating is generally around 100 years or so (depending on many factors), and it is really only reliable for time spans of a few tens of thousands of years, which coincidentally is about the span of human recorded history.

So how do we do this process? Let’s start with a number that we need for every calculation – the equilibrium ratio of carbon-14 to carbon-12 (what we find in living organic material), which is measured to be:

\[ \dfrac{N\left(_{\;6}^{14}C\right)}{N\left(_{\;6}^{12}C\right)} \approx 1.3\times 10^{-12} \]

Next we determine the decay constant for carbon-14. It was stated above that the half-life of carbon-14 is 5730 years, so:

\[ t_{1/2}=\dfrac{\ln 2}{\lambda} \;\;\; \Rightarrow \;\;\; \lambda=\frac{\ln 2}{5730y} = \frac{\ln 2}{1.81\times 10^{11}s} = 3.83\times 10^{-12}s^{-1} \]

Now take measurements from the sample we wish to date, and perform the following steps:

- Measure the total number of carbon atoms in the sample (number of moles \(\times\) Avogadro’s number): \(N_{tot}\)
- Determine how many of those atoms started off (when the sample was still alive) as carbon-14:

\[ N_{14}\left(when\;alive\right) = 1.3\times 10^{-12} \times N_{tot} \]

- Measure the current decay rate \(\frac{dN_{14}}{dt}\) of the carbon-14, and use it to solve for the number of carbon-14 atoms currently present in the sample:

\[ \dfrac{dN_{14}}{dt} = -\lambda\;N \;\;\; \Rightarrow \;\;\; N_{14}\left(now\right) = -\dfrac{dN_{14}/dt}{\lambda} = \dfrac{measured\;decay\;rate}{3.83\times 10^{-12}s^{-1}} \]

- Use the radioactive decay formula to determine the time over which the decays have been occurring. That is, solve for \(t\), which is now the only unknown in the equation below.

\[ N_{14}\left(now\right) = N_{14}\left(when\;alive\right)\;e^{-\lambda\;t} \;\;\; \Rightarrow \;\;\; t = \left(5730y\right)\dfrac{\ln\left[\dfrac{N_{14}\left(when\;alive\right)}{N_{14}\left(now\right)}\right]}{\ln 2} \]

As a quick check, we note that if there are half as many carbon-14 atoms now as there were initially, then the ratio of the natural logs is 1, and the time period is one half-life.
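The steps above can be sketched for a hypothetical sample. The measured values below (total carbon atoms and decay rate) are invented purely for illustration:

```python
from math import log

# Carbon dating, following the steps above.
RATIO_ALIVE = 1.3e-12   # equilibrium C-14 / carbon ratio
LAMBDA = 3.83e-12       # C-14 decay constant (s^-1)
T_HALF = 5730.0         # C-14 half-life (years)

N_tot = 1.0e23          # measured total carbon atoms (hypothetical)
decay_rate = 0.25       # measured decays per second (hypothetical)

N14_alive = RATIO_ALIVE * N_tot   # C-14 present when the sample was alive
N14_now = decay_rate / LAMBDA     # C-14 present now
t = T_HALF * log(N14_alive / N14_now) / log(2)   # age in years

print(round(t))  # a bit under one half-life for these numbers
```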

It should be noted that counting the rather rare decays is easier when the sample size is large, which makes this process difficult to implement when there isn't much of the organic material to work with. A better means of counting the number of carbon-14 atoms in the sample has been developed (using mass spectrometry).

# Fission

While our understanding of binding energy explains nicely the source of energy in the process of nuclear fusion, it is not so clear for a case of which everyone is well aware – *nuclear fission*. This is the process that released energy in the first nuclear bomb and in modern-day nuclear power plants. In nuclear fission, energy is released when nuclei are *split* (thus the ubiquitous phrase "splitting the atom"), which would seem to contradict what we know about binding energy needing to be *added* to systems to free the bound particles.

We know that as the nucleus gets larger, some of the nucleons we add are too far away from nucleons on the other side of the nucleus to interact with them through the strong nuclear force. If they are protons, then they are repelling, but not attracting. The nucleon still “sticks” to the nucleus because of the attraction of its nearest neighbors, but the drop in total energy is not as great as when a nucleon was added to a smaller nucleus. This brings down the *average binding energy per nucleon*. That means if we split this nucleus, we end up with two nuclei, each with higher average binding energy per nucleon. But the total number of nucleons hasn’t changed, so the binding energy *increases* after the split. This means that the total energy has gone down, and energy must have been released in the split (going to kinetic energy of the fragments, which repel each other through Coulomb repulsion).
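This argument lets us make a rough estimate of the energy released per fission. The figures of about \(7.6MeV\) per nucleon for uranium and about \(8.5MeV\) per nucleon for mid-mass fragments are typical textbook values assumed here, not numbers given in this section:

```python
# Rough fission energy estimate from average binding energy per nucleon.
A = 235                  # nucleons in a uranium-235 nucleus
BE_PER_A_HEAVY = 7.6     # MeV/nucleon for the large nucleus (assumed)
BE_PER_A_FRAG = 8.5      # MeV/nucleon for the fragments (assumed)

# Same nucleon count before and after, so the released energy is the
# total gain in binding energy.
released = A * (BE_PER_A_FRAG - BE_PER_A_HEAVY)
print(round(released))   # on the order of 200 MeV per fission
```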

**Figure 9.2.2 – Fission of a Large Nucleus**

There is of course much more to fission than this over-simplified model. Earlier we said that as nuclei get larger, they need to take on more neutrons than protons, which means that the neutron-to-proton ratio grows as the nucleus grows. If we split a large nucleus apart, we end up with two smaller nuclei, with the same neutron-to-proton ratio. But we know that smaller nuclei are not stable with so many neutrons, so some of those need to be shed as well. So part of the process of fission is the release of free neutrons.

How do we initiate a fission event? That is, by what means do we add the necessary energy to the nucleus to get it to split? Typically this is done by firing high-energy neutrons into the nucleus. But if fission *produces* high-energy neutrons as well, you can see that there is the potential for a *chain reaction*. In order to use the energy that comes from fission, this chain reaction needs to be controlled, but of course its first use was as a weapon, where the chain reaction was uncontrolled. Large nuclei are decaying all the time – shooting-out neutrons, deuterons and alpha particles. If we increase the density, then there is a greater chance that one of these ejected neutrons will split another nucleus, whose free neutrons could split two more, etc. The trick is getting the fissile material to a sufficiently dense state (known as *critical mass*) to trigger and sustain this chain reaction.