# 27: The Fourier Transform


[** This chapter is under construction **]

In the next chapter we will introduce the wave equation due to its importance in understanding the dynamics of the primordial plasma. In one dimension the wave equation can be written as

\[\frac{\partial^2 \Psi (x, t)} { \partial t^2} = v^2 \frac{\partial^2 \Psi (x, t)} {\partial x^2}. \]

We will leave a discussion of the physics of this equation and the primordial plasma to the next chapter. Here, we will focus on the use of Fourier methods to solve for the evolution of \( \Psi(x,t) \), assuming it obeys the above equation and that we are given the value of \(\Psi\) and its time derivative at some initial time for all values of \(x\). Fourier methods have a broad range of applications well beyond the dynamics of the wave equation, in both experimental and theoretical physics. For the student of physics, time spent developing facility with Fourier transforms is time well spent.

#### Solving the Wave Equation in Fourier Space

You may already be familiar with a method for solving partial differential equations known as separation of variables. Using separation of variables to solve the wave equation, we would guess a solution of the form \( \Psi (x, t) = X(x)T(t) \). Plugging this into the wave equation yields two simple ODEs: one for \( T(t) \) and one for \( X(x) \). Now though, we'd like to introduce you to another way to analyze partial differential equations (PDEs): Fourier methods.

The basic idea here is that we transform from a basis in which the time evolution is complicated (one in which the field is described as a function of position), to a basis in which the time evolution is remarkably simple (one in which the field is described as a collection of Fourier modes). We do the time evolution in this new basis, and then we transform back to our original basis.

We will work here with the continuous Fourier transform. The discrete Fourier transform, which is what one actually evaluates on a computer, can be understood as a discretized version of the continuous one.

We start off, in a manner that may seem a little backwards, by defining the inverse Fourier transformation:

\[ h(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk e^{ikx} \tilde h(k). \label{eqn:IFT} \]

The coefficients \(\tilde h(k)\) are complex (they have real and imaginary parts); recall that \(\exp(ikx) = \cos(kx)+i\sin(kx)\). We start here because there is a theorem stating that a broad class of functions of \(x\) can be written as sums over \(\exp(ikx)\) for a continuum of values of \(k\), with appropriately chosen complex coefficients. That is, we can represent the information in a function \(h(x)\) by its Fourier coefficients \(\tilde h(k)\), with the relationship between the two given by Equation \ref{eqn:IFT}. The functions \(\exp(ikx)\) are known as Fourier modes. Since \(\exp(ikx) = \exp(ik(x+2\pi/k))\), a Fourier mode has a wavelength of \(2\pi/k\). We call \(k\) the 'wavenumber.'
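To build some intuition for representing a function as a sum of Fourier modes, here is a small numerical sketch (not from the text; it uses a finite sum of sine modes rather than the continuum of \(k\) values): a square wave of amplitude 1 has the well-known coefficients \(4/(n\pi)\) for odd \(n\), and a partial sum of those modes already reproduces it closely.

```python
import numpy as np

# Illustration: approximate a square wave by summing a finite number of
# Fourier modes sin(n x). A square wave of amplitude 1 on (0, pi) has
# coefficients 4/(n*pi) for odd n, and zero for even n.
x = np.linspace(0, 2 * np.pi, 1000)
partial_sum = np.zeros_like(x)
for n in range(1, 40, 2):  # odd harmonics only
    partial_sum += (4 / (n * np.pi)) * np.sin(n * x)

# At x = pi/2 the square wave equals 1; the partial sum should be close.
print(abs(partial_sum[np.argmin(abs(x - np.pi / 2))] - 1.0))  # small
```

Adding more harmonics shrinks the error everywhere except right at the jumps, where the finite sum overshoots (the Gibbs phenomenon).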

One can do Fourier transforms in time or in space or both. Here we are only going to be doing Fourier transforms in space, although we will consider Fourier transforms in space *at all points in time*. To be explicit about this, we can rewrite Equation \ref{eqn:IFT} to include a \(t\) argument of the functions:

\[ h(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk e^{ikx} \tilde h(k,t). \label{eqn:IFTwithTime} \]

It's the same transformation, but now we are explicit that we do this transformation at all values of \(t\).

Recall that we claimed that the evolution of the \(\tilde h(k,t)\) would be simple. To figure out what equation governs the evolution of these coefficients, we first need to know how to obtain \(\tilde h(k,t)\) from a given \(h(x,t)\). For simplicity, we return to leaving off the \(t\) dependence. We already know how to go from \(\tilde h(k)\) to \(h(x)\): that is what we called the inverse Fourier transform, Equation \ref{eqn:IFT}. So we are now looking for the inverse of this operation, which we will naturally call the Fourier transform.

Let's work our way toward the Fourier transform by first pointing out an important property of Fourier modes: they are *orthonormal*. This means that if we integrate over all space the complex conjugate of one Fourier mode, \(e^{-ikx}\), multiplied by another Fourier mode, \(e^{ik'x}\), the result is \(2\pi\) times the Dirac delta function:

\[\int_{-\infty}^{\infty} dx e^{-ikx}e^{ik'x} = 2\pi \delta(k-k') \label{eqn:OrthoNormal}\]

where the Dirac delta function is a continuum version of the Kronecker delta function, defined by its integral over \(k\) such that

\[ \int_{-\infty}^{\infty} dk \delta(k-k') f(k) = f(k'). \label{eqn:DiracDelta} \]

You can loosely think of the Dirac delta function as being zero for all non-zero values of its argument and \(+\infty\) when its argument is zero.
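The orthonormality relation has a simple discrete analogue that is easy to verify on a computer (a sketch under assumptions not in the text: an \(N\)-point periodic grid and integer wavenumbers): the sum over grid points of \(e^{-ikx}e^{ik'x}\) equals \(N\) when \(k = k'\) and vanishes otherwise, playing the role of \(2\pi\delta(k-k')\).

```python
import numpy as np

# Discrete analogue of the orthonormality relation: on an N-point grid
# covering one period, modes exp(i k x) with integer k satisfy
#   sum_x exp(-i k x) exp(i k' x) = N if k == k', else 0.
N = 64
x = 2 * np.pi * np.arange(N) / N

def mode_overlap(k, kp):
    return np.sum(np.exp(-1j * k * x) * np.exp(1j * kp * x))

print(mode_overlap(3, 3))  # ~ N
print(mode_overlap(3, 5))  # ~ 0
```

The case \(k \ne k'\) vanishes because the terms are evenly spaced points on the unit circle, which sum to zero.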

From these equations one can derive what we call the Fourier transform:

\[ \tilde h(k) = \int_{-\infty}^{\infty} dx e^{-i kx} h(x) \label{eqn:FT} \]

and thus the answer to the question of how to obtain \(\tilde h(k,t)\) from \(h(x,t)\).
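As a sanity check of this transform pair, here is a small numerical sketch (the Gaussian test function and grid are assumptions, not from the text): with the convention above, the Gaussian \(h(x) = e^{-x^2/2}\) has \(\tilde h(k) = \sqrt{2\pi}\, e^{-k^2/2}\), which we can confirm by direct numerical integration.

```python
import numpy as np

# Numerically evaluate the Fourier transform integral for a Gaussian and
# compare with the known analytic answer sqrt(2*pi) * exp(-k^2/2).
x = np.linspace(-20, 20, 4001)
h = np.exp(-x**2 / 2)
dx = x[1] - x[0]

def fourier_transform(k):
    # Riemann-sum approximation of the integral over all x
    return np.sum(np.exp(-1j * k * x) * h) * dx

for k in (0.0, 1.0, 2.5):
    analytic = np.sqrt(2 * np.pi) * np.exp(-k**2 / 2)
    print(k, abs(fourier_transform(k) - analytic))  # tiny discretization error
```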

Box \(\PageIndex{1}\)

**Exercise 27.1.1:** Show that one can derive Equation \ref{eqn:FT} from Equations \ref{eqn:IFT}, \ref{eqn:OrthoNormal}, and \ref{eqn:DiracDelta}.

Before deriving the evolution equation for the Fourier coefficients, let's look at an example of a function in the position basis and what it looks like in the Fourier basis. The following image shows a wave, \(\Psi(x)\), in the top panel, and the Fourier transform of that wave in the bottom panel. (Note that \( \mathcal{F}(\Psi) \) indicates the operation of Fourier transforming the function \( \Psi(x) \); i.e., \( \mathcal{F}(\Psi) = \tilde \Psi(k) \).) Notice how the Fourier transform 'picks out' the two spatial frequencies of which the wave is composed.
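A figure like this can be sketched numerically (the two wavenumbers and amplitudes here are assumptions, and the discrete Fourier transform stands in for the continuous one):

```python
import numpy as np

# A wave built from two spatial frequencies, k = 3 and k = 7, and its
# discrete Fourier transform, which peaks at exactly those wavenumbers.
N = 256
x = 2 * np.pi * np.arange(N) / N
psi = 1.0 * np.sin(3 * x) + 0.5 * np.sin(7 * x)

psi_tilde = np.fft.fft(psi)
k = np.fft.fftfreq(N) * N  # integer wavenumbers on a 2*pi-periodic grid

# The two largest |psi_tilde| entries at positive k sit at k = 3 and k = 7.
positive = k > 0
peaks = k[positive][np.argsort(np.abs(psi_tilde[positive]))[-2:]]
print(sorted(peaks))  # [3.0, 7.0]
```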


For a \(\Psi(x,t)\) that obeys the wave equation, let's now find the equation that its Fourier coefficients, \(\tilde \Psi(k,t)\), satisfy. Starting from the wave equation,

\[\frac{\partial^2 \Psi (x, t)} { \partial t^2} = v^2 \frac{\partial^2 \Psi (x, t)} {\partial x^2}, \]

and then substituting in the inverse Fourier transform \( \Psi(x,t) = \frac{1}{2\pi} \int_{- \infty} ^ {\infty} dk \tilde \Psi(k,t) e^{ikx} \) we find:

\[ \frac{ \partial^2 } { \partial t^2} \int_{- \infty} ^ {\infty} dk \tilde \Psi(k,t) e^{ikx} = v^2 \frac{ \partial^2 } { \partial x^2} \int_{- \infty} ^ {\infty} dk \tilde \Psi(k,t) e^{ikx} \]

Distributing the derivatives (each \(x\) derivative brings down a factor of \(ik\)) gives:

\[ \int_{- \infty} ^ {\infty} dk \frac{ \partial^2 \tilde \Psi(k,t)} { \partial t^2} e^{ikx} = - \int_{- \infty} ^ {\infty} dk (kv)^2 \tilde \Psi(k,t) e^{ikx}.\]

We can then rearrange terms to find:

\[ \int_{- \infty} ^ {\infty} dk \bigg [ \frac{ \partial^2 \tilde \Psi(k,t)} { \partial t^2} + (kv)^2 \tilde \Psi(k,t) \bigg ] e^{ikx} = 0. \]

It turns out that the only way the left-hand side can be zero for all values of \(x\) is if the quantity in square brackets is zero for all values of \(k\) (see Box below) so we get that

\[ \frac{ \partial^2 \tilde \Psi(k,t)} { \partial t^2} + (kv)^2 \tilde \Psi(k,t) = 0. \label{eqn:WaveEquationInFourierSpace}\]

Box \(\PageIndex{2}\)

**Exercise 27.2.1:** Prove that if

\[ \int_{- \infty} ^ {\infty} dk f(k) e^{ikx} = 0 \label{eqn:IntegratesToZero}\]

for all \(x\), then \(f(k) = 0\) for all \(k\).

First, multiply the left-hand side of Equation \ref{eqn:IntegratesToZero} by \(\exp(-ik'x)\) and integrate over all \(x\). Exchanging the order of the \(x\) and \(k\) integrals and using Equation \ref{eqn:OrthoNormal} to identify the Dirac delta function, you should end up with:

\[ 2\pi f(k') = 0. \]

Finally, note that since this is true for all \(k'\), we have \(f(k) = 0\) for all \(k\).

Equation \ref{eqn:WaveEquationInFourierSpace} is a very common differential equation. You've probably solved it many times! You may recognize it better if we let \( y = \tilde \Psi(k,t) \), so that it reads \( \ddot{y} + k^2 v^2 y = 0\). We can easily write down a solution:

\[ \tilde \Psi(k,t) = A(k) \sin{ (kvt) } + B(k) \cos{ (kvt) } . \]

Thus our general solution back in the space basis is

\[\Psi(x,t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk \bigg [ A(k) \sin{ (kvt) } + B(k) \cos{ (kvt) } \bigg ] e^{ikx}. \]

We can find \(A(k)\) and \(B(k)\) if we know \(\Psi(x,t)\) and \(\dot \Psi(x,t)\) at \(t=0\) because

\[\Psi(x,t=0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk B(k) e^{ikx} \]

and

\[\dot \Psi(x,t=0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk \, kv A(k) e^{ikx}. \]

Given these relationships we see that to get \(B(k)\) and \(A(k)\) we Fourier transform the initial value of \(\Psi\) and its time derivative:

\[ B(k) = \int_{-\infty}^{\infty} dx \Psi(x,t=0) e^{-ikx} \]

and

\[A(k) = \frac{1}{kv} \int_{-\infty}^{\infty} dx \dot \Psi(x,t=0) e^{-ikx}. \]

To summarize, we found that in a Fourier basis, rather than the original space basis, the wave equation simplifies from a partial differential equation to a set of uncoupled ordinary differential equations. The wave equation is easily solved in the Fourier basis and we provided the general solution. This general solution depends on two functions of \(k\) that can be derived from the initial conditions.

Consider the following initial conditions on our string: \( \Psi (x, t = 0) = \sin ( 2 x) \) with \( \dot \Psi (x, t = 0) = 0 \). This is a single wave with \(k = 2\). Taking the Fourier transform, we find (loosely speaking, and ignoring the negative-\(k\) piece) \( \mathcal{F} \big ( \Psi (x, t = 0) \big ) \propto \delta (k - 2) \): the Fourier transform is sharply peaked at \(k = 2\) and zero otherwise. Since \(A(k) = 0\), our general solution tells us that over time the amplitude of this wave oscillates as \( \cos(2 v t) \). The solution to the wave equation for these initial conditions is therefore \( \Psi (x, t) = \sin ( 2 x) \cos (2 v t) \). This wave and its Fourier transform are shown below. The power spectrum is the squared magnitude of the Fourier transform.
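This single-mode example can be checked numerically, using the discrete Fourier transform as a stand-in for the continuous one (a minimal sketch; the grid size, speed \(v\), and time \(t\) are assumed values): transform the initial data, multiply each mode by \(\cos(kvt)\), and transform back.

```python
import numpy as np

N, v, t = 256, 1.5, 0.7
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N) * N  # integer wavenumbers on a 2*pi-periodic grid

psi0 = np.sin(2 * x)     # Psi(x, 0); Psi_dot(x, 0) = 0, so A(k) = 0
B = np.fft.fft(psi0)     # B(k): Fourier transform of the initial data
psi_t = np.fft.ifft(B * np.cos(k * v * t)).real  # evolve each mode, go back

# Compare with the analytic standing wave sin(2x) cos(2vt)
print(np.max(np.abs(psi_t - np.sin(2 * x) * np.cos(2 * v * t))))  # ~ 0
```

Note that \(\cos(kvt)\) is even in \(k\), so the \(k = 2\) and \(k = -2\) modes pick up the same factor, as they must for \(\Psi\) to stay real.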

Now suppose the initial conditions are more complicated, but can be written as an infinite sum of sine waves (again with \( \dot \Psi (x, t=0) = 0 \)) as follows:

\[ \Psi (x, t =0) = \sum_{i=1}^{\infty} A_i \sin (k_i x) \]

Taking the Fourier transform, we find the following sum of delta functions:

\[ \mathcal{F} \bigg ( \Psi (x, t = 0) \bigg ) = \sum_{i = 1}^{\infty} A_i \delta (k- k_i) \]

These coefficients oscillate in time according to:

\[ \mathcal{F} \bigg ( \Psi (x, t) \bigg ) = \sum_{i = 1}^{\infty} A_i \delta (k - k_i) \cos ( k_i v t) \]

Returning to real space we find:

\[ \Psi (x, t) = \sum_{i = 1}^{\infty} A_i \sin ( k_i x ) \cos ( k_i v t) \]

The takeaway here is that the solution to the wave equation can always be written as a sum of independent standing waves. Some examples are shown below. The top panel shows the wave and the bottom panel shows the Fourier transform of that wave. Notice how the evolution seems very complex in real space, but in Fourier space it is merely independent delta functions of oscillating amplitude. This is the beauty of using Fourier methods to analyze the wave equation. If you wanted to see the power spectrum, you would simply square the Fourier transform.
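The multi-mode case can be sketched the same way (the amplitudes and wavenumbers below are assumed values, not from the text): evolve a three-mode initial condition spectrally and compare with the corresponding sum of standing waves.

```python
import numpy as np

N, v, t = 256, 2.0, 0.4
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N) * N  # integer wavenumbers on a 2*pi-periodic grid
amps, ks = [1.0, 0.6, 0.3], [1, 4, 9]

# Initial condition: a sum of sine waves with Psi_dot(x, 0) = 0
psi0 = sum(A * np.sin(m * x) for A, m in zip(amps, ks))
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.cos(k * v * t)).real

# Each mode should evolve independently as A_i sin(k_i x) cos(k_i v t)
standing = sum(A * np.sin(m * x) * np.cos(m * v * t) for A, m in zip(amps, ks))
print(np.max(np.abs(psi_t - standing)))  # ~ 0
```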

Exercise \(\PageIndex{1}\)

Consider the heat equation for a straight rod: \( \frac{\partial \Psi} {\partial t} = \alpha \frac{\partial^2 \Psi} {\partial x^2} \), where \( \Psi (x, t) \) is the temperature at a given point on the rod. Using the techniques from the previous section, find the evolution of the Fourier modes. How can this be interpreted physically?

**Answer:**
We plug the Fourier representation of \( \Psi \) into the heat equation:

\( \frac{ d } { dt} \int_{- \infty} ^ {\infty} \mathcal{F} ( \Psi) e^{ikx} dk = \alpha \frac{ \partial^2 } { \partial x^2} \int_{- \infty} ^ {\infty} \mathcal{F} ( \Psi) e^{ikx} dk \)

Distributing the derivatives and some algebra gives:

\( \int_{- \infty} ^ {\infty} \bigg [ \frac{ d \mathcal{F} (\Psi) } { dt } + \alpha k^2 \mathcal{F} ( \Psi) \bigg ] e^{ikx} dk = 0 \)

This is satisfied if:

\( \frac{d \mathcal{F}(\Psi)} { dt} + \alpha k^2\mathcal{F}( \Psi) = 0 \)

Using separation of variables, we find:

\( \mathcal{F}(\Psi) = C(k) e^{-\alpha k^2 t} \), where \(C(k)\) is determined by the initial conditions (one constant for each mode \(k\)).

Therefore, we see that modes with higher wavenumber decay faster. This makes sense: we would expect spikes in temperature (high curvature) to disappear quickly, whereas smoother temperature gradients decay more slowly.
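This decay can be illustrated numerically (a sketch; the grid, \(\alpha\), and initial temperature profile are assumed values): each Fourier mode is simply multiplied by \(e^{-\alpha k^2 t}\), so a high-wavenumber "spike" loses amplitude far faster than a smooth mode.

```python
import numpy as np

N, alpha, t = 256, 0.1, 1.0
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N) * N  # integer wavenumbers on a 2*pi-periodic grid

psi0 = np.sin(x) + np.sin(10 * x)  # one smooth mode, one "spiky" mode
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-alpha * k**2 * t)).real

# Amplitude remaining in mode m after time t (each started at amplitude 1)
amp = lambda m, f: 2 * np.abs(np.fft.fft(f)[m]) / N
print(amp(1, psi_t))   # ~ exp(-0.1) ≈ 0.905
print(amp(10, psi_t))  # ~ exp(-10) ≈ 4.5e-5
```

After \(t = 1\), the \(k = 10\) mode has effectively vanished while the \(k = 1\) mode has barely decayed.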