# 19.10: Appendix - Waveform analysis


## Harmonic Waveform Decomposition

The response of any linear system subject to a time-dependent forcing function \(F(t)\) can be expressed as a linear superposition of the frequency-dependent solutions for the individual harmonic components \(a(\omega )\) of the forcing function. Similarly, the response of any linear system subject to a spatially-dependent forcing function \(F(x)\) can be expressed as a linear superposition of the wavenumber-dependent solutions for the individual harmonic components \(a(k_x)\) of the forcing function. Fourier analysis provides the mathematical procedure for transforming between a waveform and its harmonic content, that is, \(F( t) \Leftrightarrow a(\omega )\), or \(F(x) \Leftrightarrow a(k_x)\). Fourier’s theorem states that any arbitrary forcing function \(F( t)\) can be decomposed into a sum of harmonic terms. For example, for a time-dependent periodic forcing function the decomposition can be a cosine series of the form

\[F( t) = \sum^{\infty}_{n=1} \alpha_n \cos(n\omega_0 t + \phi_n) \label{I.1}\]

where \(\omega_0\) is the lowest (fundamental) frequency solution. For an aperiodic function a cosine decomposition can be of the form

\[F( t) = \int^{\infty}_0 \alpha (\omega ) \cos(\omega t + \phi (\omega ))d\omega \label{I.2}\]

The complementary representations \(F( t) \Leftrightarrow a(\omega )\) and \(F(x) \Leftrightarrow a(k_x)\) are equivalent descriptions of the harmonic content that can be used to describe signals and waves. The following two sections give an introduction to Fourier analysis.

### Periodic systems and the Fourier series

Discrete solutions occur for systems when periodic boundary conditions exist. The response of periodic systems can be described in either the time versus angular frequency domains, or equivalently, the spatial coordinate \(x\) versus the corresponding wave number \(k_x\). For periodic systems this decomposition leads to the Fourier series where a generalized phase coordinate \(\phi\) can be used to represent either the time or spatial coordinates, that is, with \(\phi = \omega_0 t\) or \(\phi = k_xx\) respectively. The Fourier series relates the two representations of the discrete wave solutions for such periodic systems.

Fourier’s theorem states that for a general periodic system any arbitrary forcing function \(F(\phi)\) can be decomposed into a sum of sinusoidal or cosinusoidal terms. The summation can be represented by three equivalent series expansions given below, where \(\phi = \omega_0 t\) or \(\phi = \mathbf{k}_0\cdot \mathbf{r}\), and where \(\omega_0, \mathbf{k}_0\) are the fundamental angular frequency and fundamental wave number respectively.

\[f (\phi) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [a_n \cos (n\phi) + b_n \sin (n\phi)] \label{I.3}\]

\[f (\phi) = \frac{a_0}{2} + \sum^{\infty}_{n=1} c_n \cos (n\phi + \varphi_n) \label{I.4}\]

\[f (\phi) = \frac{a_0}{2} + \sum^{\infty}_{n=1} d_n \sin (n\phi + \theta_n) \label{I.5}\]

where \(n\) is an integer, and \(\varphi_n, \theta_n\) are phase shifts fit to the initial conditions.

The normal modes of a discrete system form a complete set of solutions that satisfy the following orthogonality relation

\[\int^{2\pi}_0 f_n (\phi) f_m (\phi) d\phi = c_n \delta_{mn} \label{I.6}\]

where \(\delta_{mn}\) is the Kronecker delta symbol defined in equation \((9.2.10)\). Orthogonality can be used to determine the coefficients of Equation \ref{I.3} to be

\[a_0 = \frac{1}{ \pi} \int^{+\pi}_{ −\pi} f (\phi) d\phi \label{I.7}\]

\[a_n = \frac{1}{ \pi} \int^{+\pi}_{ −\pi} f (\phi) \cos (n\phi) d\phi \label{I.8}\]

\[b_n = \frac{1}{ \pi} \int^{+\pi}_{ −\pi} f (\phi) \sin (n\phi) d\phi \label{I.9}\]
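These coefficient integrals are easy to check numerically. The Python sketch below (function names, grid size, and the square-wave test case are illustrative choices, not from the text) evaluates Equations \ref{I.7}-\ref{I.9} by the rectangle rule for a unit square wave, recovering the well-known result \(b_n = \frac{4}{n\pi}\) for odd \(n\) with all \(a_n\) vanishing.

```python
import numpy as np

# Illustrative sketch: evaluate the coefficient integrals (I.7)-(I.9)
# numerically for a unit square wave f(phi) = sign(sin phi).
# All names and grid sizes are arbitrary choices, not from the text.

def fourier_coefficients(f, n_max, n_grid=20000):
    """Approximate a_n, b_n by the rectangle rule over one period."""
    phi = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    dphi = 2 * np.pi / n_grid
    y = f(phi)
    a = np.array([np.sum(y * np.cos(n * phi)) * dphi / np.pi
                  for n in range(n_max + 1)])
    b = np.array([np.sum(y * np.sin(n * phi)) * dphi / np.pi
                  for n in range(n_max + 1)])
    return a, b

a, b = fourier_coefficients(lambda phi: np.sign(np.sin(phi)), 5)

# A square wave is odd, so the a_n vanish and b_n = 4/(n pi) for odd n.
print(np.round(b[1], 4))   # close to 4/pi ~ 1.2732
```

Because the integrand is periodic, the simple rectangle rule over one full period converges rapidly here.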

Similarly, the coefficients for Equations \ref{I.4} and \ref{I.5} are related to the above coefficients by

\[c^2_n = d^2_n = a^2_n + b^2_n \nonumber\]

Instead of the simple trigonometric form used in equations (\ref{I.3} − \ref{I.5}) the cosine and sine functions can be expanded into the exponential form where

\[\cos \phi = \frac{1}{ 2} ( e^{i\phi} + e^{-i\phi}) \label{I.10} \\ \sin \phi = \frac{−i}{ 2} ( e^{i\phi} − e^{-i\phi})\]

then Equation \ref{I.3} becomes

\[f (\phi) = \sum^{\infty}_{ n=−\infty} g_n e^{in\phi} \label{I.11}\]

where \(n\) is any integer and, from the orthogonality, the Fourier coefficients are given by

\[g_n = \frac{1}{ 2\pi} \int^{+\pi}_{ −\pi} f (\phi) e^{−in\phi} d\phi \label{I.12}\]

These coefficients are related to the cosine plus sine series amplitudes by

\[g_n = \frac{1}{ 2} (a_n − ib_n) \tag{when \(n\) is positive}\]

\[g_n = \frac{1}{ 2} (a_n + ib_n) \tag{when \(n\) is negative}\]

These results show that the coefficients of the exponential series are in general *complex*, and that they occur in conjugate pairs (that is, the imaginary part of the coefficient \(g_n\) is equal but opposite in sign to that of the coefficient \(g_{−n}\)). Although the introduction of complex coefficients may appear unusual, it should be remembered that the real part of a pair of coefficients denotes the magnitude of the cosine wave of the relevant frequency, and the imaginary part denotes the magnitude of the sine wave. If a particular pair of coefficients \(g_n\) and \(g_{−n}\) are real, then the component at the frequency \(n\omega_0\) is simply a cosine; if \(g_n\) and \(g_{−n}\) are purely imaginary, the component is just a sine; and if, as is the general case, \(g_n\) and \(g_{−n}\) are complex, both cosine and sine terms are present.
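These relations can be verified numerically. The Python sketch below (the test waveform and names are illustrative) evaluates the coefficient integral \ref{I.12} by the rectangle rule for a real waveform with \(a_1 = 1\) and \(b_2 = \frac{1}{2}\), and checks both \(g_n = \frac{1}{2}(a_n − ib_n)\) and the conjugate-pair property.

```python
import numpy as np

# Illustrative sketch: for the real waveform f = cos(phi) + 0.5 sin(2 phi)
# (so a_1 = 1, b_2 = 1/2), evaluate g_n from Eq. (I.12) numerically and
# verify g_1 = 1/2, g_2 = -i/4, and g_{-n} = conj(g_n).

phi = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dphi = 2 * np.pi / len(phi)
f = np.cos(phi) + 0.5 * np.sin(2 * phi)

def g(n):
    """Exponential Fourier coefficient, Eq. (I.12), by the rectangle rule."""
    return np.sum(f * np.exp(-1j * n * phi)) * dphi / (2 * np.pi)

print(np.round(g(1), 6))   # analytic value (a_1 - i b_1)/2 = 1/2
print(np.round(g(2), 6))   # analytic value (a_2 - i b_2)/2 = -i/4
```

For a trigonometric polynomial of low order the discrete sum is exact up to rounding, by the orthogonality of the sampled exponentials.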

The use of the exponential form of the Fourier series gives rise to the notion of ‘negative frequency’. Of course, \(f ( t) = a_n \cos \omega_n t\) is a wave of a single frequency \(\omega_n = n\omega_0\) radians/second, and may be represented by a single line of height \(a_n\) in a normal spectral diagram. However, using the exponential form of the Fourier series results in both positive and negative \(\omega\) components.

The coexistence of both negative and positive angular frequencies \(\pm \omega\) can be understood by consideration of the Argand diagram where the real component is plotted along the \(x\)-axis and the imaginary component along the \(y\)-axis. The function \(g_ne^{+i\omega t}\) represents a vector of length \(g_n\) that rotates with an angular velocity \(\omega\) in a positive direction, that is counterclockwise, whereas \(g_ne^{−i\omega t}\) represents the vector rotating in a negative direction, that is clockwise. Thus the sum of the two rotating vectors, according to Equation \ref{I.10}, leads to cancellation of the opposite components along the imaginary \(y\) axis and addition of the two \(g_n \cos \omega t\) real components along the \(x\) axis. Subtraction leads to cancellation of the real \(x\) components and addition of the imaginary \(y\)-axis components.

### Aperiodic systems and the Fourier Transform

The Fourier transform (also called the Fourier integral) does for the non-repetitive signal waveform what the Fourier series does for the repetitive signal. It was shown that the line spectrum of a recurrent periodic pulse waveform is modified as the pulse duration decreases, assuming the period of the waveform (and hence its fundamental component) remains unchanged. Suppose now that the duration of the pulses remain fixed but the separation between them increases, giving rise to an increasing period. In the limit, only a single rectangular pulse remains, its neighbors having moved away on either side towards \(\pm \infty \). In this case, the fundamental frequency \(\omega_0\) tends towards zero and the harmonics become extremely closely spaced and of vanishingly small amplitudes, that is, the system approximates a continuous spectrum.

Mathematically, this situation may be expressed by modifications to the exponential form of the Fourier series already derived. Setting the phase \(\phi = \omega_0 t\) in Equation \ref{I.12} gives

\[g_n = \frac{\omega_0 }{2\pi} \int^{+\frac{\tau }{2}}_{ − \frac{\tau}{ 2}} f ( t) e^{−in\omega_0 t} d t = \frac{1}{ \tau} \int^{+\frac{\tau }{2}}_{ − \frac{\tau}{ 2}} f ( t) e^{−in\omega_0 t} d t \label{I.13}\]

where \(\tau\) is the period of the periodic force. Let \(G (\omega ) = \tau g_n\) and \(\omega = n\omega_0\), and take the limit \(\tau \rightarrow \infty \); then Equation \ref{I.13} can be written as

\[G (\omega ) = \int^{+\infty}_{ −\infty} f ( t) e^{−i\omega t}d t \label{I.14}\]

Similarly, taking the same limit \(\tau \rightarrow \infty\) gives \(\omega_0 = \frac{2\pi}{ \tau} \rightarrow d\omega\), and Equation \ref{I.11} becomes

\[f ( t) = \sum^{\infty}_{ n=−\infty} \frac{G (\omega )}{ \tau} e^{in\omega_0 t} = \sum^{\infty}_{ n=−\infty} G (\omega ) \frac{\omega_0 }{2\pi} e^{i\omega t} = \frac{1}{ 2\pi} \int^{ +\infty}_{ −\infty} G (\omega ) e^{i\omega t} d\omega \label{I.15}\]

Equation \ref{I.15} shows how a non-repetitive time-domain wave form is related to its continuous spectrum. These are known as Fourier integrals or Fourier transforms. They are of central importance for signal processing. For convenience the transforms often are written in the operator formalism using the \(\mathcal{F}\) symbol in the form

\[f ( t) = \frac{1}{ 2\pi } \int^{ +\infty}_{ −\infty} G (\omega ) e^{i\omega t} d\omega \equiv \mathcal{F}^{−1} \left[ \frac{1}{ 2\pi} G(\omega ) \right] \label{I.16}\]

\[G (\omega ) = \int^{ +\infty}_{ −\infty} f ( t) e^{−i\omega t} d t \equiv \mathcal{F}f( t) \label{I.17}\]

It is very important to grasp the significance of these two equations. The second tells us that the Fourier transform \(G(\omega )\) of the waveform \(f( t)\) is continuously distributed over the frequency range \(−\infty < \omega < +\infty \), whereas the first shows how, in effect, the waveform may be synthesized from an infinite set of exponential functions of the form \(e^{i\omega t}\), each weighted by the relevant value of \(G(\omega )\). It is crucial to realize that this transformation can go either way equally, that is, from \(G(\omega )\) to \(f ( t)\) or vice versa.^{1}
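The transform pair can be checked by brute-force quadrature. The sketch below (grid limits and names are illustrative choices) evaluates Equations \ref{I.16} and \ref{I.17} for a Gaussian \(f(t) = e^{−t^2/2}\), whose transform is known to be \(G(\omega ) = \sqrt{2\pi}\, e^{−\omega^2/2}\), and then inverts back to recover \(f\) at a sample time.

```python
import numpy as np

# Illustrative sketch: check the pair (I.16)-(I.17) by direct quadrature
# for f(t) = exp(-t^2/2). Grid limits are chosen wide enough that the
# truncated integrals approximate the infinite ones well.

t = np.linspace(-20, 20, 8000, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)

omega = np.linspace(-8, 8, 801)
# Forward transform, Eq. (I.17): G(w) = int f(t) exp(-i w t) dt
G = np.array([np.sum(f * np.exp(-1j * w * t)) * dt for w in omega])

# Inverse transform, Eq. (I.16), evaluated back at t0 = 1
dw = omega[1] - omega[0]
f_back = np.sum(G * np.exp(1j * omega * 1.0)) * dw / (2 * np.pi)

print(np.round(abs(f_back), 4))   # close to exp(-1/2) ~ 0.6065
```

The Gaussian decays so fast that truncating both integrals at finite limits introduces negligible error.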

Example \(\PageIndex{1}\): Fourier transform of a single isolated square pulse

Consider a single isolated square pulse of width \(\tau\) that is described by the rectangular function \(\Pi\) defined as

\[\Pi( t) = \begin{cases} 1 & | t|< \frac{\tau}{ 2} \\ 0 & | t| > \frac{\tau}{2} \end{cases}\nonumber\]

That is, assume that the amplitude of the pulse is unity between \(−\frac{\tau }{2} \leq t \leq \frac{\tau }{2}\). Then the Fourier transform

\[G (\omega ) = \int^{+\frac{\tau}{2}}_{ −\frac{\tau}{2}} e^{−i\omega t} d t = \tau \left(\frac{\sin \frac{\omega \tau}{ 2}}{ \frac{\omega \tau}{ 2 }}\right) \nonumber\]

which is an unnormalized \(\mathrm{sinc}\left(\frac{\omega \tau}{2}\right)\) function. Note that a pulse of half-width \(\Delta t = \frac{\tau }{2}\) leads to a frequency envelope that has its first zeros at \(\Delta\omega = \pm \frac{2\pi}{ \tau}\). Thus the product of these widths, \(\Delta t \cdot \Delta\omega = \pi\), is independent of the width of the pulse, that is, \(\Delta\omega = \frac{\pi}{ \Delta t}\), which is an example of the uncertainty principle that is applicable to all forms of wave motion.
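This result can be confirmed numerically. In the Python sketch below (the value of \(\tau\), the grid size, and the sampled frequencies are arbitrary choices) a midpoint-rule integral over the pulse reproduces \(\tau \sin(\frac{\omega\tau}{2})/(\frac{\omega\tau}{2})\), including the first zero at \(\omega = \frac{2\pi}{\tau}\).

```python
import numpy as np

# Illustrative sketch: integrate exp(-i w t) over |t| < tau/2 by the
# midpoint rule and compare with tau * sin(w tau/2)/(w tau/2).
# tau = 2 and the sample frequencies are arbitrary test values.

tau = 2.0
n = 100000
dt = tau / n
t = -tau / 2 + (np.arange(n) + 0.5) * dt        # midpoint samples of the pulse

def G(w):
    """Numerical Fourier transform of the unit square pulse at frequency w."""
    return np.sum(np.exp(-1j * w * t)) * dt

for w in (0.5, 1.0, 3.0, 2 * np.pi / tau):      # last value: the first zero
    x = w * tau / 2
    print(np.round(G(w).real - tau * np.sin(x) / x, 6))   # ~ 0 in every case
```

The imaginary part of each \(G(\omega)\) vanishes by symmetry, since the pulse is even in \(t\).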

Example \(\PageIndex{2}\): Fourier transform of the Dirac delta function

The Dirac delta function, \(\delta ( t − t^{\prime} )\), is a pulse of extremely short duration and unit area at \(t = t^{\prime}\) and is zero at all other times. That is,

\[1 = \int^{ +\infty}_{ −\infty} \delta ( t − t^{\prime} ) d t \nonumber\]

The Dirac function, which is sometimes referred to as the impulse function, has many important applications to physics and signal processing. For example, a shell shot from a gun is given a mechanical impulse imparting a certain momentum to the shell in a very short time. Other things being equal, one is interested only in the impulse imparted to the shell, that is, the time integral of the force accelerating the shell in the gun, rather than the details of the time dependence of the force. Since the force acts for a very short time the Dirac delta function can be employed in such problems.

As described in section \(3.11\) and **appendix J**, the Dirac delta function is employed in signal processing when signals are sampled for short time intervals. The Fourier transform of the delta function is needed for discussion of sampling of signals

\[G (\omega ) = \int^{ +\infty}_{ −\infty} \delta ( t − t^{\prime} ) e^{−i\omega t} d t = e^{−i\omega t^{\prime}} \nonumber\]

Since \(e^{−i\omega t}\) is essentially constant over the infinitesimal duration of the \(\delta ( t − t^{\prime} )\) function, and the time integral of the \(\delta\) function is unity, the transform \(e^{−i\omega t^{\prime}}\) has unit magnitude for any value of \(\omega\) and a phase shift of \(−\omega t^{\prime}\) radians. For \(t^{\prime} = 0\) the phase shift is zero and thus the Fourier transform of the Dirac \(\delta ( t)\) function is \(G(\omega )=1\). That is, the spectrum is a uniform white spectrum for all values of \(\omega \).
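This can be illustrated numerically by replacing \(\delta(t − t^{\prime})\) with a narrow unit-area rectangular pulse. In the sketch below (the pulse width, impulse time, and test frequencies are arbitrary choices) the transform has unit magnitude and phase \(−\omega t^{\prime}\) at every sampled \(\omega\).

```python
import numpy as np

# Illustrative sketch: model delta(t - tp) as a unit-area rectangular pulse
# of width 1e-4 centered on tp, and check that its Fourier transform has
# magnitude ~1 and phase -w*tp. All numerical values are arbitrary.

tp, width, n = 0.3, 1e-4, 2000
dt = width / n
t = tp - width / 2 + (np.arange(n) + 0.5) * dt   # midpoint samples of the pulse
f = np.full(n, 1.0 / width)                      # unit-area pulse

for w in (0.0, 1.0, 5.0):
    Gw = np.sum(f * np.exp(-1j * w * t)) * dt
    print(np.round(abs(Gw), 6), np.round(np.angle(Gw) + w * tp, 6))
```

As the pulse width shrinks, the residual \(\mathrm{sinc}\)-envelope correction to the unit magnitude vanishes, recovering the flat white spectrum.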

## Time-sampled waveform analysis

An alternative approach for analyzing periodic signals, complementary to the Fourier-analysis harmonic decomposition, is time-sampled (discrete-sample) waveform analysis, where the signal amplitude is measured repetitively at regular time intervals in a time-ordered sequence; that is, a sequence of samples of the instantaneous delta-function amplitudes is recorded. Typically an analog-to-digital converter is used to digitize the amplitude of each measured sample and the digital numbers are recorded; this process is called **digital signal processing**.

The general principles are best explained by first considering the response of a linear system to a step function impulse, followed by a square impulse, and leading to the response of a \(\delta \)-function impulsive driving force.

### Delta-function impulse response

Consider the damped oscillator equation

\[\ddot{x} + \Gamma \dot{x} + \omega^2_0x = \frac{F ( t)}{ m} \label{I.18}\]

and assume that a step function is applied at time \(t = 0\). That is;

\[\begin{align} \frac{F ( t)}{ m} = 0 && t < 0 && \frac{F ( t)}{ m } = a && t> 0 \label{I.19} \end{align} \]

where \(a\) is a constant. The initial conditions are that \(x(0) = \dot{x}(0) = 0\).

The transient or complementary solution is the solution of the linearly-damped harmonic oscillator

\[\ddot{x} + \Gamma \dot{x} + \omega^2_0x = 0 \label{I.20}\]

This is independent of the driving force and the solution is given in the chapter \(3.5\) discussion of the linearly-damped harmonic oscillator.

The particular, steady-state, solution is easy to obtain just by inspection since the force is a constant, that is, the particular solution is

\[\begin{aligned} x_S = \frac{a}{ \omega^2_0} && t > 0 && x_S = 0 && t < 0 \end{aligned}\]

Taking the sum of the transient and particular solutions, using the initial conditions, gives the final solution to be

\[x( t) = \frac{a}{ \omega^2_0} \left[ 1 − e^{− \frac{\Gamma}{2} t} \cos \omega_1 t − \frac{\Gamma e^{− \frac{\Gamma}{2} t}}{ 2\omega_1} \sin \omega_1 t \right] \label{I.21}\]

where \(\omega_1 \equiv \sqrt{ \omega^2_0 − ( \frac{\Gamma}{2} )^2} \). This functional form is shown in Figure \(\PageIndex{1a}\). Note that the amplitude of the transient response equals \(−\frac{a}{\omega^2_0}\) at \(t = 0\) to cancel the particular solution when it jumps to \(+\frac{a}{\omega^2_0}\). The oscillatory behavior then is just that of the transient response.
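Equation \ref{I.21} can be checked against a direct numerical integration of Equation \ref{I.18} with the step force \ref{I.19}. The Python sketch below uses a hand-rolled classical Runge-Kutta integrator; the underdamped parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch: integrate x'' + Gamma x' + w0^2 x = a (t > 0) with
# x(0) = x'(0) = 0 by classical 4th-order Runge-Kutta and compare with
# Eq. (I.21). Gamma, w0, a are arbitrary underdamped test values.

Gamma, w0, a = 0.4, 2.0, 1.0
w1 = np.sqrt(w0**2 - (Gamma / 2)**2)

def x_analytic(t):
    """Step response, Eq. (I.21)."""
    e = np.exp(-Gamma * t / 2)
    return (a / w0**2) * (1 - e * np.cos(w1 * t)
                          - (Gamma * e / (2 * w1)) * np.sin(w1 * t))

def rk4(t_end, n_steps=4000):
    """Integrate (x, v) from rest at t = 0 to t_end; returns x(t_end)."""
    h = t_end / n_steps
    y = np.array([0.0, 0.0])                     # state vector (x, v)
    rhs = lambda y: np.array([y[1], a - Gamma * y[1] - w0**2 * y[0]])
    for _ in range(n_steps):
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0]

print(np.round(rk4(5.0) - x_analytic(5.0), 8))   # ~ 0
```

The closed-form solution also satisfies the initial conditions by inspection: both the displacement and its derivative vanish at \(t = 0\).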

A square impulse can be generated by the superposition of two opposite-sign step functions separated by a time \(\tau\), as shown in Figure \(\PageIndex{1b}\).

The square impulse can be taken to the limit where the width \(\tau\) is negligibly small relative to the response times of the system. It can be shown that letting \(\tau \rightarrow 0\), but keeping the magnitude of the total impulse \(P = a\tau\) finite for the impulse at time \(t_0\), leads to the solution for the \(\delta \)-function impulse occurring at \(t_0\)

\[x( t) = \frac{P}{ \omega_1} e^{− \frac{\Gamma}{2} ( t− t_0)} \sin \omega_1 ( t − t_0) \quad t> t_0 \label{I.22}\]

This response to a delta function impulse is shown in Figure \(\PageIndex{1c}\) for the case where \(t_0 = 0\). An example is the response when the hammer strikes a piano string at \(t = 0\).

### Green’s function waveform decomposition

The response of the linearly-damped linear oscillator to a delta-function impulse, derived above, can be used to exploit the powerful Green’s function technique for decomposing any general forcing function. That is, if the driven system is linear, then the principle of superposition applies, allowing the inhomogeneous part of the differential equation to be expressed as a sum of individual delta functions. That is,

\[\ddot{x} + \Gamma \dot{x} + \omega^2_0 x = \sum^{\infty}_{ n=−\infty} \frac{F_n ( t)}{ m} = \sum^{\infty}_{ n=−\infty} I_n ( t) \label{I.23}\]

As illustrated in Figure \(\PageIndex{2}\), discrete-time waveform analysis involves repeatedly sampling the instantaneous amplitude in a regular and repetitive sequence of \(\delta \)-function impulses. Since the superposition principle applies for this linear system, the waveform can be described by an ordered series of delta-function impulses, where \(t^{\prime}\) is the time of an impulse. Integrating over all the \(\delta \)-function responses that occurred at times \(t^{\prime} \) prior to the time of interest \(t\) leads to

\[x ( t) = \int^t_{ −\infty} \frac{F ( t^{\prime} )}{ m\omega_1} e^{− \frac{\Gamma}{2} ( t− t^{\prime} )} \sin \omega_1 ( t − t^{\prime} ) d t^{\prime} \quad t \geq t^{\prime} \label{I.24}\]

The Green’s function \(G ( t − t^{\prime} )\) is defined by

\[G( t − t^{\prime} ) = \frac{1}{ m\omega_1} e^{− \frac{\Gamma}{2} ( t− t^{\prime} )} \sin \omega_1 ( t − t^{\prime} ) \quad t \geq t^{\prime} \label{I.25} \\ = 0 \quad t< t^{\prime} \]

Superposition allows the summed response of the system to be written in an integral form

\[x( t) = \int^t_{ −\infty} F( t^{\prime} )G( t − t^{\prime} )d t^{\prime} \label{I.26}\]

which gives the final time dependence of the forced system. This repetitive time-sampling approach avoids the need for Fourier analysis. Note that the Green’s function \(G ( t − t^{\prime} )\) implicitly includes the frequency of the free undamped linear oscillator \(\omega_0\), that of the free damped linear oscillator \(\omega_1 \equiv \sqrt{\omega^2_0 − ( \frac{\Gamma}{2} )^2}\), as well as the damping coefficient \(\Gamma \). Access to fast microcomputers coupled to fast digital-sampling techniques has made digital signal sampling the pre-eminent technique for recording of audio, video, and detector signals.
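As a closing consistency check, Equation \ref{I.26} can be evaluated numerically for the step force \(\frac{F(t)}{m} = a\) switched on at \(t = 0\); the convolution must then reproduce the closed-form step response \ref{I.21} derived earlier in this appendix. The parameter values in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch: evaluate x(t) = int F(t') G(t - t') dt' for a step
# force F(t')/m = a switched on at t' = 0 and compare with the closed-form
# step response of Eq. (I.21). Parameters are arbitrary test values.

Gamma, w0, a = 0.4, 2.0, 1.0
w1 = np.sqrt(w0**2 - (Gamma / 2)**2)

def green(u):
    """Green's function per unit mass, Eq. (I.25), for u = t - t' >= 0."""
    return np.exp(-Gamma * u / 2) * np.sin(w1 * u) / w1

def x_convolved(t, n=20000):
    """Trapezoidal evaluation of Eq. (I.26) over 0 <= t' <= t."""
    tp = np.linspace(0.0, t, n + 1)
    weights = np.full(n + 1, tp[1] - tp[0])
    weights[0] = weights[-1] = weights[0] / 2    # trapezoid end weights
    return np.sum(a * green(t - tp) * weights)

def x_step(t):
    """Closed-form step response, Eq. (I.21)."""
    e = np.exp(-Gamma * t / 2)
    return (a / w0**2) * (1 - e * np.cos(w1 * t)
                          - (Gamma * e / (2 * w1)) * np.sin(w1 * t))

print(np.round(x_convolved(3.0) - x_step(3.0), 6))   # ~ 0
```

The agreement confirms that the Green's function superposition integral carries the same information as the transient-plus-particular solution obtained earlier.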

## References

^{1}The only asymmetry in the Fourier transform relations comes from the \(2\pi\) factor originating from the fact that by convention physicists use the angular frequency \(\omega = 2\pi\nu\) rather than the frequency \(\nu\). In order to restore symmetry many papers use the factor \(\frac{1}{\sqrt{ 2\pi}}\) in both relations rather than using the \(\frac{1}{ 2\pi}\) factor in Equation \ref{I.16} and unity in Equation \ref{I.17}.