
1.6: Some Important Math Tricks


    Odd and Even Functions

We are already familiar with cases where the integral of a function equals zero.  Essentially all that is required is that the function encloses as much area above the horizontal axis as below it:

Figure 1.6.1 – A Function With a Zero Integral (equal areas \(A_1\) above and \(A_2\) below the axis, between \(x=a\) and \(x=b\))

If the areas \(A_1\) and \(A_2\) are equal, then:

    \[\int\limits_a^bf\left(x\right)dx=0\]

This fact allows us to evaluate certain integrals very quickly, if we know something about the symmetry of the function we are integrating over the interval of integration.  For simplicity, we will limit this discussion to symmetries about the vertical axis, but keep in mind that these can be shifted in either direction by a simple change of \(x\)-coordinates.  The simplest function that exhibits this is a line that passes through the origin, integrated between limits equidistant on both sides of the \(y\)-axis:

    Figure 1.6.2 – Line Through the Origin


This integral obviously vanishes, since the two triangles are congruent and their areas carry opposite signs, but we can also confirm it "the long way":

    \[\int\limits_{-a}^{+a}f\left(x\right)dx=\int\limits_{-a}^{+a}\alpha xdx=\left[\frac{1}{2}\alpha x^2\right]_{-a}^{+a}=0\]

It should be clear that we get the same result for functions like \(f\left(x\right)=\alpha x^3\), \(f\left(x\right)=\alpha x^{11}\), and indeed any function that is just an odd power of \(x\), because the antiderivative is an even power of \(x\), whose values at the two endpoints are equal, so their difference always vanishes:

\[\int\limits_{-a}^{+a}f\left(x\right)dx=\int\limits_{-a}^{+a}\alpha x^ndx=\left[\frac{1}{n+1}\alpha x^{n+1}\right]_{-a}^{+a}=0~,~~~n\text{ odd}\]

It should be equally clear that adding two such functions together results in another function with the same property, since the integral of each term in the sum vanishes.  Taking this to its extreme, it means that any function that can be expressed as a power series that includes only odd-powered terms will also have a vanishing integral over limits equidistant from \(x=0\). Such functions are called odd functions, for obvious reasons.
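This vanishing integral is easy to check numerically. Here is a minimal sketch (using Python with scipy, which is our own choice of tool, not part of the text) that confirms the claim for an arbitrary odd polynomial:

```python
# Numerical check: the integral of an odd function over limits
# symmetric about x = 0 should vanish.
from scipy.integrate import quad

a = 2.0
result, _ = quad(lambda x: x**3 - 4*x, -a, a)  # an arbitrary odd polynomial
print(result)  # ~0, up to floating-point error
```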

The counterparts of these functions are even functions, which are expressible as a power series of even powers of the argument.  [Functions that are odd are said to have odd parity, and even functions even parity. Functions that fall into neither category are said to not have definite parity.] Even functions are similar to odd functions in that the areas they define with the horizontal axis are equal on both sides of the vertical axis, but the difference is that these areas have the same sign, so they don't cancel. Knowing a function is even does help simplify the work a bit (not as much as just knowing the integral is zero, of course!), in that we can change one of the limits of integration to zero and multiply by two:

    \[\int\limits_{-a}^{+a}f_{even}\left(x\right)dx=2\int\limits_0^{+a}f_{even}\left(x\right)dx\]
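As a quick sanity check, here is a small numerical sketch of this halving trick (scipy again, our choice), with cosine standing in for a generic even function:

```python
from scipy.integrate import quad
import numpy as np

a = 1.7  # any symmetric limit will do
full, _ = quad(np.cos, -a, a)   # integral over [-a, +a]
half, _ = quad(np.cos, 0.0, a)  # integral over [0, +a]
print(full, 2 * half)           # the two values agree
```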

    The most common (and in our case, most useful) examples of odd and even functions are sine and cosine, respectively:

    \[\sin x=\frac{1}{1!}x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\dots~~~~~\cos x=1-\frac{1}{2!}x^2+\frac{1}{4!}x^4-\dots\]

The reader can verify for themselves that the integral of these functions over an interval symmetric about \(x=0\) gives the expected results for odd and even functions. As stated above, this odd/even property doesn't only apply across the origin – a function can be odd or even about the midpoint of any specific interval.  For example, the sine function is odd over the interval from 0 to \(2\pi\), but even over the interval from 0 to \(\pi\). The cosine function is precisely the opposite – even over the interval from 0 to \(2\pi\), and odd over the interval from 0 to \(\pi\).
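These shifted parities can also be checked numerically. In this sketch (our own illustration), sine is odd about the midpoint \(\theta=\pi\) of \([0, 2\pi]\), and cosine is odd about the midpoint \(\theta=\pi/2\) of \([0, \pi]\), so both integrals should vanish:

```python
from scipy.integrate import quad
import numpy as np

v1, _ = quad(np.sin, 0.0, 2 * np.pi)  # sine, odd about the midpoint pi
v2, _ = quad(np.cos, 0.0, np.pi)      # cosine, odd about the midpoint pi/2
print(v1, v2)  # both ~0
```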

There is one more property of these kinds of functions that is important to point out.  Whenever a new function is formed from the product of two odd or two even functions, the result is an even function.  To see this, consider multiplying all the terms in the two power series: the powers add in each product, and adding two odd or two even numbers results in an even number. This should make it equally clear that the product of one odd function and one even function results in an odd function.  So the integral of a product of two functions may look very complicated, but if one of the functions is odd and the other is even, and the limits of the integral are symmetrically placed across the vertical axis, then we know immediately that the integral vanishes.  If there are more than two functions multiplying each other, then the parity of any pair can first be determined, then the parity of that pair can be combined with the third function's parity, and so on.
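Here is a small numerical illustration of this product-parity rule (our own sketch): \(x\cos x\) is odd times even, hence odd, while \(x\sin x\) is odd times odd, hence even:

```python
from scipy.integrate import quad
import numpy as np

a = 3.0
odd_times_even, _ = quad(lambda x: x * np.cos(x), -a, a)  # odd product
odd_times_odd, _ = quad(lambda x: x * np.sin(x), -a, a)   # even product
print(odd_times_even)  # ~0: the odd product integrates to zero
print(odd_times_odd)   # nonzero: the even product need not vanish
```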

    Orthogonal Functions

While the details are slightly beyond the scope of this course, it's useful to know that the properties of, and operations involving, vectors that we first encountered in Physics 9HA extend far beyond entities that "have magnitude and direction". For example, similar properties can be attributed to polynomials and to functions in general (which are expressible as power series – polynomials with an infinite number of terms).  In particular, we can create a consistent definition of the "orthogonality" of two functions: we define two functions to be orthogonal when the integral of their product over all values vanishes (sometimes we limit the range of integration, as when the functions are periodic and the integral simply repeats itself over each period).

You can think of two functions as vectors, and the integral of their product as their dot product. This perspective is quite useful in many contexts, as well as being strictly accurate in a mathematical sense, though it is more abstract than what we have seen so far.  So clearly every odd function is orthogonal to every even function.  But having opposite parity across the origin is not the only way that two functions can be orthogonal.  We will see examples of this as we go through this course (as well as in future physics classes), but right now the most important example involves harmonic functions.  We already know that \(\sin k_1x\) is orthogonal to \(\cos k_2x\) for any values of \(k_1\) and \(k_2\), over any interval that is symmetric about the origin, thanks to their parities. But there is another example involving a pair of sine functions or a pair of cosine functions, though the arguments of these functions and the intervals of integration are restricted:

\[\int\limits_0^{2\pi}\cos\left(m\theta\right)\cos\left(n\theta\right)d\theta=\int\limits_0^{2\pi}\sin\left(m\theta\right)\sin\left(n\theta\right)d\theta=\left\{\begin{array}{ll} \pi & m=n \\ 0 & m\ne n\end{array}\right.~~~~~~~~m,~n\text{ positive integers}\]

    [Actually, the limits of integration do not need to be 0 to \(2\pi\) for this result to hold. Any limits that differ by \(2\pi\) will produce the same result.  That is, the integral just has to be over a single full cycle, starting at any phase.]

This remarkable fact indicates that these cosine and sine functions are orthogonal to each other over this interval of integration whenever the integers \(m\) and \(n\) are not equal. And we already know that the cosine and sine functions are orthogonal to each other (even when \(m=n\)) over this interval, thanks to their parities. This is another way in which the view of these functions as vectors differs from what we are used to: there are infinitely many of these vectors, all perpendicular to each other!
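Here is a brief numerical sketch (again our own choice of tool) of the orthogonality relation above, checking a few integer pairs:

```python
from scipy.integrate import quad
import numpy as np

for m in range(1, 4):
    for n in range(1, 4):
        val, _ = quad(lambda t: np.cos(m * t) * np.cos(n * t), 0.0, 2 * np.pi)
        print(m, n, round(val, 6))  # ~pi when m == n, ~0 otherwise
```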

    A Brief Foray Into Abstract Mathematics

    Let's take a moment to have a closer look at this notion of treating functions like vectors.  While we will use some of the notation that follows only sparingly in the chapters to come, in more advanced courses it becomes the standard, so it is a good idea to get exposed to it early.

    We saw in our study of 4-vectors in Physics 9HB that the vector itself is a well-defined object, independent of the coordinate system we use to describe it (and define its components). We know of a similar concept for functions – changing variables.  We can write out the function \(f\left(x\right)\) or we can make the substitution \(y=\alpha x+3\) and write out the function in terms of \(y\).  In some abstract (and technically imprecise) sense, we can think of \(f\) as the "vector" and \(f\left(x\right)\) as the components of that vector. One of the things that makes this description difficult to compare with vectors we are used to is that these "function vectors" have an infinite number of components – one for each value of \(x\).  But the notion of changing variables to get a whole new (infinite) set of components for the same vector is a reasonable one.

There is a notation, invented by a physicist in the context of quantum theory, that does a good job of expressing this functions-as-vectors idea. It is called Dirac bra-ket notation.  There is much more to this notation than will be covered here (most notably the role of complex numbers), but the basics are as follows:

    • A bracket "\(\left<~|~\right>\)" is broken into two halves, the left half "\(\left<~|\right.\)" known as a "bra", and the right half "\(\left.|~\right>\)" called a "ket".
• Whether a bra or a ket, each represents a vector, and the combined bracket is the dot product between the two vectors represented by the bra on the left and the ket on the right:

    \[\left<u|\right.\leftrightarrow \vec u~,~~\left.|v\right>\leftrightarrow \vec v~~~\Rightarrow~~~\left<u|v\right>\leftrightarrow \vec u\cdot\vec v\]

• We consider the function to be an abstract vector \(\left.|f\right>\), and the variable used in the function as a sort of unit vector \(\left<x|\right.\).  Taking the dot product of a vector with a unit vector yields the component of the vector along that unit vector's direction:

    \[\hat i\cdot\vec v = v_x~~~\leftrightarrow~~~\left<x|f\right>=f\left(x\right)\]

• The dot product between two vectors can be written as the sum of the products of their components in the same coordinate system. In the case of functions, there is a continuum of unit vectors, so the sum over components becomes an integral:

    \[\vec u\cdot\vec v =\left(\hat i\cdot\vec u\right)\left(\hat i\cdot\vec v\right)+\left(\hat j\cdot\vec u\right)\left(\hat j\cdot\vec v\right)+\left(\hat k\cdot\vec u\right)\left(\hat k\cdot\vec v\right)=u_xv_x+u_yv_y+u_zv_z~~~\leftrightarrow~~~\left<f|g\right>=\int\left<f|x\right>\left<x|g\right>dx=\int f\left(x\right)g\left(x\right)dx\]

    While the placements on the left or right side of the brackets shown above are technically important, this is intended as a basic introduction to this notation and the notion of functions as vectors, and not a formal exposition. The official name for this functions-as-vectors formalism is Hilbert space.
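To make the dot-product analogy concrete, here is a small sketch (our own construction, not the text's) that samples each "function vector" at many points and approximates the bracket \(\left<f|g\right>\) with a Riemann sum:

```python
import numpy as np

# Sample each "function vector" at many points on [0, 2*pi]; the bracket
# <f|g> becomes a Riemann sum approximating the integral of f(x) g(x).
x = np.linspace(0.0, 2 * np.pi, 100_000)
dx = x[1] - x[0]

def bracket(f, g):
    return np.sum(f(x) * g(x)) * dx

print(bracket(np.sin, np.cos))                   # ~0: orthogonal
print(bracket(np.sin, np.sin))                   # ~pi: "length squared" of sin
print(bracket(lambda t: np.sin(2 * t), np.sin))  # ~0: orthogonal harmonics
```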


    This page titled 1.6: Some Important Math Tricks is shared under a CC BY-SA license and was authored, remixed, and/or curated by Tom Weideman.
