
24.6: Tensors 101


    We see that the “inertia tensor” defined above as

    \begin{equation}
    I_{i k}=\sum_{n} m_{n}\left(x_{n l}^{2} \delta_{i k}-x_{n i} x_{n k}\right)
    \end{equation}

is a \(3 \times 3\) array of terms, called components, each of which (for this particular tensor) is built from products of vector components. Note that the repeated suffix \(l\) in \(x_{nl}^{2}\) is summed over, so that term is just the squared length of the position vector of the \(n\)th mass.
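As a concreteness check, here is a minimal NumPy sketch that builds the inertia tensor directly from the definition above; the masses and positions are made-up example values:

```python
import numpy as np

# Made-up example data: three point masses (arbitrary units).
masses = np.array([1.0, 2.0, 1.5])
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 0.0]])

# I_ik = sum_n m_n ( |x_n|^2 delta_ik - x_ni x_nk )
I = np.zeros((3, 3))
for m, x in zip(masses, positions):
    I += m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))

print(I)  # a symmetric 3x3 array of components
```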

Obviously, if we had chosen a different set of Cartesian axes with the same origin \(O\), the vector components would be different. We know how a vector transforms under such a change of axes, \((x, y, z) \rightarrow\left(x^{\prime}, y^{\prime}, z^{\prime}\right)\): for example, for a rotation through angle \(\theta\) about the \(z\) axis,

    \begin{equation}
    \left(\begin{array}{l}
    x^{\prime} \\
    y^{\prime} \\
    z^{\prime}
    \end{array}\right)=\left(\begin{array}{ccc}
    \cos \theta & \sin \theta & 0 \\
    -\sin \theta & \cos \theta & 0 \\
    0 & 0 & 1
    \end{array}\right)\left(\begin{array}{l}
    x \\
    y \\
    z
    \end{array}\right)
    \end{equation}

    This can be written more succinctly as

    \begin{equation}
    x_{i}^{\prime}=R_{i j} x_{j}, \text { or } \mathbf{x}^{\prime}=\mathbf{R} \mathbf{x}
    \end{equation}

    the bold font indicating a vector or matrix.
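For instance, here is a short NumPy sketch of this transformation, using an assumed rotation angle \(\theta = \pi/6\) and an arbitrary example vector:

```python
import numpy as np

theta = np.pi / 6  # assumed rotation angle about the z axis
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

x = np.array([1.0, 2.0, 3.0])  # components in the old axes
x_prime = R @ x                # x'_i = R_ij x_j: components in the new axes
print(x_prime)
```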

In fact, a transformation from any set of Cartesian axes to any other set having the same origin is a rotation about some axis. This can easily be seen by first rotating so that the \(x^{\prime}\) axis coincides with the \(x\) axis, then rotating about that axis. (Of course, both sets of axes must have the same handedness.) We’ll discuss these rotation transformations in more detail later; for now, we’ll just mention that the inverse of a rotation is given by the transpose matrix (check this for the example above),

    \begin{equation}
    \mathbf{R}^{\mathbf{T}}=\mathbf{R}^{-1}, \quad \text { or } \quad R_{j i}=R_{i j}^{-1}
    \end{equation}

so if the column vector transforms as

    \begin{equation}
    x_{i}^{\prime}=R_{i j} x_{j}, \text { or } \mathbf{x}^{\prime}=\mathbf{R} \mathbf{x}
    \end{equation}

the row vector transforms as

    \begin{equation}
    \mathbf{x}^{\prime \mathbf{T}}=\mathbf{x}^{\mathrm{T}} \mathbf{R}^{\mathbf{T}}=\mathbf{x}^{\mathbf{T}} \mathbf{R}^{-\mathbf{1}}
    \end{equation}

or, in suffix notation, \(x_{i}^{\prime}=R_{i j} x_{j}=x_{j} R_{j i}^{T}=x_{j} R_{j i}^{-1}\), and the length of the vector doesn’t change:

    \(x_{i}^{\prime} x_{i}^{\prime}=\mathbf{x}^{\prime \mathrm{T}} \mathbf{x}^{\prime}=\mathbf{x}^{\mathrm{T}} \mathbf{R}^{\mathrm{T}} \mathbf{R} \mathbf{x}=\mathbf{x}^{\mathrm{T}} \mathbf{R}^{-1} \mathbf{R} \mathbf{x}=\mathbf{x}^{\mathrm{T}} \mathbf{x}=x_{i} x_{i}\)
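A quick numerical check of both claims, again using the assumed example angle \(\theta = \pi/6\):

```python
import numpy as np

theta = np.pi / 6  # same assumed example rotation as above
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

# the transpose really is the inverse: R^T R = identity
print(np.allclose(R.T @ R, np.eye(3)))        # True
print(np.allclose(R.T, np.linalg.inv(R)))     # True

# and the squared length x_i x_i is unchanged by the rotation
x = np.array([1.0, 2.0, 3.0])
print(np.isclose(x @ x, (R @ x) @ (R @ x)))   # True
```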

It might be worth spelling out explicitly here that the transpose of a square matrix (and almost all our matrices are square) is found by swapping the rows and columns, or equivalently by reflecting each element in the main diagonal. The transpose of a vector written as a column is the same elements written as a row, and products of vectors follow the standard rule for matrix multiplication:

    \begin{equation}
    (A B)_{i j}=A_{i k} B_{k j}
    \end{equation}

    with the dummy suffix \(k\) summed over.
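As an aside, this index rule can be checked directly in NumPy, whose einsum function spells out exactly this contraction; the matrices below are arbitrary random examples:

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary random example matrices
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)_ij = A_ik B_kj, the dummy index k summed over
AB = np.einsum('ik,kj->ij', A, B)
print(np.allclose(AB, A @ B))    # True: same as ordinary matrix product
```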

    Thus,

    \begin{equation}
    \left(\begin{array}{l}
    a_{1} \\
    a_{2} \\
    a_{3}
    \end{array}\right)^{T}=\left(\begin{array}{lll}
    a_{1} & a_{2} & a_{3}
    \end{array}\right)
    \end{equation}

    and

    \begin{equation}
    \mathbf{a}^{\mathbf{T}} \mathbf{a}=\left(\begin{array}{lll}
    a_{1} & a_{2} & a_{3}
    \end{array}\right)\left(\begin{array}{l}
    a_{1} \\
    a_{2} \\
    a_{3}
    \end{array}\right)=a_{1}^{2}+a_{2}^{2}+a_{3}^{2}
    \end{equation}

    but

    \begin{equation}
    \mathbf{a} \mathbf{a}^{\mathbf{T}}=\left(\begin{array}{l}
    a_{1} \\
    a_{2} \\
    a_{3}
    \end{array}\right)\left(\begin{array}{lll}
    a_{1} & a_{2} & a_{3}
    \end{array}\right)=\left(\begin{array}{ccc}
    a_{1}^{2} & a_{1} a_{2} & a_{1} a_{3} \\
    a_{1} a_{2} & a_{2}^{2} & a_{2} a_{3} \\
    a_{1} a_{3} & a_{2} a_{3} & a_{3}^{2}
    \end{array}\right)
    \end{equation}
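In NumPy terms (with an arbitrary example vector), the two products can be sketched as follows:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # arbitrary example vector

inner = a @ a           # a^T a: a single number, the squared length
outer = np.outer(a, a)  # a a^T: a 3x3 matrix with elements a_i a_j
print(inner)            # 14.0
print(outer.shape)      # (3, 3)
```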

This will perhaps remind you of the Hilbert space vectors in quantum mechanics: the transposed vector above is analogous to the bra, the initial column vector being the ket. One difference from quantum mechanics is that all our vectors here are real; if that were not the case, it would be natural to add complex conjugation to the transposition, giving \(\mathbf{a}^{\dagger} \mathbf{a}=\left|a_{1}\right|^{2}+\left|a_{2}\right|^{2}+\left|a_{3}\right|^{2}\), the length squared of the vector.

The difference shown above between \(\mathbf{a}^{\mathbf{T}} \mathbf{a}\) and \(\mathbf{a} \mathbf{a}^{\mathbf{T}}\) is exactly parallel to the difference between \(\langle a \mid a\rangle\) and \(|a\rangle\langle a|\) in quantum mechanics: the first is a number, the squared norm of the vector; the second is an operator, a projection onto the state \(|a\rangle\).
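To make the analogy concrete, here is a minimal NumPy sketch (again with an arbitrary made-up vector): normalizing \(\mathbf{a} \mathbf{a}^{\mathbf{T}}\) by \(\mathbf{a}^{\mathbf{T}} \mathbf{a}\) gives an operator that, like \(|a\rangle\langle a|\) for a normalized state, satisfies \(P^{2}=P\).

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])     # arbitrary example vector

# P = a a^T / (a^T a), the analogue of |a><a| for a normalized state
P = np.outer(a, a) / (a @ a)

print(np.allclose(P @ P, P))      # True: projecting twice changes nothing
print(np.allclose(P @ a, a))      # True: a itself is left fixed
```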


