
19.6: Appendix - Tensor Algebra


    Tensors

    Mathematically, scalars and vectors are the first two members of a hierarchy of entities, called tensors, that behave under coordinate transformations as described in appendix \(19.4\). Tensor notation provides a compact and elegant way to handle transformations in physics.

    A scalar is a rank-0 tensor with a single component that is invariant under a change of the coordinate system.

    \[\phi (x^{\prime} y^{\prime} z^{\prime} ) = \phi (xyz) \label{E.1}\]

    A vector is a rank-1 tensor with three components that transform under rotation according to the matrix relation

    \[\mathbf{x}^{\prime} = \boldsymbol{\lambda} \cdot \mathbf{x} \label{E.2}\]

    where \(\boldsymbol{\lambda}\) is the rotation matrix. Equation \ref{E.2} can be written in the suffix form as

    \[x^{\prime}_i = \sum^3_{j=1} \lambda_{ij} x_j \label{E.3}\]
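    Equations \ref{E.2} and \ref{E.3} are easy to verify numerically. The following minimal sketch, in Python with NumPy, evaluates the suffix-form sum of Equation \ref{E.3} explicitly and checks it against the matrix product of Equation \ref{E.2}; the \(30^{\circ}\) rotation about the \(z\) axis and the test vector are illustrative choices, not taken from the text.

        import numpy as np

        # Rotation by 30 degrees about the z-axis (illustrative choice)
        theta = np.radians(30)
        lam = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                        [-np.sin(theta), np.cos(theta), 0.0],
                        [ 0.0,           0.0,           1.0]])

        x = np.array([1.0, 2.0, 3.0])   # an arbitrary test vector

        # Suffix form (Equation E.3): x'_i = sum_j lambda_ij x_j
        x_prime = np.array([sum(lam[i, j] * x[j] for j in range(3))
                            for i in range(3)])

        # Matrix form (Equation E.2): x' = lambda . x
        assert np.allclose(x_prime, lam @ x)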

    The above definitions of scalars and vectors can be subsumed into a class of entities called tensors of rank \(n\) that have \(3^n\) components. A scalar is a tensor of rank \(r = 0\), with only \(3^0 = 1\) component, whereas a vector has rank \(r = 1\), that is, the vector \(\mathbf{x}\) has one suffix \(i\) and \(3^1 = 3\) components.

    A second-order tensor \(T_{ij}\) has rank \(r = 2\) with two suffixes, that is, it has \(3^2 = 9\) components that transform under rotation as

    \[T^{\prime}_{ij} = \sum^3_{k=1} \sum^3_{l=1} \lambda_{ik}\lambda_{jl}T_{kl} \label{E.4}\]

    For second-order tensors, the transformation formula given by Equation \ref{E.4} can be written more compactly using matrices. Thus the second-order tensor can be written as a \(3 \times 3\) matrix

    \[\mathbf{T} \equiv \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix} \label{E.5}\]

    The rotational transformation given in Equation \ref{E.4} can be written in the form

    \[T^{\prime}_{ij} = \sum^3_{l=1} \left( \sum^3_{k=1} \lambda_{ik}T_{kl}\right) \lambda_{jl} = \sum^3_{l=1} \left( \sum^3_{k=1} \lambda_{ik}T_{kl}\right) \lambda^T_{lj} \label{E.6}\]

    where \(\lambda^T_{lj}\) are the matrix elements of the transposed matrix \(\boldsymbol{\lambda}^T\). The summations in \ref{E.6} can be expressed in both the tensor and conventional matrix form as the matrix product

    \[\mathbf{T}^{\prime} = \boldsymbol{\lambda} \cdot \mathbf{T} \cdot \boldsymbol{\lambda}^T \label{E.7}\]

    Equation \ref{E.7} defines the rotational transformation properties of a rank-2 Cartesian tensor.
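    The equivalence of the double sum in Equation \ref{E.4} and the matrix product in Equation \ref{E.7} can be checked with a short numerical sketch; the tensor \(\mathbf{T}\) below is an arbitrary illustrative example.

        import numpy as np

        theta = np.radians(30)
        lam = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                        [-np.sin(theta), np.cos(theta), 0.0],
                        [ 0.0,           0.0,           1.0]])

        T = np.arange(9.0).reshape(3, 3)    # arbitrary rank-2 tensor

        # Double sum (Equation E.4): T'_ij = sum_kl lambda_ik lambda_jl T_kl
        T_prime = np.einsum('ik,jl,kl->ij', lam, lam, T)

        # Matrix form (Equation E.7): T' = lambda . T . lambda^T
        assert np.allclose(T_prime, lam @ T @ lam.T)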

    Tensor products

    Tensor Outer Product

    Tensor products feature prominently when using tensors to represent transformations. A second-order tensor \(\mathbf{T}\) can be formed by using the tensor product, also called outer product, of two vectors \(\mathbf{a}\) and \(\mathbf{b}\) which, written in suffix form, is

    \[\mathbf{T} \equiv \mathbf{a} \otimes \mathbf{b} = \begin{pmatrix} a_1b_1 & a_1b_2 & a_1b_3 \\ a_2b_1 & a_2b_2 & a_2b_3 \\ a_3b_1 & a_3b_2 & a_3b_3 \end{pmatrix} \label{E.8}\]

    In component form the matrix elements of this matrix are given by

    \[T_{ij} = a_ib_j \label{E.9}\]

    This second-order tensor product has rank \(r = 2\); that is, its rank equals the sum of the ranks of the two vectors. Equation \ref{E.8} is called a dyad since it is formed by taking the dyadic product of two vectors. In general, the product of two vectors leads to a second-order tensor. Note that this second-order tensor product completes the triad of tensors that can be formed by taking the product of two vectors. That is, the scalar product \(\mathbf{a} \cdot \mathbf{b}\) has rank \(r = 0\), the vector product \(\mathbf{a} \times \mathbf{b}\) has rank \(r = 1\), and the tensor product \(\mathbf{a} \otimes \mathbf{b}\) has rank\(^1\) \(r = 2\).
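    This triad of products and their ranks can be illustrated with a short sketch; the two vectors are arbitrary illustrative choices.

        import numpy as np

        a = np.array([1.0, 0.0, 2.0])
        b = np.array([0.0, 3.0, 1.0])

        s = np.dot(a, b)     # scalar product: rank 0, 3**0 = 1 component
        v = np.cross(a, b)   # vector product: rank 1, 3**1 = 3 components
        T = np.outer(a, b)   # tensor (outer) product: rank 2, 3**2 = 9 components

        print(s, v.shape, T.shape)   # -> 2.0 (3,) (3, 3)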

    Higher-order tensors can be created by taking more complicated tensor products. For example, a rank-3 tensor can be created by taking the tensor outer product of the rank-2 tensor \(T_{ij}\) and a vector \(c_k\) which, for a dyadic tensor, can be written as the tensor product of three vectors. That is,

    \[T_{ijk} = T_{ij} c_k = a_ib_j c_k \label{E.10}\]

    In summary, the rank of the tensor product equals the sum of the ranks of the tensors included in the tensor product.
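    A minimal sketch of Equation \ref{E.10}, using arbitrary illustrative vectors:

        import numpy as np

        a = np.array([1.0, 0.0, 2.0])
        b = np.array([0.0, 3.0, 1.0])
        c = np.array([1.0, 1.0, 1.0])

        # Rank-3 outer product (Equation E.10): T_ijk = a_i b_j c_k
        T3 = np.einsum('i,j,k->ijk', a, b, c)
        print(T3.shape)   # -> (3, 3, 3), i.e. 3**3 = 27 components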

    Tensor Inner Product

    The lowest-rank tensor product, which is called the inner product, is obtained by taking the tensor product of two tensors for the special case where one index is repeated, and summing over this repeated index. This summation, called contraction, removes the repeated pair of indices, so each contraction reduces the rank by 2. That is, the product tensor has rank \(r = r_1 + r_2 − 2\).

    The simplest example is the inner product of two vectors, which has rank \(r = 1 + 1 − 2 = 0\); it is the scalar product, which equals the trace of the outer-product matrix \(\mathbf{a} \otimes \mathbf{b}\), and this inner product is commutative.

    An especially important case is the inner product of a rank-2 dyad \(\mathbf{a} \otimes \mathbf{b}\), given by Equation \ref{E.8}, with a vector \(\mathbf{c}\), that is, the inner product \((\mathbf{a} \otimes \mathbf{b}) \cdot \mathbf{c}\). Written in component form, the inner product is

    \[\sum^3_{j=1} a_ib_jc_j = a_i \left( \sum^3_{j=1} b_jc_j \right) = (\mathbf{b} \cdot \mathbf{c}) a_i \label{E.11}\]

    The scalar product \(\mathbf{b} \cdot \mathbf{c}\) is a scalar number, and thus the inner product is the vector \(\mathbf{a}\) scaled by the magnitude of the scalar product \(\mathbf{b} \cdot \mathbf{c}\). That is, it has rank \(r = 2+1−2=1\). Thus the inner product of this rank-2 tensor with a vector gives a vector. The inner product of a rank-2 tensor with a rank-1 tensor is used in this book for handling the rotation matrix, the inertia tensor for rigid-body rotation, and the stress and strain tensors used to describe elasticity in solids.
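    The contraction in Equation \ref{E.11}, and the trace identity for the scalar product quoted above, can be verified numerically; the vectors are illustrative.

        import numpy as np

        a = np.array([1.0, 0.0, 2.0])
        b = np.array([0.0, 3.0, 1.0])
        c = np.array([2.0, 1.0, 1.0])

        # Inner product of the dyad with c (Equation E.11):
        # (T . c)_i = sum_j a_i b_j c_j = (b . c) a_i
        T = np.outer(a, b)
        assert np.allclose(T @ c, np.dot(b, c) * a)

        # The scalar product a . b equals the trace of the dyad a (x) b
        assert np.isclose(np.trace(T), np.dot(a, b))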

    Example \(\PageIndex{1}\): Displacement gradient tensor

    The displacement gradient tensor provides an example of the use of the matrix representation to manipulate tensors. Let \(\boldsymbol{\phi}(x_1, x_2, x_3)\) be a vector field expressed in a cartesian basis. The definition of the gradient \(\mathbf{G} = \boldsymbol{\nabla}\boldsymbol{\phi}\) gives that

    \[d\boldsymbol{\phi} = \mathbf{G} \cdot d\mathbf{x} \nonumber\]

    Calculating the components of \(d\boldsymbol{\phi}\) in terms of \(\mathbf{x}\) gives

    \[d\phi_1 = \dfrac{\partial \phi_1}{\partial x_1} dx_1 + \dfrac{\partial \phi_1}{ \partial x_2} dx_2 + \dfrac{\partial \phi_1}{ \partial x_3} dx_3 \nonumber\]

    \[d\phi_2 = \dfrac{\partial \phi_2}{ \partial x_1} dx_1 + \dfrac{\partial \phi_2}{ \partial x_2} dx_2 + \dfrac{\partial \phi_2}{ \partial x_3} dx_3 \nonumber\]

    \[d\phi_3 = \dfrac{\partial \phi_3}{ \partial x_1} dx_1 + \dfrac{\partial \phi_3}{ \partial x_2} dx_2 + \dfrac{\partial \phi_3}{ \partial x_3} dx_3 \nonumber\]

    Using index notation, with summation over the repeated index \(j\) implied, this can be written as

    \[d\phi_i = \dfrac{\partial \phi_i}{ \partial x_j} dx_j \nonumber\]

    The second-rank gradient tensor \(\mathbf{G}\) can be represented in the matrix form as

    \[\mathbf{G} = \begin{pmatrix} \dfrac{\partial \phi_1}{ \partial x_1} & \dfrac{\partial \phi_1}{ \partial x_2} & \dfrac{\partial \phi_1}{ \partial x_3} \\ \dfrac{\partial \phi_2}{ \partial x_1} & \dfrac{\partial \phi_2}{ \partial x_2} & \dfrac{\partial \phi_2}{ \partial x_3} \\ \dfrac{\partial \phi_3}{ \partial x_1} & \dfrac{\partial \phi_3}{ \partial x_2} & \dfrac{\partial \phi_3}{ \partial x_3} \end{pmatrix} \nonumber\]

    Then the differential \(d\boldsymbol{\phi}\) can be expressed compactly as the inner product of \(\mathbf{G}\) and \(d\mathbf{x}\), that is

    \[d\boldsymbol{\phi} = \mathbf{G} \cdot d\mathbf{x} \nonumber\]
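    A numerical sketch of this example, for an assumed illustrative field \(\boldsymbol{\phi}(\mathbf{x}) = (x_1x_2,\ x_2x_3,\ x_3x_1)\), builds \(\mathbf{G}\) by central differences and checks that \(d\boldsymbol{\phi} \approx \mathbf{G} \cdot d\mathbf{x}\) for a small displacement.

        import numpy as np

        def phi(x):
            # illustrative vector field phi = (x1*x2, x2*x3, x3*x1)
            return np.array([x[0]*x[1], x[1]*x[2], x[2]*x[0]])

        def gradient_tensor(x, h=1e-6):
            # G_ij = d(phi_i)/d(x_j) by central differences
            G = np.empty((3, 3))
            for j in range(3):
                dx = np.zeros(3)
                dx[j] = h
                G[:, j] = (phi(x + dx) - phi(x - dx)) / (2.0 * h)
            return G

        x  = np.array([1.0, 2.0, 3.0])
        dx = np.array([1e-4, -2e-4, 5e-5])
        G  = gradient_tensor(x)

        # First-order Taylor relation: d(phi) = G . dx
        assert np.allclose(phi(x + dx) - phi(x), G @ dx, atol=1e-7)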

    Tensor Properties

    In principle one must distinguish between a \(3\times 3\) square matrix and the component representation of a rank-2 tensor. However, as illustrated by the previous discussion, for orthogonal transformations the components of a second-rank tensor transform identically with the matrix components. Thus, functionally, the matrix and tensor representations are identical, and all the terminology and operations used in matrix mechanics are equally applicable to the tensor representation.

    The tensor representation of the rotation matrix provides the simplest example of the equivalence of the matrix and tensor representations of transformations. Appendix \(19.4.2\) showed that the unitary rotation matrix \(\boldsymbol{\lambda}\), acting on a vector \(\mathbf{x}\), transforms it to the vector \(\mathbf{x}^{\prime}\) that is rotated with respect to \(\mathbf{x}\). That is, the transformation is

    \[\mathbf{x}^{\prime} = \boldsymbol{\lambda} \cdot \mathbf{x} \label{D5}\]

    where

    \[\mathbf{x}^{\prime} \equiv \begin{pmatrix} x^{\prime}_1 \\ x^{\prime}_2 \\ x^{\prime}_3 \end{pmatrix} \quad \mathbf{x} \equiv \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad \boldsymbol{\lambda} \equiv \begin{pmatrix} \mathbf{\hat{e}}^{\prime}_1 \cdot \mathbf{\hat{e}}_1 & \mathbf{\hat{e}}^{\prime}_1 \cdot \mathbf{\hat{e}}_2 & \mathbf{\hat{e}}^{\prime}_1 \cdot \mathbf{\hat{e}}_3 \\ \mathbf{\hat{e}}^{\prime}_2 \cdot \mathbf{\hat{e}}_1 & \mathbf{\hat{e}}^{\prime}_2 \cdot \mathbf{\hat{e}}_2 & \mathbf{\hat{e}}^{\prime}_2 \cdot \mathbf{\hat{e}}_3 \\ \mathbf{\hat{e}}^{\prime}_3 \cdot \mathbf{\hat{e}}_1 & \mathbf{\hat{e}}^{\prime}_3 \cdot \mathbf{\hat{e}}_2 & \mathbf{\hat{e}}^{\prime}_3 \cdot \mathbf{\hat{e}}_3 \end{pmatrix} \label{D6}\]

    Appendix \(19.4.2\) showed that the rotation matrix \(\boldsymbol{\lambda}\) requires 9 components to fully specify the transformation from the initial 3-component vector \(\mathbf{x}\) to the rotated vector \(\mathbf{x}^{\prime}\). The rotation tensor is a rank-2 tensor that is unitary and dimensionless. Note that Equation \ref{D5} is an example of the inner product of a rank-2 rotation tensor acting on a vector, giving another vector that is rotated with respect to the first.

    In general, rank-2 tensors have dimensions and are not unitary. For example, the angular velocity vector \(\boldsymbol{\omega}\) and the angular momentum vector \(\mathbf{L}\) are related by the inner product of the inertia tensor \(\{\mathbf{I}\}\) and \(\boldsymbol{\omega}\). That is

    \[\mathbf{L} =\{\mathbf{I}\} \cdot \boldsymbol{\omega} \label{11.6}\]

    The inertia tensor has dimensions of \(mass \times length^2\) and relates two very different vector observables. The stress tensor and the strain tensor, discussed in chapter \(15\), provide another example of second-order tensors that are used to transform one vector observable to another vector observable analogous to the case of the rotation matrix or the inertia tensor.
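    A minimal sketch of the inner product \(\mathbf{L} = \{\mathbf{I}\} \cdot \boldsymbol{\omega}\), with an assumed non-diagonal inertia tensor, shows that \(\mathbf{L}\) is generally not parallel to \(\boldsymbol{\omega}\).

        import numpy as np

        # Assumed symmetric, non-diagonal inertia tensor (units of mass*length^2)
        I = np.array([[ 2.0, -0.5,  0.0],
                      [-0.5,  3.0,  0.0],
                      [ 0.0,  0.0,  4.0]])
        omega = np.array([1.0, 0.0, 0.0])

        L = I @ omega                      # -> [2.0, -0.5, 0.0]
        cosang = (L @ omega) / (np.linalg.norm(L) * np.linalg.norm(omega))
        print(L, cosang)                   # cosang < 1: L is not parallel to omega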

    Note that pseudo-tensors transform like tensors under rotation but with an additional change of sign; that is, they introduce a parity inversion.

    The tensor notation is used extensively in physics since it provides a powerful, elegant, and compact representation for describing transformations.

    Contravariant and covariant tensors

    In general, the configuration space used to specify a dynamical system is not a Euclidean space, in that there may not be a system of coordinates for which the distance between any two neighboring points can be represented by the sum of the squares of the coordinate differentials. For example, no set of cartesian coordinates exists for the two-dimensional motion of a single particle constrained to the curved surface of a fixed sphere. Such curved spaces must be described by Riemannian geometry rather than Euclidean geometry. Curved configuration spaces occur in some branches of physics, such as Einstein's General Theory of Relativity.

    Tensors have transformation properties that can be either contravariant or covariant. Consider a set of generalized coordinates \(q^{\prime}\) that are a function of the coordinates \(q\). Then infinitesimal changes \(dq^m\) will lead to infinitesimal changes \(dq^{\prime n}\) where

    \[dq^{\prime n} = \sum_m \dfrac{\partial q^{\prime n}}{ \partial q^m } dq^m \label{E.12}\]

    Contravariant components of a tensor transform according to the relation

    \[\lambda^{\prime n} = \sum_m \dfrac{\partial q^{\prime n}}{ \partial q^m} \lambda^m \label{E.13}\]

    Equation \ref{E.13} relates the contravariant components in the unprimed and primed frames.

    Derivatives of a scalar function \(\phi\) transform differently. For example,

    \[\lambda^{\prime}_n = \dfrac{\partial \phi}{\partial q^{\prime n}} = \sum_m \dfrac{\partial \phi}{\partial q^m} \dfrac{\partial q^m}{\partial q^{\prime n}} = \sum_m \dfrac{\partial q^m}{\partial q^{\prime n}} \lambda_m \label{E.14}\]

    That is, covariant components of a tensor transform according to the relation

    \[\lambda^{\prime}_n = \sum_m \dfrac{\partial q^m}{\partial q^{\prime n}} \lambda_m \label{E.15}\]
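    The difference between the two transformation laws only appears when the Jacobian is not orthogonal. A minimal sketch, assuming a change from cartesian coordinates \(q = (x, y)\) to polar coordinates \(q^{\prime} = (r, \theta)\) at a single point, applies Equations \ref{E.13} and \ref{E.15} to the same components.

        import numpy as np

        x, y = 1.0, 1.0
        r = np.hypot(x, y)

        # Jacobian J[n, m] = dq'^n/dq^m (rows: r, theta; columns: x, y)
        J = np.array([[ x/r,     y/r    ],
                      [-y/r**2,  x/r**2 ]])

        # Inverse Jacobian Jinv[m, n] = dq^m/dq'^n
        Jinv = np.linalg.inv(J)

        lam = np.array([0.5, 2.0])   # components in the unprimed (x, y) frame

        lam_contra = J @ lam         # Equation E.13 (contravariant)
        lam_co     = Jinv.T @ lam    # Equation E.15 (covariant)

        print(lam_contra, lam_co)    # the two results differ: J is not orthogonal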

    It is important to distinguish between contravariant and covariant vectors. The superscript/subscript convention for these two flavours of tensors is given in Table \(\PageIndex{1}\).

    \(x^{\mu}\) denotes a contravariant vector
    \(x_{\nu}\) denotes a covariant vector
    Table \(\PageIndex{1}\): Einstein notation for tensors.

    In linear algebra one can map from one coordinate system to another, as illustrated in appendix \(19.4\). That is, the tensor \(\mathbf{x}\) can be expressed in components with respect to either the unprimed or primed coordinate frame

    \[\mathbf{x} = \mathbf{\hat{e}}^{\prime}_1x^{\prime}_1 + \mathbf{\hat{e}}^{\prime}_2x^{\prime}_2 + \mathbf{\hat{e}}^{\prime}_3x^{\prime}_3 = \mathbf{\hat{e}}_1x_1 + \mathbf{\hat{e}}_2x_2 + \mathbf{\hat{e}}_3x_3 \label{E.16}\]

    For an \(n\)-dimensional manifold, the unit basis column vectors \(\mathbf{\hat{e}}\) transform according to the transformation matrix \(\boldsymbol{\lambda}\)

    \[\mathbf{\hat{e}}^{\prime} = \boldsymbol{\lambda} \cdot \mathbf{\hat{e}} \label{E.17}\]

    Since the tensor \(\mathbf{x}\) is independent of the coordinate basis, the components of \(\mathbf{x}\) must obey the opposite transformation

    \[\mathbf{x}^{\prime} = \left( \boldsymbol{\lambda}^{-1}\right)^T \cdot \mathbf{x} \label{E.18}\]

    This normal vector \(\mathbf{x}\) is called a “contravariant vector” because it transforms contrary to the basis column vector transformation.

    Inverting Equation \ref{E.18} gives \(\mathbf{x} = \boldsymbol{\lambda}^T \cdot \mathbf{x}^{\prime}\), that is, the column-vector elements

    \[x_{\mu} = \sum_{\nu} \lambda_{\nu \mu} x^{\prime}_{\nu} \label{E.19}\]

    Consider the gradient of a function \(f\) with respect to the coordinates \(\mathbf{x}\) in both the unprimed and primed bases. Using the chain rule, together with \(\partial x_{\nu} / \partial x^{\prime}_{\mu} = \lambda_{\mu \nu}\) from Equation \ref{E.19}, a component of the gradient in the primed frame can be expanded as

    \[(\nabla f)^{\prime}_{\mu} = \dfrac{\partial f}{\partial x^{\prime}_{\mu}} = \sum_{\nu} \dfrac{ \partial f}{ \partial x_{\nu}} \dfrac{ \partial x_{\nu}} { \partial x^{\prime}_{ \mu}} = \sum_{\nu} \lambda_{\mu \nu} \dfrac{ \partial f}{ \partial x_{\nu}} \label{E.20}\]

    That is, the gradient transforms as

    \[\boldsymbol{\nabla}^{\prime} f = \boldsymbol{\lambda} \cdot \boldsymbol{\nabla}f \label{E.21}\]

    That is, a gradient transforms as a covariant vector, like the basis vectors, whereas the vector \(\mathbf{x}\) itself is contravariant under transformation.

    Normally the basis is orthonormal, so that \(\left( \boldsymbol{\lambda}^{-1}\right)^T = \boldsymbol{\lambda}\) and there is no difference between contravariant and covariant vectors. However, for curved coordinate systems, such as the non-Euclidean geometry of the General Theory of Relativity, covariant and contravariant vectors behave differently.
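    This can be confirmed directly for a rotation matrix: since \(\boldsymbol{\lambda}\) is orthogonal, its inverse transpose equals \(\boldsymbol{\lambda}\) itself, so Equation \ref{E.18} reduces to \(\mathbf{x}^{\prime} = \boldsymbol{\lambda} \cdot \mathbf{x}\). A one-line numerical check:

        import numpy as np

        theta = np.radians(30)   # illustrative rotation angle
        lam = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                        [-np.sin(theta), np.cos(theta), 0.0],
                        [ 0.0,           0.0,           1.0]])

        # For an orthogonal matrix, (lambda^{-1})^T = lambda
        assert np.allclose(np.linalg.inv(lam).T, lam)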

    The Einstein convention is extended to matrices by writing the elements of the matrix \(\mathbf{A}\) as \(A^{\mu}_{\nu}\), while the elements of the transposed inverse matrix \(\left(\mathbf{A}^{-1}\right)^T\) are written as \(A_{\mu}^{\nu}\). The matrix product of \(\mathbf{A}\) with a contravariant vector \(\mathbf{X}\) is written as

    \[X^{\prime \mu} = \sum_{\nu} A^{\mu}_{\nu} X^{\nu} \label{E.22}\]

    where the summation over \(\nu\) effectively cancels the identical superscript and subscript \(\nu \).

    Similarly a covariant vector, such as a gradient, is written as,

    \[\left( \boldsymbol{\nabla}^{\prime} f \right)_{\mu} = \sum_{\nu} \left[ \left( A^{-1} \right)^T \right]^{\nu}_{\mu} (\boldsymbol{\nabla}f)_{\nu} = \sum_{\nu} \left( A^{-1}\right)^{\nu}_{\mu} (\boldsymbol{\nabla}f)_{\nu} \label{E.23}\]

    Again the summation cancels the \(\nu\) superscript and subscript. The Kronecker delta symbol is written as

    \[\sum_{\nu} \delta^{\mu}_{\nu} X^{\nu} = X^{\mu} \label{E.24}\]

    Generalized inner product

    The generalized definition of an inner product is

    \[S = \sum_{\mu \nu} g_{\mu \nu} X^{\mu} Y^{\nu} \label{E.25}\]

    where \(g_{\mu \nu}\) is a matrix called the covariant metric. The covariant metric transforms a contravariant tensor into a covariant tensor. For example, the components of the covariant tensor \(X_{\nu}\) can be written as

    \[X_{\nu} = \sum_{\mu} g_{\mu \nu} X^{\mu} \label{E.26}\]

    Associating the covariant metric with either of the vectors in the inner product gives

    \[S = \sum_{\mu \nu} g_{\mu \nu} X^{\mu} Y^{\nu} = \sum_{\nu} X_{\nu} Y^{\nu} = \sum_{\mu} X^{\mu} Y_{\mu} \label{E.27}\]

    Similarly, the inner product can be defined in terms of the contravariant metric \(g^{\mu \nu}\), the inverse of the covariant metric, where

    \[S = \sum_{\mu \nu} g^{\mu \nu} X_{\mu} Y_{\nu} \label{E.28}\]

    Then

    \[X^{\nu} = \sum_{\mu} g^{\mu \nu} X_{\mu} \label{E.29}\]

    Associating the contravariant metric with one of the vectors gives the inner product

    \[S = \sum_{\mu \nu} g^{\mu \nu} X_{\mu} Y_{\nu} = \sum_{\nu} X^{\nu} Y_{\nu} = \sum_{\mu} X_{\mu} Y^{\mu} \label{E.30}\]

    For most situations in this book the metric \(g_{\mu \nu}\) is diagonal and unitary.
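    A short sketch of index lowering and raising, assuming for illustration the Minkowski metric \(g = \text{diag}(1, -1, -1, -1)\), which is diagonal and equal to its own inverse:

        import numpy as np

        g_co     = np.diag([1.0, -1.0, -1.0, -1.0])   # covariant metric g_{mu nu}
        g_contra = np.linalg.inv(g_co)                # contravariant metric g^{mu nu}

        X_contra = np.array([2.0, 1.0, 0.0, 3.0])     # X^mu (illustrative)
        Y_contra = np.array([1.0, 4.0, 1.0, 0.0])     # Y^mu (illustrative)

        # Lower an index (Equation E.26): X_nu = g_{mu nu} X^mu
        X_co = g_co @ X_contra

        # The inner product is the same whichever vector carries the metric (E.27)
        assert np.isclose(X_contra @ g_co @ Y_contra, X_co @ Y_contra)

        # Raising the index again recovers X^mu (Equation E.29)
        assert np.allclose(g_contra @ X_co, X_contra)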

    Transformation Properties of Observables

    In physics, observables can be represented by spherical tensors which specify the angular momentum and parity characteristics of the observable, and the tensor rank is independent of the time dependence. The transformation properties of these tensors, coupled with their time-reversal invariance, specify the fundamental characteristics of the observables.

    Table \(\PageIndex{2}\) summarizes the transformation properties under rotation, spatial inversion, and time reversal for observables encountered in classical mechanics and electrodynamics. Note that observables can be scalars, vectors, pseudovectors, or second-order tensors under rotation, and even or odd under space inversion or time reversal. For example, in classical mechanics the inertia tensor \(\mathbf{I}\) relates the angular velocity vector \(\boldsymbol{\omega}\) to the angular momentum vector \(\mathbf{L}\) by the inner product \(\mathbf{L} = \mathbf{I} \cdot \boldsymbol{\omega}\). In general \(\mathbf{I}\) is not diagonal, and thus the angular momentum is not parallel to the angular velocity. A similar example in electrodynamics is the dielectric tensor \(\mathbf{K}\), which relates the displacement field \(\mathbf{D}\) to the electric field \(\mathbf{E}\) by \(\mathbf{D} = \mathbf{K} \cdot \mathbf{E}\). For anisotropic crystal media \(\mathbf{K}\) is not diagonal, so the fields \(\mathbf{E}\) and \(\mathbf{D}\) are not parallel.

    As discussed in chapter \(7\), Noether’s Theorem states that symmetries of the transformation properties lead to important conservation laws. The behavior of classical systems under rotation relates to the conservation of angular momentum, the behavior under spatial inversion relates to parity conservation, and time-reversal invariance relates to conservation of energy. That is, conservative forces conserve energy and are time-reversal invariant.

    Physical Observable | Rotation (tensor rank) | Space inversion | Time reversal | Name
    1) Classical Mechanics | | | |
    Mass density \(\rho\) | 0 | Even | Even | Scalar
    Kinetic energy \(p^2/2m\) | 0 | Even | Even | Scalar
    Potential energy \(U(r)\) | 0 | Even | Even | Scalar
    Lagrangian \(L\) | 0 | Even | Even | Scalar
    Hamiltonian \(H\) | 0 | Even | Even | Scalar
    Gravitational potential \(\phi\) | 0 | Even | Even | Scalar
    Coordinate \(\mathbf{r}\) | 1 | Odd | Even | Vector
    Velocity \(\mathbf{v}\) | 1 | Odd | Odd | Vector
    Momentum \(\mathbf{p}\) | 1 | Odd | Odd | Vector
    Angular momentum \(\mathbf{L} = \mathbf{r} \times \mathbf{p}\) | 1 | Even | Odd | Pseudovector
    Force \(\mathbf{F}\) | 1 | Odd | Even | Vector
    Torque \(\mathbf{N} = \mathbf{r} \times \mathbf{F}\) | 1 | Even | Even | Pseudovector
    Gravitational field \(\mathbf{g}\) | 1 | Odd | Even | Vector
    Inertia tensor \(\mathbf{I}\) | 2 | Even | Even | Tensor
    Elasticity stress tensor \(\mathbf{T}_{ik}\) | 2 | Even | Even | Tensor
    2) Electromagnetism | | | |
    Charge density \(\rho\) | 0 | Even | Even | Scalar
    Current density \(\mathbf{j}\) | 1 | Odd | Odd | Vector
    Electric field \(\mathbf{E}\) | 1 | Odd | Even | Vector
    Polarization \(\mathbf{P}\) | 1 | Odd | Even | Vector
    Displacement \(\mathbf{D}\) | 1 | Odd | Even | Vector
    Magnetic \(B\) field \(\mathbf{B}\) | 1 | Even | Odd | Pseudovector
    Magnetization \(\mathbf{M}\) | 1 | Even | Odd | Pseudovector
    Magnetic \(H\) field \(\mathbf{H}\) | 1 | Even | Odd | Pseudovector
    Poynting vector \(\mathbf{S} = \mathbf{E} \times \mathbf{H}\) | 1 | Odd | Odd | Vector
    Dielectric tensor \(\mathbf{K}\) | 2 | Even | Even | Tensor
    Maxwell stress tensor \(\mathbf{T}_{ik}\) | 2 | Even | Even | Tensor
    Table \(\PageIndex{2}\): Transformation properties of scalar, vector, pseudovector, and tensor observables under rotation, spatial inversion, and time reversal.\(^2\)

    References

    \(^1\)The common convention is to denote the scalar product as \(\mathbf{a} \cdot \mathbf{b}\), the vector product as \(\mathbf{a} \times \mathbf{b}\), and the tensor product as \(\mathbf{a} \otimes \mathbf{b}\).

    \(^2\)Based on Table 6.1 in "Classical Electrodynamics", \(2^{nd}\) edition, by J.D. Jackson [Jac75].

