19.6: Appendix - Tensor Algebra

Tensors

Mathematically scalars and vectors are the first two members of a hierarchy of entities, called tensors, that behave under coordinate transformations as described in appendix 19.4. The use of the tensor notation provides a compact and elegant way to handle transformations in physics.

A scalar is a rank 0 tensor with one component that is invariant under a change of the coordinate system.

\[\phi'(x'_1, x'_2, x'_3) = \phi(x_1, x_2, x_3)\]

A vector is a rank 1 tensor which has three components, that transform under rotation according to the matrix relation

\[\mathbf{x}' = \boldsymbol{\lambda} \mathbf{x}\]

where $\boldsymbol{\lambda}$ is the rotation matrix. This equation can be written in the suffix form as

\[x'_i = \sum_{j=1}^{3} \lambda_{ij} x_j\]

The above definitions of scalars and vectors can be subsumed into a class of entities called tensors of rank $n$ that have $3^n$ components. A scalar is a tensor of rank $r = 0$, with only $3^0 = 1$ component, whereas a vector has rank $r = 1$; that is, the vector $\mathbf{x}$ has one suffix $i$ and $3^1 = 3$ components.
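
As a concrete illustration (a sketch, not from the text), the following numpy snippet applies a rotation matrix to a vector according to the suffix relation above and checks that a scalar built from it, the vector's length, is unchanged. The angle and vector values are arbitrary.

```python
import numpy as np

# Sketch: a rank-1 tensor (vector) transforming as x'_i = sum_j lambda_ij x_j,
# while a rank-0 tensor built from it (the length) is invariant. Values are arbitrary.
theta = np.radians(30.0)                       # rotation angle about the x3 axis
lam = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                [-np.sin(theta), np.cos(theta), 0.0],
                [ 0.0,           0.0,           1.0]])

x = np.array([1.0, 2.0, 3.0])
x_prime = lam @ x                              # x' = lambda x

print(np.allclose(np.linalg.norm(x_prime), np.linalg.norm(x)))   # True: scalar invariant
```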

A second-order tensor $T_{ij}$ has rank $r = 2$ with two suffixes; that is, it has $3^2 = 9$ components that transform under rotation as

\[T'_{ij} = \sum_{k=1}^{3} \sum_{l=1}^{3} \lambda_{ik} \lambda_{jl} T_{kl}\]

For second-order tensors, the transformation formula given by the preceding equation can be written more compactly using matrices. Thus the second-order tensor can be written as a $3 \times 3$ matrix

\[\mathbf{T} \equiv \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix}\]

The rotational transformation given above can be written in the form

\[T'_{ij} = \sum_{l=1}^{3} \left( \sum_{k=1}^{3} \lambda_{ik} T_{kl} \right) \lambda_{jl} = \sum_{l=1}^{3} \left( \sum_{k=1}^{3} \lambda_{ik} T_{kl} \right) \lambda^T_{lj}\]

where $\lambda^T_{lj}$ are the matrix elements of the transposed matrix $\boldsymbol{\lambda}^T$. These summations can be expressed in both the tensor and conventional matrix form as the matrix product

\[\mathbf{T}' = \boldsymbol{\lambda} \mathbf{T} \boldsymbol{\lambda}^T\]

This equation defines the rotational properties of a spherical tensor.
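
The equivalence of the suffix form and the matrix form $\mathbf{T}' = \boldsymbol{\lambda} \mathbf{T} \boldsymbol{\lambda}^T$ can be checked numerically. The sketch below is illustrative only; a random orthogonal matrix stands in for the rotation and a random array for the rank-2 tensor.

```python
import numpy as np

# Sketch: the matrix form T' = lambda T lambda^T reproduces the suffix form
# T'_ij = sum_{k,l} lambda_ik lambda_jl T_kl.
rng = np.random.default_rng(0)
lam = np.linalg.qr(rng.normal(size=(3, 3)))[0]     # random orthogonal (rotation-like) matrix
T = rng.normal(size=(3, 3))                        # arbitrary rank-2 tensor

T_matrix = lam @ T @ lam.T                         # matrix product form
T_suffix = np.einsum('ik,jl,kl->ij', lam, lam, T)  # explicit double summation

print(np.allclose(T_matrix, T_suffix))             # True
```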

Tensor products

Tensor outer product

Tensor products feature prominently when using tensors to represent transformations. A second-order tensor T can be formed by using the tensor product, also called outer product, of two vectors a and b which, written in suffix form, is

\[\mathbf{T} \equiv \mathbf{a} \otimes \mathbf{b} = \begin{pmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{pmatrix}\]

In component form the elements of this matrix are given by

\[T_{ij} = a_i b_j\]

This second-order tensor product has rank $r = 2$; that is, it equals the sum of the ranks of the two vectors. The tensor above is called a dyad since it was derived by taking the dyadic product of two vectors. In general, multiplication, or division, of two vectors leads to second-order tensors. Note that this second-order tensor product completes the triad of tensors possible when taking the product of two vectors. That is, the scalar product $\mathbf{a} \cdot \mathbf{b}$ has rank $r = 0$, the vector product $\mathbf{a} \times \mathbf{b}$ has rank $r = 1$, and the tensor product $\mathbf{a} \otimes \mathbf{b}$ has rank$^1$ $r = 2$.
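
This triad can be made concrete with a short numpy sketch (not from the text, with arbitrary example vectors); np.outer builds the dyad $T_{ij} = a_i b_j$.

```python
import numpy as np

# Sketch: the three products of two (arbitrary) vectors and their tensor ranks.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

scalar = np.dot(a, b)      # a . b       -> rank 0: a single number
vector = np.cross(a, b)    # a x b       -> rank 1: 3 components
dyad   = np.outer(a, b)    # a (x) b     -> rank 2: 3 x 3 components, T_ij = a_i b_j

print(scalar, vector.shape, dyad.shape)   # 32.0 (3,) (3, 3)
```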

Higher-order tensors can be created by taking more complicated tensor products. For example, a rank-3 tensor can be created by taking the tensor outer product of the rank-2 tensor Tij and a vector ck which, for a dyadic tensor, can be written as the tensor product of three vectors. That is,

\[T_{ijk} = T_{ij} c_k = a_i b_j c_k\]

In summary, the rank of the tensor product equals the sum of the ranks of the tensors included in the tensor product.
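
For instance, a minimal numpy sketch of the rank-3 construction $T_{ijk} = a_i b_j c_k$, with arbitrary example vectors:

```python
import numpy as np

# Sketch: a rank-3 tensor as the outer product of three arbitrary vectors,
# T_ijk = a_i b_j c_k, so the ranks add (1 + 1 + 1 = 3) and there are 3**3 = 27 components.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

T3 = np.einsum('i,j,k->ijk', a, b, c)
print(T3.shape, T3.size)                  # (3, 3, 3) 27
```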

Tensor Inner Product

The lowest rank tensor product, which is called the inner product, is obtained by taking the tensor product of two tensors for the special case where one index is repeated, and taking the sum over this repeated index. Summing over this repeated index, which is called contraction, removes the repeated index pair, resulting in a tensor of rank $r$ equal to the sum of the ranks minus 2 for one contraction. That is, the product tensor has rank $r = r_1 + r_2 - 2$.

The simplest example is the inner product of two vectors, which has rank $r = 1 + 1 - 2 = 0$; that is, it is the scalar product, which equals the trace of the inner product matrix, and this inner product is commutative.

An especially important case is the inner product of a rank-2 dyad $\mathbf{a} \otimes \mathbf{b}$, given above, with a vector $\mathbf{c}$, that is, the inner product $(\mathbf{a} \otimes \mathbf{b}) \cdot \mathbf{c}$. Written in component form, the inner product is

\[\sum_{i=1}^{3} a_i b_i c_j = \left( \sum_{i=1}^{3} a_i b_i \right) c_j = (\mathbf{a} \cdot \mathbf{b})\, c_j\]

The scalar product $\mathbf{a} \cdot \mathbf{b}$ is a scalar number, and thus the inner-product tensor is the vector $\mathbf{c}$ renormalized by the magnitude of the scalar product $\mathbf{a} \cdot \mathbf{b}$. That is, it has rank $r = 2 + 1 - 2 = 1$. Thus the inner product of a rank-2 tensor with a vector gives a vector. The inner product of a rank-2 tensor with a rank-1 tensor is used in this book for handling the rotation matrix, the inertia tensor for rigid-body rotation, and the stress and strain tensors used to describe elasticity in solids.
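
A minimal sketch of contraction lowering the rank (not from the text, arbitrary vectors): the dyad $\mathbf{a} \otimes \mathbf{b}$ is contracted with $\mathbf{c}$ over the repeated index $j$, so the rank $2 + 1$ product collapses to the vector $\mathbf{a}\,(\mathbf{b} \cdot \mathbf{c})$ of rank 1.

```python
import numpy as np

# Sketch: contraction lowers the rank by 2. Contracting the dyad T_ij = a_i b_j
# with the vector c over the repeated index j leaves a rank 2 + 1 - 2 = 1 tensor,
# namely the vector a (b . c).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])
dyad = np.outer(a, b)                        # T_ij = a_i b_j

contracted = np.einsum('ij,j->i', dyad, c)   # sum over the repeated index j
print(np.allclose(contracted, a * np.dot(b, c)))   # True: the result is a vector
```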

Example 19.6.1: Displacement gradient tensor

The displacement gradient tensor provides an example of the use of the matrix representation to manipulate tensors. Let $\boldsymbol{\phi}(x_1, x_2, x_3)$ be a vector field expressed in a cartesian basis. The definition of the gradient $\mathbf{G} = \boldsymbol{\nabla} \boldsymbol{\phi}$ gives that

\[d\boldsymbol{\phi} = \mathbf{G} \cdot d\mathbf{x}\]

Calculating the components of $d\boldsymbol{\phi}$ in terms of $\mathbf{x}$ gives

\[d\phi_1 = \frac{\partial \phi_1}{\partial x_1} dx_1 + \frac{\partial \phi_1}{\partial x_2} dx_2 + \frac{\partial \phi_1}{\partial x_3} dx_3\]

\[d\phi_2 = \frac{\partial \phi_2}{\partial x_1} dx_1 + \frac{\partial \phi_2}{\partial x_2} dx_2 + \frac{\partial \phi_2}{\partial x_3} dx_3\]

\[d\phi_3 = \frac{\partial \phi_3}{\partial x_1} dx_1 + \frac{\partial \phi_3}{\partial x_2} dx_2 + \frac{\partial \phi_3}{\partial x_3} dx_3\]

Using index notation this can be written as

\[d\phi_i = \sum_{j=1}^{3} \frac{\partial \phi_i}{\partial x_j} dx_j\]

The second-rank gradient tensor G can be represented in the matrix form as

\[\mathbf{G} = \begin{pmatrix} \dfrac{\partial \phi_1}{\partial x_1} & \dfrac{\partial \phi_1}{\partial x_2} & \dfrac{\partial \phi_1}{\partial x_3} \\ \dfrac{\partial \phi_2}{\partial x_1} & \dfrac{\partial \phi_2}{\partial x_2} & \dfrac{\partial \phi_2}{\partial x_3} \\ \dfrac{\partial \phi_3}{\partial x_1} & \dfrac{\partial \phi_3}{\partial x_2} & \dfrac{\partial \phi_3}{\partial x_3} \end{pmatrix}\]

Then the vector $d\boldsymbol{\phi}$ can be expressed compactly as the inner product of $\mathbf{G}$ and $d\mathbf{x}$, that is

\[d\boldsymbol{\phi} = \mathbf{G} \cdot d\mathbf{x}\]
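
This example can also be checked numerically. The vector field below is a made-up illustration, not from the text; the gradient tensor $\mathbf{G}$ is built by finite differences and $d\boldsymbol{\phi} \approx \mathbf{G} \cdot d\mathbf{x}$ is verified for a small displacement.

```python
import numpy as np

# Sketch with a made-up vector field phi(x): build the gradient tensor
# G_ij = d(phi_i)/d(x_j) by central finite differences and check that
# d(phi) is approximately G . dx for a small displacement dx.
def phi(x):
    x1, x2, x3 = x
    return np.array([x1 * x2, x2 + x3**2, np.sin(x1) * x3])

x0 = np.array([1.0, 2.0, 3.0])
h = 1e-6
G = np.empty((3, 3))
for j in range(3):                       # numerical partial derivatives, one column at a time
    step = np.zeros(3)
    step[j] = h
    G[:, j] = (phi(x0 + step) - phi(x0 - step)) / (2 * h)

dx = np.array([1e-3, -2e-3, 0.5e-3])
print(np.allclose(phi(x0 + dx) - phi(x0), G @ dx, atol=1e-5))   # True to first order
```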

Tensor Properties

In principle one must distinguish between a 3×3 square matrix, and the tensor component representations of a rank-2 tensor. However, as illustrated by the previous discussion, for orthogonal transformations, the tensor components of the second rank tensor transform identically with the matrix components. Thus functionally, the matrix formulation and tensor representations are identical. As a consequence, all the terminology and operations used in matrix mechanics are equally applicable to the tensor representation.

The tensor representation of the rotation matrix provides the simplest example of the equivalence of the matrix and tensor representations of transformations. Appendix 19.4.2 showed that the unitary rotation matrix $\boldsymbol{\lambda}$, acting on a vector $\mathbf{x}$, transforms it to the vector $\mathbf{x}'$ that is rotated with respect to $\mathbf{x}$. That is, the transformation is

\[\mathbf{x}' = \boldsymbol{\lambda} \mathbf{x}\]

where

\[\mathbf{x} \equiv \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \qquad \mathbf{x}' \equiv \begin{pmatrix} x'_1 \\ x'_2 \\ x'_3 \end{pmatrix} \qquad \boldsymbol{\lambda} \equiv \begin{pmatrix} \hat{\mathbf{e}}'_1 \cdot \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}'_1 \cdot \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}'_1 \cdot \hat{\mathbf{e}}_3 \\ \hat{\mathbf{e}}'_2 \cdot \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}'_2 \cdot \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}'_2 \cdot \hat{\mathbf{e}}_3 \\ \hat{\mathbf{e}}'_3 \cdot \hat{\mathbf{e}}_1 & \hat{\mathbf{e}}'_3 \cdot \hat{\mathbf{e}}_2 & \hat{\mathbf{e}}'_3 \cdot \hat{\mathbf{e}}_3 \end{pmatrix}\]

Appendix 19.4.2 showed that the rotation matrix $\boldsymbol{\lambda}$ requires 9 components to fully specify the transformation from the initial 3-component vector $\mathbf{x}$ to the rotated vector $\mathbf{x}'$. The rotation tensor is a dyad as well as being unitary and dimensionless. Note that the transformation above is an example of the inner product of a rank-2 rotation tensor acting on a vector, leading to another vector that is rotated with respect to the first vector.

In general, rank-2 tensors have dimensions and are not unitary. For example, the angular velocity vector ω and the angular momentum vector L are related by the inner product of the inertia tensor {I} and ω. That is

\[\mathbf{L} = \{\mathbf{I}\} \cdot \boldsymbol{\omega}\]

The inertia tensor has dimensions of mass $\times$ length$^2$ and relates two very different vector observables. The stress tensor and the strain tensor, discussed in chapter 15, provide another example of second-order tensors that are used to transform one vector observable into another vector observable, analogous to the case of the rotation matrix or the inertia tensor.
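
As an illustration with made-up numbers (not taken from the text), a non-diagonal inertia tensor produces an angular momentum that is not parallel to the angular velocity:

```python
import numpy as np

# Sketch with made-up numbers: a non-diagonal inertia tensor maps the angular
# velocity to an angular momentum L = {I} . omega that is not parallel to omega.
I_tensor = np.array([[2.0, 0.5, 0.0],
                     [0.5, 3.0, 0.0],
                     [0.0, 0.0, 4.0]])   # kg m^2, illustrative values only
omega = np.array([1.0, 0.0, 0.0])        # rad/s

L = I_tensor @ omega                     # inner product of a rank-2 tensor with a vector
print(L)                                 # L = (2, 0.5, 0), not parallel to omega
```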

Note that pseudo-tensors can be used to make a rotational transformation plus a change in the sign. That is, they lead to a parity inversion.

The tensor notation is used extensively in physics since it provides a powerful, elegant, and compact representation for describing transformations.

Contravariant and covariant tensors

In general the configuration space used to specify a dynamical system is not a Euclidean space, in that there may not be a system of coordinates for which the distance between any two neighboring points can be represented by the sum of the squares of the coordinate differentials. For example, a set of cartesian coordinates does not exist for the two-dimensional motion of a single particle constrained to the curved surface of a fixed sphere. Such curved spaces must be represented in terms of Riemannian geometry rather than Euclidean geometry. Curved configuration spaces occur in some branches of physics, such as Einstein's General Theory of Relativity.

Tensors have transformation properties that can be either contravariant or covariant. Consider a set of generalized coordinates $q'$ that are a function of the coordinates $q$. Then infinitesimal changes $dq^m$ lead to infinitesimal changes $dq'^n$, where

\[dq'^n = \sum_m \frac{\partial q'^n}{\partial q^m}\, dq^m\]

Contravariant components of a tensor transform according to the relation

\[\lambda'^n = \sum_m \frac{\partial q'^n}{\partial q^m}\, \lambda^m\]

The above relation connects the contravariant components in the unprimed and primed frames.

Derivatives of a scalar function $\phi$ transform as

\[\lambda'_n = \frac{\partial \phi}{\partial q'^n} = \sum_m \frac{\partial \phi}{\partial q^m} \frac{\partial q^m}{\partial q'^n} = \sum_m \frac{\partial q^m}{\partial q'^n}\, \lambda_m\]

That is, covariant components of the tensor transform according to the relation

\[\lambda'_n = \sum_m \frac{\partial q^m}{\partial q'^n}\, \lambda_m\]
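
A short numerical sketch (with an arbitrary, non-orthogonal linear coordinate change standing in for $\partial q'/\partial q$, not from the text) shows the two transformation rules and the invariance of the contraction of a contravariant vector with a covariant vector:

```python
import numpy as np

# Sketch: a general (non-orthogonal) linear change of coordinates q' = M q, so
# dq'^n/dq^m are the elements of M and dq^m/dq'^n the elements of M^{-1}.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])            # arbitrary invertible, non-orthogonal matrix

lam_contra = np.array([1.0, 2.0, 3.0])     # contravariant components lambda^m
mu_cov     = np.array([4.0, 5.0, 6.0])     # covariant components mu_m

lam_contra_p = M @ lam_contra              # lambda'^n = sum_m (dq'^n/dq^m) lambda^m
mu_cov_p = np.linalg.inv(M).T @ mu_cov     # mu'_n = sum_m (dq^m/dq'^n) mu_m

# The contraction of a contravariant with a covariant vector is invariant.
print(np.allclose(lam_contra @ mu_cov, lam_contra_p @ mu_cov_p))   # True
```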

It is important to differentiate between contravariant and covariant vectors. The superscript/subscript convention for distinguishing between these two flavours of tensors is given in Table 19.6.1.

$x^\mu$ denotes a contravariant vector
$x_\nu$ denotes a covariant vector
Table 19.6.1: Einstein notation for tensors.

In linear algebra one can map from one coordinate system to another as illustrated in appendix 19.4. That is, the tensor x can be expressed as components with respect to either the unprimed or primed coordinate frames

\[\mathbf{x} = \hat{\mathbf{e}}_1 x_1 + \hat{\mathbf{e}}_2 x_2 + \hat{\mathbf{e}}_3 x_3 = \hat{\mathbf{e}}'_1 x'_1 + \hat{\mathbf{e}}'_2 x'_2 + \hat{\mathbf{e}}'_3 x'_3\]

For an $n$-dimensional manifold, the unit basis column vectors $\hat{\mathbf{e}}$ transform according to the transformation matrix $\boldsymbol{\lambda}$

\[\hat{\mathbf{e}}' = \boldsymbol{\lambda}\, \hat{\mathbf{e}}\]

Since the tensor x is independent of the coordinate basis, the components of x must have the opposite transform

\[\mathbf{x}' = \left(\boldsymbol{\lambda}^{-1}\right)^T \mathbf{x}\]

This normal vector x is called a “contravariant vector” because it transforms contrary to the basis column vector transformation.

The inverse of this relation gives that the column vector elements are

\[x^{\mu} = \sum_{\nu} \lambda_{\nu}^{\ \mu}\, x'^{\nu}\]

Consider the case of a gradient with respect to the coordinate $\mathbf{x}$ in both the unprimed and primed bases. Using the chain rule for the partial derivative, the component of the gradient in the primed frame can be expanded as

\[(\nabla f)'_\mu = \frac{\partial f}{\partial x'^\mu} = \sum_\nu \frac{\partial f}{\partial x^\nu} \frac{\partial x^\nu}{\partial x'^\mu} = \sum_\nu \lambda_{\mu}^{\ \nu}\, \frac{\partial f}{\partial x^\nu}\]

That is, the gradient transforms as

\[\boldsymbol{\nabla}' f = \boldsymbol{\lambda}\, \boldsymbol{\nabla} f\]

Thus a gradient transforms as a covariant vector, like the unit basis vectors, whereas the vector $\mathbf{x}$ is contravariant under transformation.

Normally the basis is orthonormal, $(\boldsymbol{\lambda}^{-1})^T = \boldsymbol{\lambda}$, and thus there is no difference between contravariant and covariant vectors. However, for curved coordinate systems, such as the non-Euclidean geometry of the General Theory of Relativity, the covariant and contravariant vectors behave differently.
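
This distinction can be checked directly in a short sketch (illustrative matrices only): for an orthogonal rotation $(\boldsymbol{\lambda}^{-1})^T = \boldsymbol{\lambda}$, while for a non-orthogonal transformation, here an arbitrary shear, the two transformation rules differ.

```python
import numpy as np

# Sketch: (lambda^-1)^T = lambda for an orthogonal rotation, so contravariant and
# covariant components transform identically; for a non-orthogonal matrix they differ.
theta = np.radians(40.0)
rot = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                [-np.sin(theta), np.cos(theta), 0.0],
                [ 0.0,           0.0,           1.0]])
shear = np.array([[1.0, 0.5, 0.0],          # an arbitrary non-orthogonal transformation
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

print(np.allclose(np.linalg.inv(rot).T, rot))       # True:  the two flavours coincide
print(np.allclose(np.linalg.inv(shear).T, shear))   # False: they transform differently
```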

The Einstein convention is extended to apply to matrices by writing the elements of the matrix $\mathbf{A}$ as $A^{\mu}_{\ \nu}$, while the elements of the transposed matrix $\mathbf{A}^T$ are written as $A_{\nu}^{\ \mu}$. The matrix product of $\mathbf{A}$ with a contravariant vector $X$ is written as

\[X'^{\mu} = \sum_{\nu} A^{\mu}_{\ \nu}\, X^{\nu}\]

where the summation over ν effectively cancels the identical superscript and subscript ν.

Similarly, a covariant vector, such as a gradient, transforms as

\[(\nabla f)'_\mu = \sum_{\nu} \left[(\mathbf{A}^{-1})^T\right]_{\mu}^{\ \nu} (\nabla f)_\nu = \sum_{\nu} (A^{-1})^{\nu}_{\ \mu}\, (\nabla f)_\nu\]

Again the summation cancels the ν superscript and subscript. The Kronecker delta symbol is written as

\[\sum_{\nu} \delta^{\mu}_{\ \nu}\, X^{\nu} = X^{\mu}\]

Generalized inner product

The generalized definition of an inner product is

\[S = \sum_{\mu} \sum_{\nu} g_{\mu\nu}\, X^{\mu} Y^{\nu}\]

where $g_{\mu\nu}$ is a unitary matrix called the covariant metric. The covariant metric transforms a contravariant tensor into a covariant tensor. For example, the component $X_\nu$ of a covariant tensor can be written as

\[X_{\nu} = \sum_{\mu} g_{\mu\nu}\, X^{\mu}\]

Associating the covariant metric with either of the vectors in the inner product gives

\[S = \sum_{\mu} \sum_{\nu} g_{\mu\nu}\, X^{\mu} Y^{\nu} = \sum_{\nu} X_{\nu} Y^{\nu} = \sum_{\mu} X^{\mu} Y_{\mu}\]

Similarly, the inner product can be defined in terms of an orthogonal contravariant metric $g^{\mu\nu}$, where

\[S = \sum_{\mu} \sum_{\nu} g^{\mu\nu}\, X_{\mu} Y_{\nu}\]

Then

\[X^{\nu} = \sum_{\mu} g^{\mu\nu}\, X_{\mu}\]

Associating the contravariant metric with one of the vectors gives the inner product

\[S = \sum_{\mu} \sum_{\nu} g^{\mu\nu}\, X_{\mu} Y_{\nu} = \sum_{\nu} X^{\nu} Y_{\nu} = \sum_{\mu} X_{\mu} Y^{\mu}\]

For most situations in this book the metric $g_{\mu\nu}$ is diagonal and unitary.
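
As a sketch of a diagonal metric that is not simply the identity, the snippet below uses the Minkowski metric diag(1, -1, -1, -1), chosen purely as a familiar example beyond this book's usual case, to lower an index and to verify that both routes give the same inner product.

```python
import numpy as np

# Sketch: a diagonal covariant metric g_{mu nu} lowers an index and defines the
# generalized inner product S = sum_{mu,nu} g_{mu nu} X^mu Y^nu.
g = np.diag([1.0, -1.0, -1.0, -1.0])        # covariant metric g_{mu nu} (Minkowski example)

X_contra = np.array([2.0, 1.0, 0.0, 3.0])   # contravariant components X^mu
Y_contra = np.array([1.0, 4.0, 2.0, 0.0])   # contravariant components Y^nu

X_cov = g @ X_contra                        # X_nu = sum_mu g_{mu nu} X^mu
S_direct  = X_contra @ g @ Y_contra         # sum_{mu,nu} g_{mu nu} X^mu Y^nu
S_lowered = X_cov @ Y_contra                # sum_nu X_nu Y^nu

print(np.allclose(S_direct, S_lowered))     # True
```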

Transformation Properties of Observables

In physics, observables can be represented by spherical tensors which specify the angular momentum and parity characteristics of the observable, and the tensor rank is independent of the time dependence. The transformation properties of these tensors, coupled with their time-reversal invariance, specify the fundamental characteristics of the observables.

Table 19.6.2 summarizes the transformation properties under rotation, spatial inversion, and time reversal for observables encountered in classical mechanics and electrodynamics. Note that observables can be scalar, vector, pseudovector, or second-order tensor under rotation, and even or odd under either space inversion or time reversal. For example, in classical mechanics the inertia tensor $\mathbf{I}$ relates the angular velocity vector $\boldsymbol{\omega}$ to the angular momentum vector $\mathbf{L}$ by taking the inner product $\mathbf{L} = \mathbf{I} \cdot \boldsymbol{\omega}$. In general $\mathbf{I}$ is not diagonal, and thus the angular momentum is not parallel to the angular velocity $\boldsymbol{\omega}$. A similar example in electrodynamics is the dielectric tensor $\mathbf{K}$, which relates the displacement field $\mathbf{D}$ to the electric field $\mathbf{E}$ by $\mathbf{D} = \mathbf{K} \cdot \mathbf{E}$. For anisotropic crystal media $\mathbf{K}$ is not diagonal, leading to the field vectors $\mathbf{E}$ and $\mathbf{D}$ not being parallel.

As discussed in chapter 7, Noether’s Theorem states that symmetries of the transformation properties lead to important conservation laws. The behavior of classical systems under rotation relates to the conservation of angular momentum, the behavior under spatial inversion relates to parity conservation, and time-reversal invariance relates to conservation of energy. That is, conservative forces conserve energy and are time-reversal invariant.

Physical Observable   Rotation (Tensor rank) Space inversion Time reversal Name
1) Classical Mechanics          
Mass density ρ 0 Even Even Scalar
Kinetic energy p2/2m 0 Even Even Scalar
Potential energy U(r) 0 Even Even Scalar
Lagrangian L 0 Even Even Scalar
Hamiltonian H 0 Even Even Scalar
Gravitational potential ϕ 0 Even Even Scalar
Coordinate r 1 Odd Even Vector
Velocity v 1 Odd Odd Vector
Momentum p 1 Odd Odd Vector
Angular momentum L=r×p 1 Even Odd Pseudovector
Force F 1 Odd Even Vector
Torque N=r×F 1 Even Even Pseudovector
Gravitational field g 1 Odd Even Vector
Inertia tensor I 2 Even Even Tensor
Elasticity stress tensor Tik 2 Even Even Tensor
           
2) Electromagnetism          
Charge density ρ 0 Even Even Scalar
Current density j 1 Odd Odd Vector
Electric field E 1 Odd Even Vector
Polarization P 1 Odd Even Vector
Displacement D 1 Odd Even Vector
Magnetic B field B 1 Even Odd Pseudovector
Magnetization M 1 Even Odd Pseudovector
Magnetic H field H 1 Even Odd Pseudovector
Poynting vector S=E×H 1 Odd Odd Vector
Dielectric tensor K 2 Even Even Tensor
Maxwell stress tensor Tik 2 Even Even Tensor
Table 19.6.2: Transformation properties of scalar, vector, pseudovector, and tensor observables under rotation, spatial inversion, and time reversal.$^2$

References

$^1$The common convention is to denote the scalar product as $\mathbf{a} \cdot \mathbf{b}$, the vector product as $\mathbf{a} \times \mathbf{b}$, and the tensor product as $\mathbf{a} \otimes \mathbf{b}$.

$^2$Based on Table 6.1 in "Classical Electrodynamics", 2nd edition, by J.D. Jackson [Jac75].


This page titled 19.6: Appendix - Tensor Algebra is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Douglas Cline via source content that was edited to the style and standards of the LibreTexts platform.
