# 4.8: Tensor Operators


### Introduction: Cartesian Vectors and Tensors

Physics is full of vectors: \(\vec{x}\), \(\vec{L}\), \(\vec{S}\) and so on. Classically, a (three-dimensional) vector is defined by its properties under rotation: the three components corresponding to the Cartesian \(x,y,z\) axes transform as \[ V_i\to \sum R_{ij}V_j \tag{4.8.1}\]

with the usual rotation matrix, for example

\[ R_z(\theta)=\begin{pmatrix} \cos\theta &-\sin\theta &0 \\ \sin\theta &\cos\theta &0 \\ 0 &0 &1 \end{pmatrix} \tag{4.8.2}\]

for rotation about the \(z\)-axis. (We’ll use \((x,y,z)\) and the index notation \((x_1,x_2,x_3)\) interchangeably.)

A *tensor* is a generalization of such a vector to an object with more than one suffix, for example \(T_{ij}\) or \(T_{ijk}\) (having \(9\) and \(27\) components, respectively, in three dimensions), with each suffix transforming by the vector rule:

\[ T_{ijk}\to \sum R_{il}R_{jm}R_{kn}T_{lmn} \tag{4.8.3}\]

where \(R\) is the same rotation matrix that transforms a vector. Tensors written in this way are called *Cartesian tensors* (since the suffixes refer to Cartesian axes). The number of suffixes is the *rank* of the Cartesian tensor; a rank-\(n\) tensor has, of course, \(3^n\) components.
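For readers who like to check such statements numerically, here is a short sketch (not part of the original notes; the vectors are arbitrary test values) verifying that a rank-2 tensor built as an outer product \(T_{ij}=U_iV_j\) transforms with one rotation matrix per suffix:

```python
import numpy as np

# Rotation by theta about the z-axis, eq. (4.8.2)
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])

U = np.array([1.0, 2.0, 3.0])
V = np.array([-1.0, 0.5, 2.0])
T = np.outer(U, V)                      # T_ij = U_i V_j

# Apply the tensor rule: T'_ij = R_ii' R_jj' T_i'j'
T_rot = np.einsum('ik,jl,kl->ij', Rz, Rz, T)

# This must agree with rotating the two vectors first, then taking the outer product
assert np.allclose(T_rot, np.outer(Rz @ U, Rz @ V))
```

The same `einsum` pattern extends to rank 3 with a third factor of `Rz`, as in (4.8.3).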

Tensors are common in physics: they are essential in describing stress, distortion and flow in solids and liquids. The inertia tensor is the basis for analyzing angular motion in classical mechanics. Tensor *forces* are important in the dynamics of the deuteron, and in fact tensors arise for any charge distribution more complicated than a dipole. Going to four dimensions, and generalizing from rotations to Lorentz transformations, Maxwell’s equations are most naturally expressed in tensor form, and tensors are central to General Relativity.

To get back to non-relativistic physics, since the defining property of a tensor is its behavior under rotations, spherical polar coordinates are sometimes a more natural basis than Cartesian coordinates. In fact, in that basis tensors (called *spherical tensors*) have rotational properties closely related to those of angular momentum eigenstates, as will become clear in the following sections.

### The Rotation Operator in Angular Momentum Eigenket Space

As a preliminary to discussing general tensors in quantum mechanics, we briefly review the rotation operator and quantum vector operators. (A full treatment is given in my 751 lecture.) Recall that the rotation operator turning a ket through an angle \(\vec{\theta}\) (the vector direction denotes the axis of rotation, its magnitude the angle turned through) is

\[ U(R(\vec{\theta}))=e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}} \tag{4.8.4}\]

Since \(\vec{J}\) commutes with the total angular momentum squared \(\vec{J}^2=j(j+1)\hbar^2\), we can restrict our attention to a given *total* angular momentum \(j\), having as usual an orthonormal basis set \(|j,m\rangle\), or \(|m\rangle\) for short, with \(2j+1\) components. A general ket \(|\alpha\rangle\) in this space is then:

\[ |\alpha\rangle=\sum_{m=-j}^{j} \alpha_m |m\rangle . \tag{4.8.5}\]

Rotating this ket,\[ |\alpha\rangle\to |\alpha'\rangle=e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}} |\alpha\rangle \tag{4.8.6}\]

Putting in a complete set of states, and using the standard notation for matrix elements of the rotation operator,

\[\begin{align} |\alpha'\rangle &=e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}} |\alpha\rangle \\[5pt] &=\sum_{m',m} \alpha_m |m'\rangle \langle m'|e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}|m\rangle \\[5pt] &= \sum_{m',m} D^{(j)}_{m'm} (R(\vec{\theta})) \alpha_m |m'\rangle .\tag{4.8.7} \end{align}\]

\(D^{(j)}_{m'm}=\langle m'|e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}|m\rangle\) is standard notation (see the earlier lecture).

So the ket rotation transformation is

\[ \alpha'_{m'}=\sum_m D^{(j)}_{m'm} \alpha_m, \quad \text{or} \quad \alpha'=D\alpha , \tag{4.8.8}\]

with the usual matrix-multiplication rules.

### Rotating a Basis Ket

Now suppose we apply the rotation operator to one of the *basis kets* \(|j,m\rangle\): what is the result? \[ e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}|j,m\rangle =\sum_{m'} |j,m'\rangle \langle j,m'|e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}|j,m\rangle =\sum_{m'} |j,m'\rangle D^{(j)}_{m'm}(R) \tag{4.8.9}\]

Note the *reversal* of *m*, *m*' compared with the operation on the set of component coefficients of the general ket.

(You may be thinking: wait a minute, \(|j,m\rangle\) *is* a ket in the space, so we could expand it as \(\sum_{m''}\alpha_{m''}|j,m''\rangle\) with \(\alpha_{m''}=\delta_{m''m}\) and apply the component rule (4.8.8):

\[\alpha'_{m'}=\sum_{m''} D^{(j)}_{m'm''} \alpha_{m''}= \sum_{m''} D^{(j)}_{m'm''} \delta_{m''m}=D^{(j)}_{m'm}.\]

Reassuringly, this leads to the same result we just found.)

### Rotating an Operator, Scalar Operators

Just as in the Schrödinger versus Heisenberg formulations, we can either apply the rotation operator to the kets and leave the operators alone, or we can leave the kets alone, and rotate the operators:

\[ A\to e^{\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}Ae^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}}=U^{\dagger}AU \tag{4.8.10}\]

which will yield the same matrix elements, so the same physics.

A *scalar* operator is an operator that is *invariant* under rotations, for example the Hamiltonian of a particle in a spherically symmetric potential. (There are many less trivial examples of scalar operators, such as the dot product of two vector operators, as in a spin-orbit coupling.)

The transformation of an operator under an infinitesimal rotation is given by:

\[ S\to U^{\dagger}(R)SU(R) \]

with

\[U(R)=1-\frac{i\vec{\varepsilon}\cdot\vec{J}}{\hbar} \tag{4.8.11}\]

from which

\[ S \to S+\left[\frac{i\vec{\varepsilon}\cdot\vec{J}}{\hbar}, S\right]. \tag{4.8.12}\]

It follows that a scalar operator \(S\), which does not change at all, must commute with all the components of the angular momentum operator, and hence must have a common set of eigenkets with, say, \(\vec{J}^2\) and \(J_z\).
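As a concrete numerical illustration (not part of the original notes; \(\hbar=1\), spin 1, standard matrices in the basis \(m=1,0,-1\)), \(\vec{J}^2\) is itself a scalar operator and should commute with every component of \(\vec{J}\):

```python
import numpy as np

sq2 = np.sqrt(2)
# Spin-1 ladder operator: J+|1,m> = sqrt(2)|1,m+1> for m = 0, -1
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2j)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz        # equals j(j+1) = 2 times the identity
for Ji in (Jx, Jy, Jz):
    assert np.allclose(J2 @ Ji - Ji @ J2, 0)   # scalar commutes with all J_i
```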

### Vector Operators: Definition and Commutation Properties

A quantum mechanical vector operator \(\vec{V}\) is *defined* by requiring that the expectation values of its three components in any state *transform like the components of a classical vector* under rotation.

It follows from this that the operator itself must transform vectorially,

\[ V'_i = U^{\dagger}(R)V_i U(R)=\sum R_{ij}V_j \tag{4.8.13}\]

To see what this implies, it is easiest to look at a simple case. For an infinitesimal rotation about the \(z\)-axis,

\[ R_z(\varepsilon)=\begin{pmatrix} 1&-\varepsilon&0 \\ \varepsilon&1&0 \\ 0&0&1 \end{pmatrix} \tag{4.8.14}\]

the vector transforms

\[ \begin{pmatrix} V_x \\ V_y \\ V_z \end{pmatrix} \to \begin{pmatrix} 1&-\varepsilon&0 \\ \varepsilon&1&0 \\ 0&0&1 \end{pmatrix} \begin{pmatrix} V_x \\ V_y \\ V_z \end{pmatrix} = \begin{pmatrix} V_x-\varepsilon V_y \\ V_y+\varepsilon V_x \\ V_z \end{pmatrix} \tag{4.8.15}\]

The unitary Hilbert space operator \(U\) corresponding to this rotation is \(U(R_z(\varepsilon))=1-\frac{i\varepsilon J_z}{\hbar}\), so

\[\begin{align} U^{\dagger}V_i U &= (1+i\varepsilon J_z/\hbar)V_i(1-i\varepsilon J_z/\hbar) \\[5pt] &=V_i+\frac{i\varepsilon}{\hbar}[J_z,V_i] \tag{4.8.16} \end{align}\]

The requirement that the two transformations above, the infinitesimal classical rotation generated by \(R_z(\varepsilon)\) and the infinitesimal unitary transformation \(U^{\dagger}(R)V_i U(R)\) , are in fact the same thing yields the commutation relations of a vector operator with angular momentum:

\[ i[J_z,V_x]=-\hbar V_y \\ i[J_z,V_y]=+\hbar V_x \tag{4.8.17}\]

From this result and its cyclic equivalents, the components of *any* vector operator \(\vec{V}\) must satisfy:

\[ [V_i,J_j]=i\varepsilon_{ijk}\hbar V_k . \tag{4.8.18}\]

*Exercise*: verify that the components of \(\vec{x}\), \(\vec{L}\), \(\vec{S}\) do in fact satisfy these commutation relations.
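A quick numerical instance of the exercise (an added sketch, \(\hbar=1\)): \(\vec{J}\) is itself a vector operator, so its spin-1 matrices must satisfy \([V_i,J_j]=i\varepsilon_{ijk}\hbar V_k\) with \(\vec{V}=\vec{J}\):

```python
import numpy as np

sq2 = np.sqrt(2)
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx, Jy = (Jp + Jp.conj().T) / 2, (Jp - Jp.conj().T) / (2j)

J = [Jx, Jy, Jz]
# Totally antisymmetric epsilon_ijk
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    for l in range(3):
        lhs = J[i] @ J[l] - J[l] @ J[i]                     # [J_i, J_l]
        rhs = 1j * sum(eps[i, l, k] * J[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```

Checking \(\vec{x}\) against \(\vec{L}\) works the same way in a truncated position basis, though it needs more setup than fits here.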

(*Note*: Confusingly, there is a slightly different situation in which we need to rotate an operator, and it gives an opposite result. Suppose an operator \(T\) acts on a ket \(|\alpha\rangle\) to give the ket \(|\alpha'\rangle=T|\alpha\rangle\). For kets \(|\alpha\rangle\) and \(|\alpha'\rangle\) to go to \(U|\alpha\rangle\) and \(U|\alpha'\rangle\) respectively under a rotation \(U\), \(T\) itself must transform as \(T \to UTU^{\dagger}\) (recall \(U^{\dagger}=U^{-1}\) ). The point is that this is a Schrödinger rather than a Heisenberg-type transformation: we’re rotating the kets, not the operators.)

*Warning*: Does a vector operator transform like the components of a vector or like the basis kets of the space? You’ll see it written both ways, so watch out!

We’ve already defined it as transforming like the components:

\[ V'_i = U^{\dagger}(R)V_i U(R)=\sum R_{ij}V_j \tag{4.8.13}\]

but if we now take the *opposite* rotation, the unitary matrix \(U(R)\) is replaced by its inverse \(U^{\dagger}(R)\) and *vice versa.* Remember also that the ordinary spatial rotation matrix \(R\) is orthogonal, so its inverse is its transpose, and the above equation is equivalent to

\[ V'_i = U(R)V_i U^{\dagger}(R)=\sum R_{ji}V_j . \tag{4.8.19}\]

*This* definition of a vector operator is that its elements transform just as the basis kets of the space do.

This second form of the equation is the one in common use.

### Cartesian Tensor Operators

From the definition given earlier, under rotation the elements of a rank two Cartesian tensor transform as:

\[ T_{ij}\to T'_{ij}=\sum_{i'} \sum_{j'} R_{ii'}R_{jj'}T_{i'j'}, \tag{4.8.20}\]

where \(R_{ij}\) is the rotation matrix for a vector.

It is illuminating to consider a particular example of a second-rank tensor, \(T_{ij}=U_iV_j\), where \(\vec{U}\) and \(\vec{V}\) are ordinary three-dimensional vectors.

The problem with this tensor is that it is *reducible*, using the word in the same sense as in our discussion of group representations in discussing addition of angular momenta. That is to say, combinations of the elements can be arranged in sets such that rotations operate only within these sets. This is made evident by writing:

\[ U_iV_j=\frac{\vec{U}\cdot\vec{V}}{3}\delta_{ij}+\frac{(U_iV_j-U_jV_i)}{2}+\left( \frac{U_iV_j+U_jV_i}{2}-\frac{\vec{U}\cdot\vec{V}}{3}\delta_{ij} \right). \tag{4.8.21}\]

The first term, the dot product of the two vectors, is clearly a *scalar* under rotation, the second term, which is an antisymmetric tensor has three independent components which are the *vector *components of the vector product \(\vec{U}\times\vec{V}\), and the third term is a *symmetric traceless tensor*, which has five independent components. Altogether, then, there are \(1+3+5=9\) components, as required.
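The decomposition (4.8.21) is easy to verify numerically. The following sketch (not part of the original notes; the vectors are arbitrary test values) checks that the three pieces reassemble \(T_{ij}\), that the third piece is traceless, and that the antisymmetric piece carries the components of \(\vec{U}\times\vec{V}\):

```python
import numpy as np

U = np.array([1.0, -2.0, 0.5])
V = np.array([3.0, 1.0, -1.0])
T = np.outer(U, V)                                   # T_ij = U_i V_j

scalar = (U @ V) / 3 * np.eye(3)                     # (U.V/3) delta_ij
antisym = (T - T.T) / 2                              # (U_i V_j - U_j V_i)/2
symtrace0 = (T + T.T) / 2 - (U @ V) / 3 * np.eye(3)  # symmetric traceless part

assert np.allclose(scalar + antisym + symtrace0, T)  # pieces sum to T
assert np.isclose(np.trace(symtrace0), 0)            # 5 independent components
# The 3 independent antisymmetric components are (half) the cross product U x V
assert np.isclose(antisym[0, 1], np.cross(U, V)[2] / 2)
```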

### Spherical Tensors

Notice the numbers of elements of these irreducible subspaces: \(1, 3, 5\). These are exactly the numbers of elements of the angular momentum representations for \(j = 0, 1, 2\)!

This is of course no coincidence: as we shall make more explicit below, a three-dimensional vector is mathematically isomorphic to a quantum spin one, so the tensor we have written is a direct product of two spins one. Exactly as we argued in discussing addition of angular momenta, it is therefore a reducible representation of the rotation group, a sum of representations corresponding to the possible total angular momenta from adding two spins one, that is, \(j=0, 1, 2\).

As discussed earlier, the matrix elements of the rotation operator \[ U(R(\vec{\theta}))=e^{-\frac{i\vec{\theta}\cdot \vec{J}}{\hbar}} \tag{4.8.22}\]

within a definite \(j\) subspace are written \[ D^j_{m'm}(R(\vec{\theta}))=\langle j,m'|e^{-\frac{i\vec{\theta}\cdot \vec{J}}{\hbar}} |j,m\rangle \tag{4.8.23}\]

so under rotation operator a basis state \(|j,m\rangle\) transforms as: \[ e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}} |j,m\rangle =\sum_{m'}|j,m'\rangle \langle j,m'|e^{-\frac{i\vec{\theta}\cdot\vec{J}}{\hbar}} |j,m\rangle =\sum_{m'}|j,m'\rangle D^{(j)}_{m'm}(R). \tag{4.8.24}\]

The essential point is that these irreducible subspaces, into which Cartesian tensors decompose under rotation (generalizing from our one example), form a more natural basis set of tensors for problems with rotational symmetries.

Definition: spherical tensor

We define a *spherical tensor* of rank \(k\) as a set of \(2k+1\) operators \(T^q_k\) , \(q=k,k-1,\dots,-k\), such that under rotation they transform among themselves with exactly the same matrix of coefficients as that for the \(2j+1\) angular momentum eigenkets \(|m\rangle\) for \(k=j\), that is,\[ U(R)T^q_k U^{\dagger}(R)=\sum_{q'}D^{(k)}_{q'q}T^{q'}_k . \tag{4.8.25}\]

To see the properties of these spherical tensors, it is useful to evaluate the above equation for infinitesimal rotations, for which \[ D^{(k)}_{q'q}(\vec{\varepsilon})=\langle k,q'|I-i\vec{\varepsilon}\cdot \vec{J}/\hbar |k,q\rangle =\delta_{q'q}-i\vec{\varepsilon}\cdot \langle k,q'|\vec{J}/\hbar |k,q\rangle .\tag{4.8.26}\]

(The matrix element \(\langle k,q'|\vec{J}/\hbar |k,q\rangle\) is just the familiar Clebsch-Gordan coefficient in changed notation: the rank \(k\) corresponds to the usual \(j\), and \(q\) to the “magnetic” quantum number \(m\).)

Specifically, consider an infinitesimal rotation \(\vec{\varepsilon}\cdot \vec{J}=\varepsilon J_+\). (Strictly speaking, this is not a real rotation, but the formalism doesn’t care, and the result we derive can be confirmed by rotation about the \(x\) and \(y\) directions and adding appropriate terms.)

The equation is \[ (1-i\varepsilon J_+/\hbar )T^q_k (1+i\varepsilon J_+/\hbar )=\sum_{q'}(\delta_{q'q}-i\varepsilon \langle k,q'|J_+/\hbar |k,q\rangle )T^{q'}_k \tag{4.8.27}\]

and equating terms linear in \(\varepsilon\), \[ [J_{\pm} ,T^q_k ]=\hbar \sqrt{(k\mp q)(k\pm q+1)}\, T^{q\pm 1}_k \\ [J_z,T^q_k ]=\hbar q T^q_k . \tag{4.8.28}\]

Sakurai observes that this set of commutation relations could be taken as the *definition* of the spherical tensors.
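These commutation relations can be checked numerically. In the sketch below (an added illustration, \(\hbar=1\), spin 1) we take the spherical components of \(\vec{J}\) itself as a rank-1 spherical tensor, \(T^1_1=-J_+/\sqrt{2}\), \(T^0_1=J_z\), \(T^{-1}_1=J_-/\sqrt{2}\) (cf. eq. 4.8.31 below):

```python
import numpy as np

sq2 = np.sqrt(2)
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Spherical components of J as a rank-1 spherical tensor
T = {1: -Jp / sq2, 0: Jz, -1: Jm / sq2}
k = 1

def comm(A, B):
    return A @ B - B @ A

for q in (-1, 0, 1):
    # [J_z, T^q] = q T^q
    assert np.allclose(comm(Jz, T[q]), q * T[q])
    # [J_+, T^q] = sqrt((k - q)(k + q + 1)) T^{q+1}
    if q < k:
        assert np.allclose(comm(Jp, T[q]),
                           np.sqrt((k - q) * (k + q + 1)) * T[q + 1])
    # [J_-, T^q] = sqrt((k + q)(k - q + 1)) T^{q-1}
    if q > -k:
        assert np.allclose(comm(Jm, T[q]),
                           np.sqrt((k + q) * (k - q + 1)) * T[q - 1])
```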

*Notational note*: we have followed Shankar here in having the rank \(k\) as a subscript and the “magnetic” quantum number \(q\) as a superscript, the same convention used for the spherical harmonics (but not for the \(D\) matrices!). Sakurai, Baym and others have the rank above, usually in parentheses, and the magnetic number below. Fortunately, all use \(k\) for rank and \(q\) for magnetic quantum number.

### A Spherical Vector

The \(j=1\) angular momentum eigenkets are just the familiar spherical harmonics \[ Y^0_1=\sqrt{\frac{3}{4\pi}}\frac{z}{r},\; Y^{\pm 1}_1=\mp \sqrt{\frac{3}{4\pi}}\frac{x\pm iy}{\sqrt{2}r}. \tag{4.8.29}\]

The rotation operator will transform \((x,y,z)\) as an ordinary vector in three-space, and this is evidently equivalent to \[ |j=1,m\rangle \to \sum_{m'}|j=1,m'\rangle D^{(j)}_{m'm}(R) \tag{4.8.30}\]

It follows that the spherical representation of a three vector \((V_x, V_y, V_z)\) has the form:

\[ T^{\pm 1}_1=\mp \frac{V_x\pm iV_y}{\sqrt{2}}=V^{\pm 1}_1,\; T^0_1=V_z=V^0_1. \tag{4.8.31}\]

In line with spherical tensor notation, the components \((T^1_1, T^0_1, T^{-1}_1)\) are denoted \(T^q_1\).
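One payoff of the spherical basis is immediate: a rotation about \(z\) acts diagonally on the spherical components, each one just picking up a phase. This short sketch (an added check, with an arbitrary test vector) verifies \(V^q \to e^{iq\theta}V^q\) for a classical vector under \(R_z(\theta)\):

```python
import numpy as np

def spherical(v):
    """Spherical components (4.8.31) of a Cartesian 3-vector."""
    vx, vy, vz = v
    return {1: -(vx + 1j * vy) / np.sqrt(2),
            0: vz,
            -1: (vx - 1j * vy) / np.sqrt(2)}

theta = 0.9
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
V = np.array([1.0, -0.3, 2.0])

s, s_rot = spherical(V), spherical(Rz @ V)
for q in (-1, 0, 1):
    # Rotation about z is diagonal in the spherical basis: a phase e^{iq theta}
    assert np.isclose(s_rot[q], np.exp(1j * q * theta) * s[q])
```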

### Matrix Elements of Tensor Operators between Angular Momentum Eigenkets

By definition, an irreducible tensor operator \(T^q_k\) transforms under rotation like an angular momentum eigenket \(|k,q\rangle\). Therefore, rotating the ket \(T^q_k |j,m\rangle\), \[U T^q_k|j,m\rangle=U T^q_k U^{-1}U|j,m\rangle =\sum_{q'}D^{(k)}_{q'q}T^{q'}_k \sum_{m'}D^{(j)}_{m'm}|j,m'\rangle . \tag{4.8.32}\]

The product of the two \(D\) matrices appearing is precisely the set of coefficients to rotate *the direct product of eigenkets* \(|k,q\rangle \otimes |j,m\rangle\) where \(|k,q\rangle\) is the angular momentum eigenket having \(j=k, m=q\).

We have met this direct product of two angular momentum eigenkets before: this is just a system having two angular momenta, such as orbital plus spin angular momenta. So we see that \(T^q_k\) acting on \(|j,m\rangle\) generates a state having total angular momentum the sum of \((k,q)\) and \((j,m)\).

To link up (more or less) with Shankar’s notation: our direct product state \(|k,q\rangle \otimes |j,m\rangle\) is the same as \(|k,q;j,m\rangle\) in the notation \(|j_1,m_1;j_2,m_2\rangle\) for a product state of two angular momenta (possibly including spins). Such a state can be written as a sum over states of the form \(|j_{tot},m_{tot};j_1,j_2\rangle\) where this denotes a state of total angular momentum \(j_{tot}\), \(z\)- direction component \(m_{tot}\), made up of two spins having total angular momentum \(j_1,j_2\) respectively.

This is the standard Clebsch-Gordan sum: \[ |j_1,m_1;j_2,m_2\rangle =\sum_{j_{tot}=|j_1-j_2|}^{j_1+j_2} \sum_{m_{tot}=-j_{tot}}^{j_{tot}} |j_{tot},m_{tot};j_1,j_2\rangle \langle j_{tot},m_{tot};j_1,j_2|j_1,m_1;j_2,m_2\rangle . \tag{4.8.33}\]

The summed terms give a unit operator within this \((2j_1+1)(2j_2+1)\) dimensional space, and the term \(\langle j_{tot},m_{tot};j_1,j_2|j_1,m_1;j_2,m_2\rangle\) is a Clebsch-Gordan coefficient. The only nonzero coefficients have \(m_{tot}=m_1+m_2\), and \(j_{tot}\) restricted as noted, so for given \(m_1, m_2\) we just set \(m_{tot}=m_1+m_2\) rather than summing over \(m_{tot}\), and the sum over \(j_{tot}\) begins at \(|m_{tot}|\).

Translating into our \(|k,q\rangle \otimes |j,m\rangle\) notation, and cleaning up, \[ |k,q;j,m\rangle =\sum_{j_{tot}=|q+m|}^{k+j} |j_{tot},q+m;k,j\rangle \langle j_{tot},q+m;k,j|k,q;j,m\rangle . \tag{4.8.34}\]

We are now able to evaluate the angular component of the matrix element of a spherical tensor operator between angular momentum eigenkets: we see that it will only be nonzero for \(m_{tot}=m_1+m_2\), and \(j_{tot}\) at least \(|m_{tot}|\).

### The Wigner-Eckart Theorem

At this point, we must bear in mind that these tensor operators are not necessarily just functions of angle. For example, the position operator is a spherical vector multiplied by the radial variable \(r\), and kets specifying atomic eigenstates will include radial quantum numbers as well as angular momentum, so the matrix element of a tensor between two states will have the form \[ \langle \alpha_2,j_2,m_2|T^q_k |\alpha_1,j_1,m_1\rangle , \tag{4.8.35}\]

where the \(j\)’s and \(m\)’s denote the usual angular momentum eigenstates and the \(\alpha\)’s are *nonangular* quantum numbers, such as those for radial states.

The basic point of the Wigner-Eckart theorem is that* the angular dependence of these matrix elements can be factored out, and it is given by the Clebsch-Gordan coefficients*.

Having factored it out, the remaining dependence, which is only on the *total* angular momentum in each of the kets, *not* the relative orientation (and of course on the \(\alpha\)’s), is traditionally written as a bracket with double lines, that is, \[ \langle \alpha_2,j_2,m_2|T^q_k |\alpha_1,j_1,m_1\rangle =\frac{\langle \alpha_2,j_2||T_k||\alpha_1,j_1\rangle}{\sqrt{2j_1+1}}\cdot \langle j_2,m_2|k,q;j_1,m_1\rangle . \tag{4.8.36}\]

The denominator is the conventional normalization of the double-bar matrix element. The proof is given in, for example, Sakurai (page 239) and is not that difficult. The basic strategy is to put the defining identities \[ [J_{\pm} ,T^q_k ]=\hbar \sqrt{(k\mp q)(k\pm q+1)}\, T^{q\pm 1}_k \\ [J_z,T^q_k ]=\hbar q T^q_k \tag{4.8.37}\]

between \(|\alpha ,j,m\rangle\) bras and kets, then get rid of the \(J_{\pm}\) and \(J_z\) by having them operate on the bra or ket. This generates a series of linear equations for \(\langle \alpha_2,j_2,m_2|T^q_k |\alpha_1,j_1,m_1\rangle\) matrix elements with \(m\) variables differing by one, and in fact this set of linear equations is *identical* to the set that generates the Clebsch-Gordan coefficients, so we must conclude that these spherical tensor matrix elements, ranging over possible \(m\) and \(j\) values, are exactly proportional to the Clebsch-Gordan coefficients - and that is the theorem.
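A hedged numerical illustration of the theorem (not in the original notes, \(\hbar=1\)): on two coupled spin-\(\frac{1}{2}\)s, restrict the vector operators \(\vec{S}_1\) and \(\vec{J}=\vec{S}_1+\vec{S}_2\) to the triplet (\(j=1\)) block. Since both are rank-1 spherical tensors, Wigner-Eckart says their matrix elements within that block differ only by the ratio of reduced matrix elements, so the blocks are proportional:

```python
import numpy as np

# Pauli/2 spin matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

S1 = [np.kron(s, I2) for s in (sx, sy, sz)]   # spin 1 acts on first factor
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]   # spin 2 acts on second factor
J = [a + b for a, b in zip(S1, S2)]

# Triplet basis |1,1>, |1,0>, |1,-1> in the product basis (uu, ud, du, dd)
up_up = np.array([1, 0, 0, 0], dtype=complex)
sym = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
dn_dn = np.array([0, 0, 0, 1], dtype=complex)
P = np.column_stack([up_up, sym, dn_dn])      # 4x3 isometry onto the triplet

for a, b in zip(S1, J):
    A = P.conj().T @ a @ P                    # S1 component within the triplet
    B = P.conj().T @ b @ P                    # same component of J
    assert np.allclose(2 * A, B)              # proportional; the ratio is 1/2 here
```

The ratio \(\tfrac{1}{2}\) follows from \(\langle \vec{J}\cdot\vec{S}_1\rangle/\langle \vec{J}^2\rangle\) in the triplet, the "projection theorem" form of the result, which is also the key to Shankar's problem below.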

**A Few Hints for Shankar’s problem 15.3.3**: that first matrix element comes from adding a spin \(j\) to a spin 1, writing the usual maximum-\(m\) state, applying the lowering operator to both sides to get the total angular momentum \(j+1\), \(m=j\) state, then finding the same-\(m\) state orthogonal to that, which corresponds to total angular momentum \(j\) (instead of \(j+1\)).

For the operator \(J\), the Wigner-Eckart matrix element simplifies because \(J\) cannot affect \(\alpha\) , and also it commutes with \(J^2\), so cannot change the total angular momentum.

So, in the Wigner-Eckart equation, replace \(T^q_k\) on the left-hand side by \(J^0_1\) , which is just \(J_z\). The result of (1) should follow.

(2) First note that a scalar operator cannot change \(m\). Since \(c\) is independent of \(A\) we can take \(A=J\) to find \(c\).