
1.4: Projection Operators and Tensor Products


    We can combine two linear vector spaces \(\mathscr{U}\) and \(\mathscr{V}\) into a new linear vector space \(\mathscr{W}=\mathscr{U} \oplus \mathscr{V}\). The symbol ⊕ is called the direct sum. The dimension of \(\mathscr{W}\) is the sum of the dimensions of \(\mathscr{U}\) and \(\mathscr{V}\):

    \[\operatorname{dim} \mathscr{W}=\operatorname{dim} \mathscr{U}+\operatorname{dim} \mathscr{V}\tag{1.34}\]

    A vector in \(\mathscr{W}\) can be written as

    \[|\Psi\rangle_{\mathscr{W}}=|\psi\rangle_{\mathscr{U}}+|\phi\rangle_{\mathscr{V}},\tag{1.35}\]

    where \(|\psi\rangle_{\mathscr{U}}\) and \(|\phi\rangle_{\mathscr{V}}\) are typically not normalized (i.e., they are not unit vectors). The spaces \(\mathscr{U}\) and \(\mathscr{V}\) are so-called subspaces of \(\mathscr{W}\).
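As a minimal numerical sketch (using NumPy, with arbitrarily chosen example vectors), a vector in the direct sum can be represented by embedding each component into the larger space and adding them, as in Eq. (1.35):

```python
import numpy as np

# Direct sum W = U ⊕ V: here U is 2-dimensional and V is 3-dimensional,
# so dim W = 2 + 3 = 5, as in Eq. (1.34). The vectors are arbitrary examples.
psi = np.array([1.0, 2.0])          # |psi> in U
phi = np.array([0.5, -1.0, 3.0])    # |phi> in V

# Embed each vector into W (pad with zeros) and add, as in Eq. (1.35)
Psi = np.concatenate([psi, np.zeros(3)]) + np.concatenate([np.zeros(2), phi])
print(Psi.shape)  # the dimension of W is 5
```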

    As an example, consider the three-dimensional Euclidean space spanned by the Cartesian axes \(x\), \(y\), and \(z\). The \(x y\)-plane is a two-dimensional subspace of the full space, and the \(z\)-axis is a one-dimensional subspace. Any three-dimensional form can be projected onto the \(x y\)-plane by setting the \(z\) component to zero. Similarly, we can project onto the \(z\)-axis by setting the \(x\) and \(y\) coordinates to zero. A projector is therefore associated with a subspace. It acts on a vector in the full space, and forces all components to zero, except those of the subspace it projects onto.

    The formal definition of a projector \(P_{\mathscr{U}}\) on \(\mathscr{U}\) is given by

    \[P_{\mathscr{U}}|\Psi\rangle_{\mathscr{W}}=|\psi\rangle_{\mathscr{U}}\tag{1.36}\]

    This is equivalent to requiring that \(P_{\mathscr{U}}^{2}=P_{\mathscr{U}}\), i.e., that \(P_{\mathscr{U}}\) is idempotent. One-dimensional projectors can be written as

    \[P_{j}=\left|\phi_{j}\right\rangle\left\langle\phi_{j}\right|\tag{1.37}\]
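A one-dimensional projector of this form can be checked numerically; the following NumPy sketch (with an arbitrarily chosen unit vector) verifies idempotence and the projection property:

```python
import numpy as np

# Build P = |phi><phi| from a normalized vector (an arbitrary example choice)
phi = np.array([1.0, 2.0, 2.0])
phi = phi / np.linalg.norm(phi)        # ensure <phi|phi> = 1

P = np.outer(phi, phi.conj())          # the projector |phi><phi|

# Idempotence: P^2 = P
print(np.allclose(P @ P, P))           # True

# P maps any vector to its component along |phi>
psi = np.array([3.0, -1.0, 0.5])
print(np.allclose(P @ psi, phi * np.dot(phi.conj(), psi)))  # True
```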

    Two projectors \(P_{1}\) and \(P_{2}\) are orthogonal if \(P_{1} P_{2}=0\). If \(P_{1} P_{2}=0\), then \(P_{1}+P_{2}\) is another projector:

    \[\left(P_{1}+P_{2}\right)^{2}=P_{1}^{2}+P_{1} P_{2}+P_{2} P_{1}+P_{2}^{2}=P_{1}^{2}+P_{2}^{2}=P_{1}+P_{2}\tag{1.38}\]

    When \(P_{1}\) and \(P_{2}\) commute but are non-orthogonal (i.e., they overlap), the general projector onto their combined subspace is

    \[P_{1+2}=P_{1}+P_{2}-P_{1} P_{2}\tag{1.39}\]

    (Prove this.) The orthocomplement of \(P\) is \(\mathbb{I}-P\), which is also a projector:

    \[P(\mathbb{I}-P)=P-P^{2}=P-P=0 \quad \text { and } \quad(\mathbb{I}-P)^{2}=\mathbb{I}-2 P+P^{2}=\mathbb{I}-P\tag{1.40}\]
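Equations (1.38) to (1.40) can be verified with the Euclidean example from earlier in this section; the sketch below uses projectors onto coordinate planes and axes in three dimensions:

```python
import numpy as np

I = np.eye(3)
P1 = np.diag([1.0, 1.0, 0.0])   # projector onto the xy-plane
P2 = np.diag([0.0, 1.0, 1.0])   # projector onto the yz-plane (overlaps P1)
Pz = np.diag([0.0, 0.0, 1.0])   # projector onto the z-axis

# Orthogonal projectors: P1 Pz = 0, so P1 + Pz is again a projector, Eq. (1.38)
S = P1 + Pz
print(np.allclose(S @ S, S))                       # True

# Overlapping commuting projectors: Eq. (1.39) gives the combined projector;
# here the xy-plane and yz-plane together span all of R^3, so P12 = I
P12 = P1 + P2 - P1 @ P2
print(np.allclose(P12 @ P12, P12))                 # True

# Orthocomplement: I - P1 is a projector orthogonal to P1, Eq. (1.40)
Q = I - P1
print(np.allclose(P1 @ Q, 0) and np.allclose(Q @ Q, Q))  # True
```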

    Another way to combine two vector spaces \(\mathscr{U}\) and \(\mathscr{V}\) is via the tensor product: \(\mathscr{W}=\mathscr{U} \otimes \mathscr{V}\), where the symbol ⊗ is called the direct product or tensor product. The dimension of the space \(\mathscr{W}\) is then

    \[\operatorname{dim} \mathscr{W}=\operatorname{dim} \mathscr{U} \cdot \operatorname{dim} \mathscr{V}\tag{1.41}\]

    Let \(|\psi\rangle \in \mathscr{U}\) and \(|\phi\rangle \in \mathscr{V}\). Then

    \[|\psi\rangle \otimes|\phi\rangle \in \mathscr{W}=\mathscr{U} \otimes \mathscr{V}\tag{1.42}\]

    If \(|\psi\rangle=\sum_{j} a_{j}\left|\psi_{j}\right\rangle\) and \(|\phi\rangle=\sum_{j} b_{j}\left|\phi_{j}\right\rangle\), then the tensor product of these vectors can be written as

    \[|\psi\rangle \otimes|\phi\rangle=\sum_{j k} a_{j} b_{k}\left|\psi_{j}\right\rangle \otimes\left|\phi_{k}\right\rangle=\sum_{j k} a_{j} b_{k}\left|\psi_{j}\right\rangle\left|\phi_{k}\right\rangle=\sum_{j k} a_{j} b_{k}\left|\psi_{j}, \phi_{k}\right\rangle,\tag{1.43}\]

    where we introduced convenient abbreviations for the tensor product notation. The inner product of two vectors that are tensor products is

    \[\left(\left\langle\psi_{1}\right| \otimes\left\langle\phi_{1}\right|\right)\left(\left|\psi_{2}\right\rangle \otimes\left|\phi_{2}\right\rangle\right)=\left\langle\psi_{1} \mid \psi_{2}\right\rangle\left\langle\phi_{1} \mid \phi_{2}\right\rangle\tag{1.44}\]
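Numerically, the tensor product of vectors is the Kronecker product, and the factorization of the inner product in Eq. (1.44) can be spot-checked directly (a sketch with arbitrarily chosen real vectors in two 2-dimensional spaces):

```python
import numpy as np

# Tensor products of vectors via the Kronecker product (np.kron)
psi1, phi1 = np.array([1.0, 2.0]), np.array([0.5, -1.0])
psi2, phi2 = np.array([3.0, 1.0]), np.array([2.0, 0.0])

w1 = np.kron(psi1, phi1)   # |psi1> ⊗ |phi1>, a vector in the 4-dimensional W
w2 = np.kron(psi2, phi2)   # |psi2> ⊗ |phi2>

# Eq. (1.44): the inner product in W factorizes into the subspace inner products
lhs = np.vdot(w1, w2)
rhs = np.vdot(psi1, psi2) * np.vdot(phi1, phi2)
print(np.allclose(lhs, rhs))  # True
```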

    Operators also obey the tensor product structure, with

    \[(A \otimes B)|\psi\rangle \otimes|\phi\rangle=(A|\psi\rangle) \otimes(B|\phi\rangle)\tag{1.45}\]

    and

    \[(A \otimes B)(C \otimes D)|\psi\rangle \otimes|\phi\rangle=(A C|\psi\rangle) \otimes(B D|\phi\rangle)\tag{1.46}\]

    General rules for tensor products of operators are

    1. \(A \otimes 0=0\) and \(0 \otimes B=0\),
    2. \(\mathbb{I} \otimes \mathbb{I}=\mathbb{I}\),
    3. \(\left(A_{1}+A_{2}\right) \otimes B=A_{1} \otimes B+A_{2} \otimes B\),
    4. \(a A \otimes b B=(a b) A \otimes B\),
    5. \((A \otimes B)^{-1}=A^{-1} \otimes B^{-1}\),
    6. \((A \otimes B)^{\dagger}=A^{\dagger} \otimes B^{\dagger}\).

    Note that the last rule preserves the order of the operators. In other words, operators always act on their own space. Often, it is understood implicitly which operator acts on which subspace, and we will write \(A \otimes \mathbb{I}=A\) and \(\mathbb{I} \otimes B=B\). Alternatively, we can add subscripts to the operator, e.g., \(A_{\mathscr{U}}\) and \(B_{\mathscr{V}}\).
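These rules can be spot-checked numerically with the Kronecker product; the sketch below uses randomly generated complex matrices (generic, hence invertible) as stand-ins for \(A\), \(B\), \(C\), and \(D\):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random complex 2x2 example matrices
A, B, C, D = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
              for _ in range(4))

# Operators act on their own space: (A ⊗ B)(C ⊗ D) = AC ⊗ BD, cf. Eq. (1.46)
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))  # True

# Rule 5: (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
print(np.allclose(np.linalg.inv(np.kron(A, B)),
                  np.kron(np.linalg.inv(A), np.linalg.inv(B))))  # True

# Rule 6: (A ⊗ B)† = A† ⊗ B†, with the order of the factors preserved
print(np.allclose(np.kron(A, B).conj().T,
                  np.kron(A.conj().T, B.conj().T)))  # True
```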

    As a practical example, consider two two-dimensional operators

    \[A=\left(\begin{array}{ll}
    A_{11} & A_{12} \\
    A_{21} & A_{22}
    \end{array}\right) \quad \text { and } \quad B=\left(\begin{array}{ll}
    B_{11} & B_{12} \\
    B_{21} & B_{22}
    \end{array}\right)\tag{1.47}\]

    with respect to some orthonormal bases \(\left\{\left|a_{1}\right\rangle,\left|a_{2}\right\rangle\right\}\) and \(\left\{\left|b_{1}\right\rangle,\left|b_{2}\right\rangle\right\}\) for \(A\) and \(B\), respectively (not necessarily eigenbases). The question is now: what is the matrix representation of \(A \otimes B\)? Since the dimension of the new vector space is the product of the dimensions of the two vector spaces, we have \(\operatorname{dim} \mathscr{W}=2 \cdot 2=4\). A natural basis for \(A \otimes B\) is then given by \(\left\{\left|a_{j}, b_{k}\right\rangle\right\}_{j k}\), with \(j\), \(k=1,2\), or

    \[\left|a_{1}\right\rangle\left|b_{1}\right\rangle, \quad\left|a_{1}\right\rangle\left|b_{2}\right\rangle, \quad\left|a_{2}\right\rangle\left|b_{1}\right\rangle, \quad\left|a_{2}\right\rangle\left|b_{2}\right\rangle\tag{1.48}\]

    We can construct the matrix representation of \(A \otimes B\) by applying this operator to the basis vectors in Eq. (1.48), using

    \[A\left|a_{j}\right\rangle=A_{1 j}\left|a_{1}\right\rangle+A_{2 j}\left|a_{2}\right\rangle \quad \text { and } \quad B\left|b_{k}\right\rangle=B_{1 k}\left|b_{1}\right\rangle+B_{2 k}\left|b_{2}\right\rangle\tag{1.49}\]

    which leads to

    \[\begin{aligned}
    A \otimes B\left|a_{1}, b_{1}\right\rangle &=\left(A_{11}\left|a_{1}\right\rangle+A_{21}\left|a_{2}\right\rangle\right)\left(B_{11}\left|b_{1}\right\rangle+B_{21}\left|b_{2}\right\rangle\right) \\
    A \otimes B\left|a_{1}, b_{2}\right\rangle &=\left(A_{11}\left|a_{1}\right\rangle+A_{21}\left|a_{2}\right\rangle\right)\left(B_{12}\left|b_{1}\right\rangle+B_{22}\left|b_{2}\right\rangle\right) \\
    A \otimes B\left|a_{2}, b_{1}\right\rangle &=\left(A_{12}\left|a_{1}\right\rangle+A_{22}\left|a_{2}\right\rangle\right)\left(B_{11}\left|b_{1}\right\rangle+B_{21}\left|b_{2}\right\rangle\right) \\
    A \otimes B\left|a_{2}, b_{2}\right\rangle &=\left(A_{12}\left|a_{1}\right\rangle+A_{22}\left|a_{2}\right\rangle\right)\left(B_{12}\left|b_{1}\right\rangle+B_{22}\left|b_{2}\right\rangle\right)
    \end{aligned}\tag{1.50}\]

    Looking at the first line of Eq. (1.50), the basis vector \(\left|a_{1}, b_{1}\right\rangle\) is mapped to a superposition of all four basis vectors:

    \[A \otimes B\left|a_{1}, b_{1}\right\rangle=A_{11} B_{11}\left|a_{1}, b_{1}\right\rangle+A_{11} B_{21}\left|a_{1}, b_{2}\right\rangle+A_{21} B_{11}\left|a_{2}, b_{1}\right\rangle+A_{21} B_{21}\left|a_{2}, b_{2}\right\rangle\tag{1.51}\]
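Equation (1.51) fixes the first column of the matrix of \(A \otimes B\) in the basis of Eq. (1.48). NumPy's `np.kron` uses this same basis ordering, so the coefficients can be checked directly (with arbitrary example entries; note that NumPy indexing is 0-based, so `A[0, 0]` corresponds to \(A_{11}\)):

```python
import numpy as np

# Arbitrary example matrices in the bases {|a1>, |a2>} and {|b1>, |b2>}
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 5.0], [6.0, 7.0]])

# First column of A ⊗ B: the image of |a1, b1>, with coefficients as in Eq. (1.51)
col = np.kron(A, B)[:, 0]
expected = np.array([A[0, 0] * B[0, 0],   # A_11 B_11
                     A[0, 0] * B[1, 0],   # A_11 B_21
                     A[1, 0] * B[0, 0],   # A_21 B_11
                     A[1, 0] * B[1, 0]])  # A_21 B_21
print(np.allclose(col, expected))  # True
```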

    Combining this into matrix form leads to

    \[A \otimes B=\left(\begin{array}{llll}
    A_{11} B_{11} & A_{11} B_{12} & A_{12} B_{11} & A_{12} B_{12} \\
    A_{11} B_{21} & A_{11} B_{22} & A_{12} B_{21} & A_{12} B_{22} \\
    A_{21} B_{11} & A_{21} B_{12} & A_{22} B_{11} & A_{22} B_{12} \\
    A_{21} B_{21} & A_{21} B_{22} & A_{22} B_{21} & A_{22} B_{22}
    \end{array}\right)=\left(\begin{array}{ll}
    A_{11} B & A_{12} B \\
    A_{21} B & A_{22} B
    \end{array}\right)\tag{1.52}\]

    Recall that this matrix representation depends on the bases we have chosen. In particular, \(A \otimes B\) may be diagonal in some other basis.
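The block structure in Eq. (1.52) is exactly the convention of the Kronecker product, which the following sketch (with arbitrary example entries) verifies:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary example entries
B = np.array([[0.0, 5.0], [6.0, 7.0]])

# Eq. (1.52): A ⊗ B as a 2x2 array of blocks A_jk * B
block = np.block([[A[0, 0] * B, A[0, 1] * B],
                  [A[1, 0] * B, A[1, 1] * B]])
print(np.allclose(np.kron(A, B), block))  # True
```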


    This page titled 1.4: Projection Operators and Tensor Products is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Pieter Kok via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.