
3.7: Entanglement Entropy


    Previously, we said that a multi-particle system is entangled if the individual particles lack definite quantum states. It would be nice to make this statement more precise, and in fact physicists have come up with several different quantitative measures of entanglement. In this section, we will describe the most common measure, entanglement entropy, which is closely related to the entropy concept from thermodynamics, statistical mechanics, and information theory.

    We have seen from the previous section that if a subsystem \(A\) is (possibly) entangled with some other subsystem \(B\), the information required to calculate all partial measurement outcomes on \(A\) is stored within a reduced density operator \(\hat{\rho}_A\). We can use this to define a quantity called the entanglement entropy of \(A\):

    \[S_{A} = - k_b \, \mathrm{Tr}_A \Big\{ \hat{\rho}_A\, \ln\!\big[\hat{\rho}_A\big]\Big\}. \label{entropy}\]

    In this formula, \(\ln[\cdots]\) denotes the logarithm of an operator, which is the inverse of the exponential: \(\ln(\hat{P}) = \hat{Q} \Rightarrow \exp(\hat{Q}) = \hat{P}\). The prefactor \(k_b\) is Boltzmann’s constant, and ensures that \(S_A\) has the same units as thermodynamic entropy.
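    Equation \(\eqref{entropy}\) is easiest to evaluate in the eigenbasis of \(\hat{\rho}_A\), where the operator logarithm is diagonal. The following NumPy sketch computes the entropy this way (the helper name is our own, and entropies are reported in units of \(k_b\), i.e., with the prefactor dropped):

```python
import numpy as np

def entanglement_entropy(rho, tol=1e-12):
    """Return -Tr[rho ln(rho)] in units of k_b (illustrative helper).

    The operator logarithm is evaluated in the eigenbasis of rho, where
    ln(rho) is diagonal. Zero eigenvalues contribute nothing, since
    x ln(x) -> 0 as x -> 0+.
    """
    p = np.linalg.eigvalsh(rho)   # eigenvalues of the Hermitian operator rho
    p = p[p > tol]                # discard (numerically) zero eigenvalues
    return float(-np.sum(p * np.log(p)))

# A maximally mixed qubit, rho = I/2, gives S = ln(2):
rho = np.eye(2) / 2
print(entanglement_entropy(rho))   # 0.6931... = ln(2)
```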

    The definition of the entanglement entropy is based on an analogy with the entropy concept from classical thermodynamics, statistical mechanics, and information theory. In those classical contexts, entropy is a quantitative measure of uncertainty (i.e., lack of information) about a system’s underlying microscopic state, or “microstate”. Suppose a system has \(W\) possible microstates that occur with probabilities \(\{p_1, p_2, \dots, p_W\}\), satisfying \(\sum_i p_i = 1\). Then we define the classical entropy

    \[\label{eq:2}S_{\mathrm{cl.}} = - k_b \sum_{i=1}^W p_i \ln(p_i).\]

    In a situation of complete certainty where the system is known to be in a specific microstate \(k\) (\(p_i = \delta_{ik}\)), the formula gives \(S_{\mathrm{cl.}} = 0\). (Note that \(x \ln(x)\rightarrow 0\) as \(x\rightarrow 0\)). In a situation of complete uncertainty where all microstates are equally probable (\(p_i = 1/W\)), we get \(S_{\mathrm{cl.}} = k_b \ln W\), the entropy of a microcanonical ensemble in statistical mechanics. For any other distribution of probabilities, it can be shown that the entropy lies between these two extremes: \(0 \le S_{\mathrm{cl.}} \le k_b\ln W\). For a review of the properties of entropy, see Appendix C.
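    The two extremes, and an intermediate case, can be checked numerically (a sketch in units of \(k_b\); the function name is our own):

```python
import numpy as np

def classical_entropy(p):
    """Return S = -sum_i p_i ln(p_i) in units of k_b.

    The term 0 ln(0) is treated as 0, consistent with the limit
    x ln(x) -> 0 as x -> 0+.
    """
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                 # drop zero-probability microstates
    return float(-np.sum(nz * np.log(nz)))

W = 4
print(classical_entropy([1, 0, 0, 0]))          # complete certainty: 0
print(classical_entropy(np.full(W, 1 / W)))     # equal probabilities: ln(4)
print(classical_entropy([0.7, 0.1, 0.1, 0.1]))  # in between: 0 < S < ln(4)
```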

    The concept of entanglement entropy aims to quantify the uncertainty arising from a quantum (sub)system’s lack of a definite quantum state, due to it being possibly entangled with another (sub)system. When formulating it, the key issue we need to be careful about is how to extend classical notions of probability to quantum systems. We have seen that when performing a measurement on \(A\) whose possible outcomes are \(\{q_\mu\}\), the probability of getting \(q_\mu\) is \(P_\mu = \langle \mu | \hat{\rho}_A|\mu\rangle\). However, it is problematic to directly substitute these probabilities \(\{P_\mu\}\) into the classical entropy formula, since they are basis-dependent (i.e., the set of probabilities is dependent on the choice of measurement). Equation \(\eqref{entropy}\) bypasses this problem by using the trace, which is basis-independent.

    In the special case where \(\{|\mu\rangle\}\) is the eigenbasis for \(\hat{\rho}_A\), the connection is easier to see. Since \(\hat{\rho}_A\) is Hermitian, positive semidefinite, and has unit trace, its eigenvalues \(\{p_\mu\}\) are real numbers between 0 and 1 that sum to unity, so they can be regarded as probabilities. Then the entanglement entropy is

    \[\begin{align} \begin{aligned} S_A &= -k_b \sum_\mu \langle \mu | \hat{\rho}_A \ln(\hat{\rho}_A) | \mu\rangle \\ &= - k_b \sum_\mu p_\mu \ln(p_\mu). \end{aligned}\end{align}\]

    Therefore, in this particular basis the expression for the entanglement entropy is consistent with the classical definition of entropy, with the eigenvalues of \(\hat{\rho}_A\) serving as the relevant probabilities.

    By analogy with the classical entropy formula (see Appendix C), the entanglement entropy has the following bounds:

    \[0 \le S_A \le k_b\ln(d_A), \label{Sabounds}\]

    where \(d_A\) is the dimension of \(\mathscr{H}_A\).

    The lower bound \(S_A = 0\) holds if and only if system \(A\) is in a pure state (i.e., it is not entangled with any other system). This is because the bound corresponds to a situation where \(\hat{\rho}_A\) has one eigenvalue that is 1, and all the other eigenvalues are 0 (see Appendix C). If we denote the eigenvector associated with the non-vanishing eigenvalue by \(|\psi\rangle\), then the density matrix can be written as \(\hat{\rho}_A = |\psi\rangle\langle\psi|\), which has the form of a pure state.

    As a corollary, if we find that \(S_{A} \ne 0\), then \(\hat{\rho}_A\) cannot be written as a pure state \(|\psi\rangle\langle\psi|\) for any \(|\psi\rangle\), and hence it must describe a mixed state.

    A system is said to be maximally entangled if it saturates the upper bound of \(\eqref{Sabounds}\), \(S_A = k_b \ln(d_A)\). This occurs if and only if the eigenvalues of the density operator are all equal: i.e., \(p_j = 1/d_A\) for all \(j = 1, \dots, d_A\).
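    The bounds \(\eqref{Sabounds}\) and the maximally entangled case can be illustrated for a single qubit (\(d_A = 2\)); this is a sketch in units of \(k_b\), with illustrative diagonal density matrices of our own choosing:

```python
import numpy as np

def entropy(rho, tol=1e-12):
    """Entanglement entropy -Tr[rho ln(rho)] in units of k_b."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > tol]
    return float(-np.sum(p * np.log(p)))

d_A = 2
pure    = np.diag([1.0, 0.0])   # pure state: S_A = 0 (lower bound)
partial = np.diag([0.9, 0.1])   # mixed state: 0 < S_A < ln(2)
maximal = np.eye(d_A) / d_A     # equal eigenvalues 1/d_A: S_A = ln(2)

print(entropy(pure), entropy(partial), entropy(maximal))
```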

    Example \(\PageIndex{1}\)

    Consider the following state of two spin-\(1/2\) particles:

    \[\begin{align} |\psi\rangle = \frac{1}{\sqrt{2}} \Big(|\!+\!z\rangle|\!-\!z\rangle \,-\, |\!-\!z\rangle|\!+\!z\rangle\Big).\end{align}\]

    The density operator for the two-particle system is

    \[\hat{\rho}(\psi) = \frac{1}{2} \Big(|\!+\!z\rangle|\!-\!z\rangle \,-\, |\!-\!z\rangle|\!+\!z\rangle\Big) \Big(\langle+z|\langle-z| \,-\, \langle-z|\langle+z|\Big).\]

    Tracing over system \(B\) (the second slot) yields the reduced density operator

    \[\hat{\rho}_A(\psi) = \frac{1}{2} \Big(|\!+\!z\rangle \langle+z| \,+\, |\!-\!z\rangle \langle-z|\Big).\]

    This can be expressed as a matrix in the \(\{|\!+\!z\rangle,|\!-\!z\rangle\}\) basis:

    \[\hat{\rho}_A(\psi) = \begin{pmatrix}\frac{1}{2} & 0 \\ 0 & \frac{1}{2}\end{pmatrix}.\]

    Now we can use \(\hat{\rho}_A\) to compute the entanglement entropy

    \[S_A = -k_b\mathrm{Tr}\left\{\hat{\rho}_A\ln(\hat{\rho}_A)\right\} = -k_b\mathrm{Tr}\begin{pmatrix}\frac{1}{2}\ln\left(\frac{1}{2}\right) & 0 \\ 0 & \frac{1}{2}\ln\left(\frac{1}{2}\right)\end{pmatrix} = k_b\ln(2).\]

    Hence, the particles are maximally entangled.
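    The steps of this example can be reproduced numerically (a NumPy sketch, in units of \(k_b\), with \(|\!+\!z\rangle\) and \(|\!-\!z\rangle\) represented by the standard basis vectors):

```python
import numpy as np

up = np.array([1.0, 0.0])   # |+z>
dn = np.array([0.0, 1.0])   # |-z>

# The two-particle state |psi> = (|+z>|-z> - |-z>|+z>) / sqrt(2)
psi = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # two-particle density matrix (4x4)

# Partial trace over system B (the second slot): reshape the 4x4 matrix
# into a rank-4 tensor with indices (a, b, a', b'), then contract b = b'.
rho4  = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ajbj->ab', rho4)      # reduced density matrix, diag(1/2, 1/2)

p   = np.linalg.eigvalsh(rho_A)
p   = p[p > 1e-12]
S_A = float(-np.sum(p * np.log(p)))      # S_A = ln(2), the maximal value
print(S_A)
```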


    This page titled 3.7: Entanglement Entropy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Y. D. Chong via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
