
10.1: Quantum measurements


The knowledge base outlined in the previous chapters gives us a sufficient background for a (by necessity, very brief) discussion of quantum measurements. \(^{2}\) Let me start by reminding the reader of the only postulate of quantum theory that relates it to experiment - so far, meaning perfect measurements. In the simplest case, when the system is in a coherent (pure) quantum state, its ket-vector may be represented as a linear superposition \[|\alpha\rangle=\sum_{j} \alpha_{j}\left|a_{j}\right\rangle,\] where \(a_{j}\) are the eigenstates of the operator of an observable \(A\), related to its eigenvalues \(A_{j}\) by Eq. (4.68): \[\hat{A}\left|a_{j}\right\rangle=A_{j}\left|a_{j}\right\rangle .\] In such a state, the outcome of every single measurement of the observable \(A\) may be uncertain, but is restricted to the set of eigenvalues \(A_{j}\), with the \(j^{\text {th }}\) outcome probability equal to \[W_{j}=\left|\alpha_{j}\right|^{2} .\] As was discussed in Chapter 7, the state of the system (or rather of the statistical ensemble of macroscopically similar systems we are using for this particular series of similar experiments) may not be coherent, and hence even more uncertain than the state described by Eq. (1). Hence, the measurement postulate means that even if the system is in this (the least uncertain) state, the measurement outcomes are still probabilistic. \({ }^{3}\)
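As a quick numerical illustration of the measurement postulate (this sketch is not part of the original notes), the following code draws repeated perfect-measurement outcomes from an arbitrarily chosen, normalized superposition and checks that the empirical frequencies approach the probabilities \(W_j=|\alpha_j|^2\) of Eq. (3). The amplitude values are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical amplitudes alpha_j of a three-component superposition, Eq. (1)
alpha = np.array([0.6 + 0.3j, -0.5j, 0.2 + 0.1j])
alpha = alpha / np.linalg.norm(alpha)   # normalize so that sum |alpha_j|^2 = 1

W = np.abs(alpha) ** 2                  # outcome probabilities, Eq. (3)

# Simulate many single-shot measurements on identically prepared systems
outcomes = rng.choice(len(alpha), size=100_000, p=W)
freq = np.bincount(outcomes, minlength=len(alpha)) / outcomes.size

print(np.round(W, 3))
print(np.round(freq, 3))                # frequencies approach W_j
```

With \(10^5\) trials, the statistical scatter of each frequency is of the order of \(10^{-3}\), so the agreement with \(W_j\) is already quite close.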

If we believe that a particular measurement may be done perfectly, and do not worry too much about how exactly this is done, we are subscribing to the mathematical notion of measurement that has been, rather reluctantly, used in these notes up to this point. However, actual (physical) measurements are always imperfect, first of all because of the huge gap between the energy-time scale \(\hbar \sim 10^{-34} \mathrm{~J} \cdot \mathrm{s}\) of quantum phenomena in "microscopic" systems such as atoms, and the "macroscopic" scale of direct human perception, so that the role of the instruments bridging this gap (Fig. 1) is highly nontrivial.

Fig. 10.1. The general scheme of a quantum measurement.

Besides the famous Bohr-Einstein discussion of the mid-1930s, which will be briefly reviewed in Sec. 3, the founding fathers of quantum mechanics did not pay much attention to these issues, apparently for the following reason. At that time it looked like the experimental instruments (at least the best of them :-) were doing exactly what the measurement postulate said. For example, the \(z\)-oriented Stern-Gerlach experiment (Fig. 4.1) turns two complex coefficients \(\alpha_{\uparrow}\) and \(\alpha_{\downarrow}\), describing the spin state of the incoming electrons, into a set of particle-counter clicks, with the rates proportional to, respectively, \(\left|\alpha_{\uparrow}\right|^{2}\) and \(\left|\alpha_{\downarrow}\right|^{2}\). The crude internal nature of these instruments makes more detailed questions unnatural. For example, each click of a Geiger counter involves an effective disappearance of the observed electron in the zillion-particle electric discharge avalanche it has triggered. A century ago, it looked much more important to extend the newly born quantum mechanics to more complex systems (such as atomic nuclei, etc.) than to think about the physics of such instruments.

However, since that time the experimental techniques, notably including high-vacuum and low-temperature systems, micro- and nano-fabrication, and low-noise electronics, have improved quite dramatically. In particular, we may now observe quantum-mechanical behavior of more and more macroscopic objects - such as the micromechanical oscillators mentioned in Sec. 2.9. Moreover, some "macroscopic quantum systems" (in particular, special systems of Josephson junctions, see below) have properties enabling their use as essential parts of measurement setups. Such developments are making the line separating the "micro" and "macro" worlds finer and finer, so that deeper inquiries into the physical nature of quantum measurements are no longer hopeless. In my personal scheme of things, \({ }^{4}\) these inquiries may be grouped as follows:

    (i) Does a quantum measurement involve any laws besides those of quantum mechanics? In particular, should it necessarily involve a human/intelligent observer? (The last question is not as laughable as it may look - see below.)

    (ii) What is the state of the measured system just after a single-shot measurement - meaning a measurement process limited to a time interval much shorter than the time scale of the measured system’s evolution? (This question is a necessary part of any discussion of repeated measurements and of their ultimate form - continuous monitoring of a certain observable.)

    (iii) If a measurement of an observable \(A\) has produced a certain outcome \(A_{j}\), what statements may be made about the state of the system just before the measurement? (This question is most closely related to various interpretations of quantum mechanics.)

Let me discuss these issues in the listed order. First of all, I am happy to report that there is a virtual consensus of physicists on some aspects of these issues. According to this consensus, any reasonable quantum measurement needs to result in a certain, distinguishable state of a macroscopic output component of the measurement instrument - see Fig. 1. (Traditionally, this component is called the pointer, though its role may be played by a printer or a plotter, an electronic circuit sending out the result as a number, etc.) This requirement implies that the measurement process should have the following features:

• provide a large "signal gain", i.e. some means of mapping the quantum state, with its \(\hbar\)-scale of action (i.e. of the energy-by-time product), onto a macroscopic position of the pointer with a much larger action scale, and
• introduce as little additional fluctuation ("noise") as permitted by the laws of physics, if we want to approach the fundamental limit of uncertainty given by Eq. (3).

Both these requirements are fulfilled in a well-designed Stern-Gerlach experiment - see Fig. \(4.1\) again. Indeed, the magnetic field gradient, splitting the electron beam, turns the minuscule (microscopic) energy difference (4.167) between the two spin-polarized states into a macroscopic difference between the final positions of the two output beams, where their detectors may be located. However, as was noted above, the internal physics of the particle detectors (say, Geiger counters) in this measurement is rather complex, and would not allow us to discuss some aspects of the measurement, in particular to answer the second of the inquiries listed above.

This is why, instead, let me describe the scheme of a substantially similar "single-shot" measurement of a two-level quantum system, which shares the simplicity, high gain, and low internal noise of the Stern-Gerlach apparatus, but has the advantage that in certain hardware implementations, \({ }^{5}\) the measurement process allows a thorough, quantitative theoretical description. Let us measure a particle trapped in a double-well potential (Fig. 2), where \(x\) is some continuous generalized coordinate - not necessarily a mechanical displacement. Let the particle be initially in a pure quantum state, with the energy close to the wells' bottoms. Then, as we know from the discussion of such systems in Secs. \(2.6\) and 5.1, the state may be described by a ket-vector similar to that of spin-\(1/2\): \[|\alpha\rangle=\alpha_{\rightarrow}|\rightarrow\rangle+\alpha_{\leftarrow}|\leftarrow\rangle,\] where the component states \(\rightarrow\) and \(\leftarrow\) are described by wavefunctions localized near the potential well bottoms at \(x \approx \pm x_{0}\) - see the blue lines in Fig. 2. Our goal is to measure in which well the particle resides at a certain time instant, say at \(t=0\). For that, let us rapidly change, at that moment, the potential profile of the system, so that at \(t>0\), near the origin, it may be well described by an inverted parabola: \[U(x) \approx-\frac{m \lambda^{2}}{2} x^{2}, \quad \text { for } t>0,|x|<<x_{\mathrm{f}} .\] It is straightforward to verify that the Heisenberg equations of motion in such an inverted potential describe an exponential growth of the operator \(\hat{x}\) in time (proportional to \(\exp \{\lambda t\}\)), and hence a similar, proportional growth of the expectation value \(\langle x\rangle\) and its r.m.s. uncertainty \(\delta x .{ }^{6}\) At this "inflation" stage, the coherence between the two component states \(\rightarrow\) and \(\leftarrow\) is still preserved, i.e. the time evolution of the system is, in principle, reversible.
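The exponential "inflation" just described may be checked with a quick numerical sketch (not in the original notes; all parameter values are arbitrary illustration choices). For the inverted parabola (5), the Heisenberg equations yield, for the expectation values, \(d\langle x\rangle/dt = \langle p\rangle/m\) and \(d\langle p\rangle/dt = -dU/dx = m\lambda^2\langle x\rangle\), whose solution with \(\langle p\rangle(0)=0\) is \(\langle x\rangle(t) = x_0\cosh\lambda t \approx (x_0/2)e^{\lambda t}\) at \(\lambda t >> 1\):

```python
import numpy as np

# Equations of motion for the expectation values in U(x) = -(m lam^2 / 2) x^2, Eq. (5):
#   d<x>/dt = <p>/m ,   d<p>/dt = -dU/dx = +m lam^2 <x>
# All numerical values below are arbitrary illustration choices.
m, lam = 1.0, 2.0
x0 = 1.0                          # initial <x>, playing the role of the well position
dt, steps = 1e-4, 20_000          # total time t = 2.0, i.e. lam*t = 4

x, p = x0, 0.0
for _ in range(steps):            # semi-implicit (symplectic) Euler integration
    p += m * lam**2 * x * dt
    x += (p / m) * dt

t = steps * dt
# Exact solution for these initial conditions: <x>(t) = x0*cosh(lam*t) ~ (x0/2)exp(lam*t)
print(x, x0 * np.cosh(lam * t))
```

The numerically integrated \(\langle x\rangle\) agrees with \(x_0\cosh\lambda t\) to well under a percent, confirming the exponential growth rate \(\lambda\).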

Fig. 10.2. The potential inversion, as viewed on the (a) "macroscopic" and (b) "microscopic" scales of the generalized coordinate \(x\).

Now let the system be weakly coupled, also at \(t>0\), to a dissipative (e.g., Ohmic) environment. As we know from Chapter 7, such coupling ensures the state's dephasing on some time scale \(T_{2}\). If \[x_{0}<<x_{0} \exp \left\{\lambda T_{2}\right\}<<x_{\mathrm{f}},\] then the process, after the potential inversion, consists of two stages, well separated in time:

• the already discussed "inflation" stage, preserving the coherence of the component states, and
    • the dephasing stage, at which the coherence of the component states \(\rightarrow\) and \(\leftarrow\) is gradually suppressed as described by Eq. (7.89), i.e. the density matrix of the system is gradually reduced to the diagonal form describing a classical mixture of two probability packets with the probabilities (3) equal to, respectively, \(W_{\rightarrow}=\left|\alpha_{\rightarrow}\right|^{2}\) and \(W_{\leftarrow}=\left|\alpha_{\leftarrow}\right|^{2} \equiv 1-\left|\alpha_{\rightarrow}\right|^{2}\).
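The gradual suppression of the coherence described in the second bullet may be illustrated by a toy sketch (not in the original notes; the amplitudes and \(T_2\) value are hypothetical): in the spirit of Eq. (7.89), the off-diagonal density-matrix elements decay as \(e^{-t/T_2}\), while the diagonal occupancies \(W_{\rightarrow}\) and \(W_{\leftarrow}\) stay fixed.

```python
import numpy as np

# Hypothetical (real, normalized) amplitudes alpha_-> and alpha_<-, and a dephasing time
a_r, a_l = 0.8, 0.6
T2 = 1.0

def rho(t):
    """2x2 density matrix whose off-diagonal (coherence) elements decay as exp(-t/T2),
    in the spirit of Eq. (7.89); the diagonal occupancies W do not change."""
    off = a_r * a_l * np.exp(-t / T2)
    return np.array([[a_r**2, off],
                     [off,    a_l**2]])

print(np.round(rho(0.0), 3))      # pure state: rho = |alpha><alpha|
print(np.round(rho(10 * T2), 3))  # nearly diagonal: classical mixture diag(W_->, W_<-)
```

After a few dephasing times the matrix is diagonal for all practical purposes, i.e. the state is the classical mixture with \(W_{\rightarrow}=0.64\) and \(W_{\leftarrow}=0.36\) in this example.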

Besides dephasing, the environment provides the motion with a certain kinematic friction, with the drag coefficient \(\eta\) (7.141), so that the system eventually settles to rest at one of the macroscopically separated minima \(x=\pm x_{\mathrm{f}}\) of the inverted potential (Fig. 2a), thus ensuring a high "signal gain" \(x_{\mathrm{f}} / x_{0}>>1\). As a result, the final probability density distribution \(w(x)\) along the \(x\)-axis has two narrow, well-separated peaks. But this is just the situation that was discussed in Sec. \(2.5\) - see, in particular, Fig. 2.17. Since that discussion is very important, let me repeat - or rather rephrase - it. The final state of the system is a classical mixture of two well-separated states, with the respective probabilities \(W_{\leftarrow}\) and \(W_{\rightarrow}\), whose sum equals 1. Now let us use some detector to test whether the system is in one of these states - say, the right one. (If \(x_{\mathrm{f}}\) is sufficiently large, the noise contribution of this detector to the measurement uncertainty is negligible, \({ }^{7}\) and its physics is unimportant.) If the system has been found at this location (again, the probability of this outcome is \(W_{\rightarrow}=\left|\alpha_{\rightarrow}\right|^{2}\)), the probability to find it at the counterpart (left) location at a subsequent detection turns to zero.

This probability "reduction" is a purely classical (or, if you like, mathematical) effect of the statistical ensemble's re-definition: \(W_{\leftarrow}\) equals zero not in the initial ensemble of all similar experiments (where it equals \(\left|\alpha_{\leftarrow}\right|^{2}\)), but only in the re-defined ensemble of experiments in which the system has been found at the right location. Of course, which ensemble to use, i.e. which probabilities to register/publish, is a purely accounting decision, which should be made by a human (or otherwise intelligent :-) observer. If we are only interested in an objective recording of the results of a pre-fixed sequence of experiments (i.e. of the members of a pre-defined, fixed statistical ensemble), there is no need to include such an observer in the discussion. In any case, this detection/registration process, very common in classical statistics, leaves no space for any mysterious "wave packet reduction" - understood as a hypothetical process that would not obey the regular laws of quantum mechanical evolution.

The state dephasing and ensemble re-definition at measurements are at the core of several paradoxes, of which the so-called quantum Zeno paradox is perhaps the most spectacular. \({ }^{8}\) Let us return to a two-level system with the unperturbed Hamiltonian given by Eq. (4.166), with the quantum oscillation period \(2 \pi / \Omega\) much longer than the single-shot measurement time, and the system initially (at \(t=0\)) definitely in one of the partial quantum states - for example, a certain potential well of the double-well potential. Then, as we know from Secs. \(2.6\) and 4.6, the probability to find the system in this initial state at time \(t>0\) is \[W(t)=\cos ^{2} \frac{\Omega t}{2} \equiv 1-\sin ^{2} \frac{\Omega t}{2} .\] If the time is small enough \((t=d t<<1 / \Omega)\), we may use the Taylor expansion to write \[W(d t) \approx 1-\frac{\Omega^{2} d t^{2}}{4} .\] Now, let us use some good measurement scheme (say, the potential inversion discussed above) to measure whether the system is still in this initial state. If it is (as Eq. (8) shows, the probability of such an outcome is nearly \(100 \%\)), then the system, after the measurement, is in the same state. Let us allow it to evolve again, with the same Hamiltonian. Then the evolution of \(W\) will follow the same law as in Eq. (7). Thus, when the system is measured again at time \(2 d t\), the probability to find it in the same state both times is \[W(2 d t) \approx W(d t)\left(1-\frac{\Omega^{2} d t^{2}}{4}\right)=\left(1-\frac{\Omega^{2} d t^{2}}{4}\right)^{2} .\] After repeating this cycle \(N\) times (with the total time \(t=N d t\) still much less than \(N^{1 / 2} / \Omega\)), the probability that the system is still in its initial state is \[W(N d t) \equiv W(t) \approx\left(1-\frac{\Omega^{2} d t^{2}}{4}\right)^{N}=\left(1-\frac{\Omega^{2} t^{2}}{4 N^{2}}\right)^{N} \approx 1-\frac{\Omega^{2} t^{2}}{4 N} .\] Comparing this result with Eq. (7), we see that the system's transfer to the opposite partial state has been slowed down rather dramatically, and in the limit \(N \rightarrow \infty\) (at fixed \(t\)), its evolution is virtually stopped by the measurement process. There is, of course, nothing mysterious here; the evolution slowdown is due to the quantum state dephasing at each measurement.
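The Zeno slowdown described by Eq. (10) is easy to verify numerically (this sketch is not part of the original notes; the values of \(\Omega\) and \(t\) are arbitrary): the probability to find the system in its initial state at all \(N\) equally spaced measurements is \(\left[\cos^2(\Omega t/2N)\right]^N\), which grows toward 1 as \(N\) increases.

```python
import numpy as np

Omega = 1.0          # quantum oscillation frequency (arbitrary units)
t = 1.0              # total evolution time (arbitrary units)

def survival(N):
    """Probability to find the system in its initial state at all N equally
    spaced measurements over the total time t - the product form of Eqs. (8)-(10)."""
    dt = t / N
    return float(np.cos(Omega * dt / 2) ** (2 * N))

W_free = float(np.cos(Omega * t / 2) ** 2)   # a single measurement at time t, Eq. (7)
for N in (1, 10, 100, 1000):
    print(N, survival(N))                    # grows toward 1: the Zeno slowdown
```

For large \(N\), the computed survival probability matches the asymptotic form \(1-\Omega^2 t^2/4N\) of Eq. (10).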

This may be the only acceptable occasion for me to mention, very briefly, one more famous - or rather infamous - paradox: that of the Schrödinger cat, so much overplayed in popular publications. \({ }^{9}\) For this thought experiment, there is no need to discuss the (rather complicated :-) physics of the cat. As soon as the charged particle, produced by the radioactive decay, reaches the Geiger counter, the initial coherent superposition of the two possible quantum states of the system ("the decay has happened"/"the decay has not happened") is rapidly dephased, i.e. reduced to their classical mixture, leading, correspondingly, to a classical mixture of the final macroscopic states "cat dead"/"cat alive". So, despite attempts by numerous authors without a proper physics background to represent this situation as a mystery whose discussion requires the involvement of professional philosophers, hopefully the reader knows enough about dephasing from Chapter 7 to ignore all this babble.


\({ }^{1}\) For an excellent review of these controversies, as presented in a few leading textbooks, I highly recommend J. Bell's paper in the collection by A. Miller (ed.), Sixty-Two Years of Uncertainty, Plenum, 1989.

    \({ }^{2}\) "Quantum measurements" is a very unfortunate and misleading term; it would be more sensible to speak about "measurements of observables in quantum mechanical systems". However, the former term is so common and compact that I will use it - albeit rather reluctantly.

    \({ }^{3}\) The measurement outcomes become definite only in the trivial case when the system is definitely in one of the eigenstates \(a_{j}\), say \(a_{0}\); then \(\alpha_{j}=\delta_{j, 0} \exp \{i \varphi\}\), and \(W_{j}=\delta_{j, 0}\).

    \({ }^{4}\) Again, this list and some other issues discussed in the balance of this section are still controversial.

\({ }^{5}\) The scheme may be implemented, for example, using a simple Josephson-junction circuit called the balanced comparator - see, e.g., T. Walls et al., IEEE Trans. on Appl. Supercond. 17, 136 (2007), and references therein. Experiments have demonstrated that this system may have a measurement variance dominated by the theoretically expected quantum-mechanical uncertainty, under practicable experimental conditions (at temperatures below \(\sim 1 \mathrm{~K}\)). A conceptual advantage of this system is that it is based on externally-shunted Josephson junctions, i.e. the devices whose quantum-mechanical model, including its part describing the coupling to the environment, is in quantitative agreement with experiment - see, e.g., D. Schwartz et al., Phys. Rev. Lett. \(\mathbf{55}\), 1547 (1985). Colloquially, the balanced comparator is a high-gain instrument with a "well-documented Hamiltonian", eliminating the need for speculations about the environmental effects. In particular, the dephasing process in it, and its time \(T_{2}\), are well described by Eqs. (7.89) and (7.142), with the coefficients \(\eta\) equal to the Ohmic conductances \(G\) of the shunts.

    \({ }^{6}\) Somewhat counter-intuitively, the latter growth improves the measurement’s fidelity. Indeed, it does not affect the intrinsic "signal-to-noise ratio" \(\delta x /\langle x\rangle\), while making the intrinsic (say, quantum-mechanical) uncertainty much larger than the possible noise contribution by the later measurement stage(s).

\({ }^{7}\) In the balanced-comparator implementation mentioned above, the final state detection may be readily performed using a "SQUID" magnetometer based on the same Josephson junction technology - see, e.g., EM Sec. 6.5. In this case, the distance between the potential minima \(\pm x_{\mathrm{f}}\) is close to one superconducting flux quantum (3.38), while the additional uncertainty induced by the SQUID may be as low as a few millionths of that amount.

\({ }^{8}\) This name, coined by E. Sudarshan and B. Misra in 1977 (though the paradox had been discussed in detail by A. Turing in 1954), is due to its superficial similarity to the classical paradoxes of the ancient Greek philosopher Zeno of Elea. By the way, just for fun, let us have a look at what happens when Mother Nature is discussed by people who do not understand math and physics. The most famous of the classical Zeno paradoxes is the case of Achilles and the Tortoise: the fast runner Achilles can apparently never overtake the slower Tortoise, because (in Aristotle's words) "the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead". For a physicist, the paradox has a trivial, obvious resolution, but here is what a philosopher wrote about it - not in some year BC, but in AD 2010: "Given the history of 'final resolutions', from Aristotle onwards, it's probably foolhardy to think we've reached the end." For me, this is a sad symbol of modern philosophy.

\({ }^{9}\) I fully agree with S. Hawking, who has been quoted as saying, "When I hear about the Schrödinger cat, I reach for my gun." The only good aspect of this popularity is that the formulation of this paradox should be so well known to the reader that I do not need to waste time/space repeating it.


    This page titled 10.1: Quantum measurements is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Konstantin K. Likharev via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.