# 7.4: Fluctuation-dissipation Theorem


Similar questions may be asked about a more general situation, when the Hamiltonian \(\hat{H}_{s}\) of the system of interest \((s)\), in the composite Hamiltonian (68), is not specified at all, but the interaction between that system and its environment still has a bilinear form similar to Eqs. (70) and (6.130): \[\hat{H}_{\text {int }}=-\hat{F}\{\lambda\} \hat{x},\] where \(x\) is some observable of our system \(s\) - say, its generalized coordinate or generalized momentum. It may look incredible that in this very general situation one still can make a very simple and powerful statement about the statistical properties of the generalized force \(F\), under only two (interrelated) conditions - which are satisfied in a huge number of cases of interest:

(i) the coupling of system \(s\) of interest to its environment \(e\) is weak - in the sense that the perturbation theory (see Chapter 6) is applicable, and

(ii) the environment may be considered as staying in thermodynamic equilibrium, with a certain temperature \(T\), regardless of the process in the system of interest. \({ }^{31}\)

This famous statement is called the fluctuation-dissipation theorem (FDT). \({ }^{32}\) Due to the importance of this fundamental result, let me derive it. \({ }^{33}\) Since by writing Eq. (68) we treat the whole system \((s+e)\) as a Hamiltonian one, we may use the Heisenberg equation (4.199) to write \[i \hbar \dot{\hat{F}}=[\hat{F}, \hat{H}]=\left[\hat{F}, \hat{H}_{e}\right],\] because, as was discussed in the last section, operator \(\hat{F}\{\lambda\}\) commutes with both \(\hat{H}_{s}\) and \(\hat{x}\). Generally, very little may be done with this equation, because the time evolution of the environment’s Hamiltonian depends, in turn, on that of the force. This is where the perturbation theory becomes indispensable. Let us decompose the force operator into the following sum: \[\hat{F}\{\lambda\}=\langle\hat{F}\rangle+\hat{\widetilde{F}}(t), \text { with }\langle\hat{\widetilde{F}}(t)\rangle=0,\] where (here and on, until further notice) the sign \(\langle\ldots\rangle\) means the statistical averaging over the environment alone, i.e. over an ensemble with absolutely similar evolutions of the system \(s\), but random states of its environment. \({ }^{34}\) From the point of view of the system \(s\), the first term of the sum (still an operator!) describes the average response of the environment to the system dynamics (possibly, including such irreversible effects as friction), and has to be calculated with a proper account of their interaction - as we will do later in this section. On the other hand, the last term in Eq. (92) represents random fluctuations of the environment, which exist even in the absence of the system \(s\). Hence, in the first non-zero approximation in the interaction strength, the fluctuation part may be calculated ignoring the interaction, i.e. 
treating the environment as being in thermodynamic equilibrium: \[i \hbar \dot{\hat{\widetilde{F}}}=\left[\hat{\widetilde{F}},\left.\hat{H}_{e}\right|_{\mathrm{eq}}\right] .\] Since in this approximation the environment's Hamiltonian does not have an explicit dependence on time, the solution of this equation may be written by combining Eqs. (4.190) and (4.175): \[\hat{F}(t)=\exp \left\{+\left.\frac{i}{\hbar} \hat{H}_{e}\right|_{\mathrm{eq}} t\right\} \hat{F}(0) \exp \left\{-\left.\frac{i}{\hbar} \hat{H}_{e}\right|_{\mathrm{eq}} t\right\} .\] Let us use this relation to calculate the correlation function of the fluctuations \(\widetilde{F}(t)\), defined similarly to Eq. (80), but taking care of the order of the time arguments (very soon we will see why): \[\left\langle\hat{\widetilde{F}}(t) \hat{\widetilde{F}}\left(t^{\prime}\right)\right\rangle=\left\langle\exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t\right\} \exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\}\right\rangle .\] (Here, for brevity of notation, the thermal equilibrium of the environment is just implied.) We may calculate this expectation value in any basis, and the best choice for it is evident: in the environment's stationary-state basis, the density operator of the environment, its Hamiltonian, and hence the exponents in Eq. (95) are all represented by diagonal matrices. Using Eq.
(5), the correlation function becomes \[\begin{aligned} \left\langle\hat{\widetilde{F}}(t) \hat{\widetilde{F}}\left(t^{\prime}\right)\right\rangle &=\operatorname{Tr}\left[\hat{w} \exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t\right\} \exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\}\right] \\ & \equiv \sum_{n}\left[\hat{w} \exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t\right\} \exp \left\{+\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\} \hat{F}(0) \exp \left\{-\frac{i}{\hbar} \hat{H}_{e} t^{\prime}\right\}\right]_{n n} \\ &=\sum_{n, n^{\prime}} W_{n} \exp \left\{+\frac{i}{\hbar} E_{n} t\right\} F_{n n^{\prime}} \exp \left\{-\frac{i}{\hbar} E_{n^{\prime}} t\right\} \exp \left\{+\frac{i}{\hbar} E_{n^{\prime}} t^{\prime}\right\} F_{n^{\prime} n} \exp \left\{-\frac{i}{\hbar} E_{n} t^{\prime}\right\} \\ & \equiv \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \exp \left\{+\frac{i}{\hbar}\left(E_{n}-E_{n^{\prime}}\right)\left(t-t^{\prime}\right)\right\} . \end{aligned}\] Here \(W_{n}\) are the Gibbs distribution probabilities given by Eq. (24), with the environment's temperature \(T\), and \(F_{n n^{\prime}} \equiv F_{n n^{\prime}}(0)\) are the Schrödinger-picture matrix elements of the interaction force operator.
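Eq. (96) is easy to verify numerically on a small mock environment. The sketch below (an illustration of ours, not part of the original text; Python with numpy assumed) builds a random Hermitian environment Hamiltonian and force matrix, and compares the direct Heisenberg-picture trace of Eq. (96) with its final double-sum spectral representation:

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, T, N = 1.0, 0.5, 6            # natural units (k_B = 1), mock dimension

def rand_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (a + a.conj().T)/2

He = rand_hermitian(N)              # mock environment Hamiltonian
E, U = np.linalg.eigh(He)           # its stationary-state basis
W = np.exp(-E/T); W /= W.sum()      # Gibbs probabilities, Eq. (24)

Fe = U.conj().T @ rand_hermitian(N) @ U     # force matrix in that basis
Fe -= (W*np.diag(Fe).real).sum()*np.eye(N)  # make <F> = 0, cf. Eq. (92)
F = U @ Fe @ U.conj().T
w = U @ np.diag(W) @ U.conj().T             # equilibrium density matrix

Uev = lambda s: U @ np.diag(np.exp(-1j*E*s/hbar)) @ U.conj().T
heis = lambda s: Uev(-s) @ F @ Uev(s)       # Heisenberg picture, Eq. (94)

t, tp = 0.8, 0.3
corr_trace = np.trace(w @ heis(t) @ heis(tp))        # first line of Eq. (96)
Et = E[:, None] - E[None, :]                         # E-tilde = E_n - E_n'
corr_sum = (W[:, None]*np.abs(Fe)**2
            * np.exp(1j*Et*(t - tp)/hbar)).sum()     # last line of Eq. (96)
assert abs(corr_trace - corr_sum) < 1e-8
```

The agreement is exact (to rounding error), because the step from the trace to the double sum uses no approximation - only the diagonality of \(\hat{w}\) and \(\hat{H}_e\) in the stationary-state basis.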

We see that though the correlator (96) is a function of the difference \(\tau \equiv t-t^{\prime}\) only (as it should be for fluctuations in a macroscopically stationary system), it may depend on the order of its arguments. Let us therefore mark this particular correlation function with the upper index "+", \[K_{F}^{+}(\tau) \equiv\left\langle\hat{\widetilde{F}}(t) \hat{\widetilde{F}}\left(t^{\prime}\right)\right\rangle=\sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \exp \left\{+\frac{i \widetilde{E} \tau}{\hbar}\right\}, \quad \text { where } \widetilde{E} \equiv E_{n}-E_{n^{\prime}},\] and its counterpart, with the times \(t\) and \(t^{\prime}\) swapped, with the upper index "-": \[K_{F}^{-}(\tau) \equiv K_{F}^{+}(-\tau)=\left\langle\hat{\widetilde{F}}\left(t^{\prime}\right) \hat{\widetilde{F}}(t)\right\rangle=\sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \exp \left\{-\frac{i \widetilde{E} \tau}{\hbar}\right\} .\] So, in contrast with classical processes, in quantum mechanics the correlation function of the fluctuations \(\widetilde{F}\) is not necessarily time-symmetric: \[K_{F}^{+}(\tau)-K_{F}^{-}(\tau) \equiv K_{F}^{+}(\tau)-K_{F}^{+}(-\tau)=\left\langle\hat{\widetilde{F}}(t) \hat{\widetilde{F}}\left(t^{\prime}\right)-\hat{\widetilde{F}}\left(t^{\prime}\right) \hat{\widetilde{F}}(t)\right\rangle=2 i \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \sin \frac{\widetilde{E} \tau}{\hbar} \neq 0,\] so that \(\hat{F}(t)\) gives one more example of a Heisenberg-picture operator whose "values", taken at different moments of time, generally do not commute - see Footnote 49 in Chapter 4. (A good sanity check here is that at \(\tau=0\), i.e. at \(t=t^{\prime}\), the difference (99) between \(K_{F}^{+}\) and \(K_{F}^{-}\) vanishes.) Now let us return to the force operator's decomposition (92), and calculate its first (average) component. To do that, let us write the formal solution of Eq.
(91) as follows: \[\hat{F}(t)=\frac{1}{i \hbar} \int_{-\infty}^{t}\left[\hat{F}\left(t^{\prime}\right), \hat{H}_{e}\left(t^{\prime}\right)\right] d t^{\prime} .\] On the right-hand side of this relation, we still cannot treat the Hamiltonian of the environment as an unperturbed (equilibrium) one, even if the effect of our system \((s)\) on the environment is very weak, because this would give zero statistical average of the force \(F(t)\). Hence, we should make one more step of our perturbative treatment, taking into account the effect of the force on the environment. To do this, let us use Eqs. (68) and (90) to write the (so far, exact) Heisenberg equation of motion for the environment's Hamiltonian, \[i \hbar \dot{\hat{H}}_{e}=\left[\hat{H}_{e}, \hat{H}\right]=-\hat{x}\left[\hat{H}_{e}, \hat{F}\right],\] and its formal solution, similar to Eq. (100), but for time \(t^{\prime}\) rather than \(t\): \[\hat{H}_{e}\left(t^{\prime}\right)=-\frac{1}{i \hbar} \int_{-\infty}^{t^{\prime}} \hat{x}\left(t^{\prime \prime}\right)\left[\hat{H}_{e}\left(t^{\prime \prime}\right), \hat{F}\left(t^{\prime \prime}\right)\right] d t^{\prime \prime} .\] Plugging this equality into the right-hand side of Eq. (100), and averaging the result (again, over the environment only!), we get \[\langle\hat{F}(t)\rangle=\frac{1}{\hbar^{2}} \int_{-\infty}^{t} d t^{\prime} \int_{-\infty}^{t^{\prime}} d t^{\prime \prime} \hat{x}\left(t^{\prime \prime}\right)\left\langle\left[\hat{F}\left(t^{\prime}\right),\left[\hat{H}_{e}\left(t^{\prime \prime}\right), \hat{F}\left(t^{\prime \prime}\right)\right]\right]\right\rangle .\] This is still an exact result, but now it is ready for an approximate treatment, implemented by averaging in its right-hand side over the unperturbed (thermal-equilibrium) state of the environment. This may be done absolutely similarly to that in Eq. (96), at the last step using Eq.
(94): \[\begin{aligned} \left\langle\left[\hat{F}\left(t^{\prime}\right),\left[\hat{H}_{e}\left(t^{\prime \prime}\right), \hat{F}\left(t^{\prime \prime}\right)\right]\right]\right\rangle &=\operatorname{Tr}\left\{\hat{w}\left[\hat{F}\left(t^{\prime}\right),\left[\hat{H}_{e}, \hat{F}\left(t^{\prime \prime}\right)\right]\right]\right\} \\ & \equiv \operatorname{Tr}\left\{\hat{w}\left[\hat{F}\left(t^{\prime}\right) \hat{H}_{e} \hat{F}\left(t^{\prime \prime}\right)-\hat{F}\left(t^{\prime}\right) \hat{F}\left(t^{\prime \prime}\right) \hat{H}_{e}-\hat{H}_{e} \hat{F}\left(t^{\prime \prime}\right) \hat{F}\left(t^{\prime}\right)+\hat{F}\left(t^{\prime \prime}\right) \hat{H}_{e} \hat{F}\left(t^{\prime}\right)\right]\right\} \\ &=\sum_{n, n^{\prime}} W_{n}\left[F_{n n^{\prime}}\left(t^{\prime}\right) E_{n^{\prime}} F_{n^{\prime} n}\left(t^{\prime \prime}\right)-F_{n n^{\prime}}\left(t^{\prime}\right) F_{n^{\prime} n}\left(t^{\prime \prime}\right) E_{n}-E_{n} F_{n n^{\prime}}\left(t^{\prime \prime}\right) F_{n^{\prime} n}\left(t^{\prime}\right)+F_{n n^{\prime}}\left(t^{\prime \prime}\right) E_{n^{\prime}} F_{n^{\prime} n}\left(t^{\prime}\right)\right] \\ & \equiv-\sum_{n, n^{\prime}} W_{n} \widetilde{E}\left|F_{n n^{\prime}}\right|^{2}\left[\exp \left\{\frac{i \widetilde{E}\left(t^{\prime}-t^{\prime \prime}\right)}{\hbar}\right\}+\text { c.c. }\right] . \end{aligned}\] Now, if we try to integrate each term of this sum, as Eq. (103) seems to require, we will see that the lower-limit substitution (at \(t^{\prime}, t^{\prime \prime} \rightarrow-\infty\) ) is uncertain because the exponents oscillate without decay. This mathematical difficulty may be overcome by the following physical reasoning.
As illustrated by the example considered in the previous section, coupling to a disordered environment makes the "memory horizon" of the system of our interest \((s)\) finite: its current state does not depend on its history beyond a certain time scale. \({ }^{35}\) As a result, the function under the integrals of Eq. (103), i.e. the sum (104), should self-average at a certain finite time. A simplistic technique for expressing this fact mathematically is just dropping the lower-limit substitution; this would give the correct result for Eq. (103). However, a better (mathematically more acceptable) trick is to first multiply the functions under the integrals by, respectively, \(\exp \left\{-\varepsilon\left(t-t^{\prime}\right)\right\}\) and \(\exp \left\{-\varepsilon\left(t^{\prime}-t^{\prime \prime}\right)\right\}\), where \(\varepsilon\) is a very small positive constant, then carry out the integration, and after that follow the limit \(\varepsilon \rightarrow 0\). The physical justification of this procedure may be provided by saying that the system's behavior should not be affected if its interaction with the environment was not kept constant but rather turned on gradually - say, exponentially with an infinitesimal rate \(\varepsilon\). With this modification, Eq. (103) becomes \[\langle\hat{F}(t)\rangle=-\frac{1}{\hbar^{2}} \sum_{n, n^{\prime}} W_{n} \widetilde{E}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0} \int_{-\infty}^{t} d t^{\prime} \int_{-\infty}^{t^{\prime}} d t^{\prime \prime} \hat{x}\left(t^{\prime \prime}\right)\left[\exp \left\{\frac{i \widetilde{E}\left(t^{\prime}-t^{\prime \prime}\right)}{\hbar}+\varepsilon\left(t^{\prime \prime}-t\right)\right\}+\text { c.c. }\right] \text {. }\] This double integration is over the area shaded in Fig.
6, which makes it obvious that the order of integration may be changed to the opposite one as \[\int_{-\infty}^{t} d t^{\prime} \int_{-\infty}^{t^{\prime}} d t^{\prime \prime} \ldots=\int_{-\infty}^{t} d t^{\prime \prime} \int_{t^{\prime \prime}}^{t} d t^{\prime} \ldots=\int_{-\infty}^{t} d t^{\prime \prime} \int_{t^{\prime \prime}-t}^{0} d\left(t^{\prime}-t\right) \ldots \equiv \int_{-\infty}^{t} d t^{\prime \prime} \int_{0}^{\tau} d \tau^{\prime} \ldots,\] where \(\tau^{\prime} \equiv t-t^{\prime}\), and \(\tau \equiv t-t^{\prime \prime}\).
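The interchange of the integration order in Eq. (106) can be sanity-checked numerically: on a uniform grid, the two orderings sum exactly the same triangular set of samples, so they must agree to rounding error. A small sketch (not part of the original text; Python with numpy assumed, with an integrand mimicking that of Eq. (105), the \(\varepsilon\) factor providing convergence at the lower limit):

```python
import numpy as np

t, alpha, eps, L, M = 0.0, 3.0, 0.2, 40.0, 1200
s = np.linspace(t - L, t, M)      # common grid for both t' and t''
dt = s[1] - s[0]
# integrand of the type exp{i*alpha*(t' - t'')} * exp{eps*(t'' - t)}
f = lambda t1, t2: np.exp(1j*alpha*(t1 - t2) + eps*(t2 - t))

# ordering of Eq. (103): t' outer, t'' running over (-inf, t']
I1 = sum(f(a, s[s <= a]).sum()*dt*dt for a in s)
# ordering of Eq. (106): t'' outer, t' running over [t'', t]
I2 = sum(f(s[s >= b], b).sum()*dt*dt for b in s)
assert abs(I1 - I2) < 1e-9
```

Here `L` approximates the lower limit at minus infinity; the convergence factor makes the truncation error exponentially small.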

As a result, Eq. (105) may be rewritten as a single integral, \[\langle\hat{F}(t)\rangle=\int_{-\infty}^{t} G\left(t-t^{\prime \prime}\right) \hat{x}\left(t^{\prime \prime}\right) d t^{\prime \prime} \equiv \int_{0}^{\infty} G(\tau) \hat{x}(t-\tau) d \tau,\] whose kernel, \[\begin{aligned} G(\tau>0) & \equiv-\frac{1}{\hbar^{2}} \sum_{n, n^{\prime}} W_{n} \widetilde{E}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0} \int_{0}^{\tau}\left[\exp \left\{\frac{i \widetilde{E}\left(\tau-\tau^{\prime}\right)}{\hbar}-\varepsilon \tau\right\}+\text { c.c. }\right] d \tau^{\prime} \\ &=-\lim _{\varepsilon \rightarrow 0} \frac{2}{\hbar} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \sin \frac{\widetilde{E} \tau}{\hbar} e^{-\varepsilon \tau} \equiv-\frac{2}{\hbar} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \sin \frac{\widetilde{E} \tau}{\hbar}, \end{aligned}\] does not depend on the particular law of evolution of the system \((s)\) under study, i.e. provides a general characterization of its coupling to the environment.
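The inner \(\tau^{\prime}\)-integral in Eq. (108) can be checked numerically for sample values of \(\widetilde{E}\) and \(\tau\) (a quick sketch, not from the original text; Python with numpy assumed, the parameter values arbitrary):

```python
import numpy as np

hbar, Et, tau = 1.0, 2.5, 1.3             # sample E-tilde and tau (natural units)
tp = np.linspace(0.0, tau, 200001)        # integration variable tau'
integrand = 2*np.cos(Et*(tau - tp)/hbar)  # exp{i*Et*(tau - tau')/hbar} + c.c.
dt = tp[1] - tp[0]
num = ((integrand[:-1] + integrand[1:])/2).sum()*dt   # trapezoid rule
exact = (2*hbar/Et)*np.sin(Et*tau/hbar)   # the sin(E-tilde*tau/hbar) kernel form
assert abs(num - exact) < 1e-8
```

This confirms the \(\sin(\widetilde{E}\tau/\hbar)\) dependence of the kernel \(G(\tau)\) on which the rest of the derivation relies.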

In Eq. (107) we may readily recognize the most general form of the linear response of a system (in our case, the environment), taking into account the causality principle, where \(G(\tau)\) is the response function (also called the "temporal Green's function") of the environment. Now comparing Eq. (108) with Eq. (99), we get a wonderfully simple universal relation, \[\left\langle\left[\hat{\widetilde{F}}(\tau), \hat{\widetilde{F}}(0)\right]\right\rangle=-i \hbar G(\tau),\] that emphasizes once again the quantum nature of the correlation function's time asymmetry. (This relation, called the Green-Kubo (or just "Kubo") formula after the works by Melville Green (1954) and Ryogo Kubo (1957), does not come up in the easier derivations of the FDT, mentioned in the beginning of this section.)
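The operator identity behind Eqs. (99) and (109) is easy to test on a small random "environment" (a toy model of ours, not part of the original text; Python with numpy assumed): the commutator average computed from explicit Heisenberg-picture matrices must reproduce the spectral sum \(2i\sum W_n|F_{nn'}|^2\sin(\widetilde{E}\tau/\hbar)\). Note that a constant shift \(\langle F\rangle\) drops out of the commutator, so the bare force operator may be used directly:

```python
import numpy as np

rng = np.random.default_rng(2)
hbar, T, N = 1.0, 0.5, 6                 # natural units (k_B = 1), mock dimension

def rand_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (a + a.conj().T)/2

He = rand_hermitian(N)                   # mock environment Hamiltonian
E, U = np.linalg.eigh(He)
W = np.exp(-E/T); W /= W.sum()           # Gibbs probabilities, Eq. (24)
F = rand_hermitian(N)                    # coupling force operator
w = U @ np.diag(W) @ U.conj().T          # equilibrium density matrix

Uev = lambda s: U @ np.diag(np.exp(-1j*E*s/hbar)) @ U.conj().T
heis = lambda s: Uev(-s) @ F @ Uev(s)    # Heisenberg F(s), Eq. (94)

tau = 0.7
comm = np.trace(w @ (heis(tau) @ heis(0.0) - heis(0.0) @ heis(tau)))
Fe = U.conj().T @ F @ U                  # matrix elements F_nn'
Et = E[:, None] - E[None, :]             # E-tilde = E_n - E_n'
spectral = 2j*(W[:, None]*np.abs(Fe)**2*np.sin(Et*tau/hbar)).sum()
assert abs(comm - spectral) < 1e-10      # Eq. (99)
```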

However, for us the relation between the function \(G(\tau)\) and the force's anti-commutator, \[\left\langle\left\{\hat{\widetilde{F}}(t+\tau), \hat{\widetilde{F}}(t)\right\}\right\rangle \equiv\left\langle\hat{\widetilde{F}}(t+\tau) \hat{\widetilde{F}}(t)+\hat{\widetilde{F}}(t) \hat{\widetilde{F}}(t+\tau)\right\rangle \equiv K_{F}^{+}(\tau)+K_{F}^{-}(\tau),\] is much more important, because of the following reason. Eqs. (97)-(98) show that the so-called symmetrized correlation function, \[\begin{aligned} K_{F}(\tau) & \equiv \frac{K_{F}^{+}(\tau)+K_{F}^{-}(\tau)}{2}=\frac{1}{2}\left\langle\left\{\hat{\widetilde{F}}(\tau), \hat{\widetilde{F}}(0)\right\}\right\rangle=\lim _{\varepsilon \rightarrow 0} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \cos \frac{\widetilde{E} \tau}{\hbar} e^{-2 \varepsilon|\tau|} \\ & \equiv \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \cos \frac{\widetilde{E} \tau}{\hbar}, \end{aligned}\] which is an even function of the time difference \(\tau\), looks very similar to the response function (108), "only" with another trigonometric function under the sum, and a constant front factor. \({ }^{36}\) This similarity may be used to obtain a direct algebraic relation between the Fourier images of these two functions of \(\tau\).
Indeed, the function (111) may be represented as the Fourier integral \({ }^{37}\) \[K_{F}(\tau)=\int_{-\infty}^{+\infty} S_{F}(\omega) e^{-i \omega \tau} d \omega=2 \int_{0}^{+\infty} S_{F}(\omega) \cos \omega \tau d \omega,\] with the reciprocal transform \[S_{F}(\omega)=\frac{1}{2 \pi} \int_{-\infty}^{+\infty} K_{F}(\tau) e^{i \omega \tau} d \tau=\frac{1}{\pi} \int_{0}^{+\infty} K_{F}(\tau) \cos \omega \tau d \tau,\] of the symmetrized spectral density of the variable \(F\), defined as \[S_{F}(\omega) \delta\left(\omega-\omega^{\prime}\right) \equiv \frac{1}{2}\left\langle\hat{F}_{\omega} \hat{F}_{-\omega^{\prime}}+\hat{F}_{-\omega^{\prime}} \hat{F}_{\omega}\right\rangle \equiv \frac{1}{2}\left\langle\left\{\hat{F}_{\omega}, \hat{F}_{-\omega^{\prime}}\right\}\right\rangle,\] where the function \(\hat{F}_{\omega}\) (also a Heisenberg operator rather than a \(c\)-number!) is defined as \[\hat{F}_{\omega} \equiv \frac{1}{2 \pi} \int_{-\infty}^{+\infty} \hat{F}(t) e^{i \omega t} d t, \quad \text { so that } \hat{F}(t)=\int_{-\infty}^{+\infty} \hat{F}_{\omega} e^{-i \omega t} d \omega .\] The physical meaning of the function \(S_{F}(\omega)\) becomes clear if we write Eq. (112) for the particular case \(\tau=0\): \[K_{F}(0) \equiv\left\langle\hat{\widetilde{F}}^{2}\right\rangle=\int_{-\infty}^{+\infty} S_{F}(\omega) d \omega=2 \int_{0}^{+\infty} S_{F}(\omega) d \omega .\] This formula implies that if we pass the function \(F(t)\) through a linear filter cutting from its frequency spectrum a narrow band \(d \omega\) of physical (positive) frequencies, then the variance \(\left\langle F_{\mathrm{f}}^{2}\right\rangle\) of the filtered signal \(F_{\mathrm{f}}(t)\) would be equal to \(2 S_{F}(\omega) d \omega\) - hence the name "spectral density". \({ }^{38}\)
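The transform pair (112)-(113) and the variance rule (116) can be illustrated with a hypothetical correlation function \(K_F(\tau)=\cos(a\tau)\,e^{-\gamma|\tau|}\), whose spectral density is a pair of Lorentzians (a toy example of ours, not from the original text; Python with numpy assumed):

```python
import numpy as np

a, gam = 2.0, 0.3                   # toy correlation: K(tau) = cos(a*tau) e^{-gam|tau|}
K = lambda tau: np.cos(a*tau)*np.exp(-gam*np.abs(tau))

# Eq. (113): S_F(w) = (1/pi) * integral_0^inf K(tau) cos(w*tau) dtau
tau = np.linspace(0.0, 80.0, 400001)
dtau = tau[1] - tau[0]
def S_num(w):
    f = K(tau)*np.cos(w*tau)
    return ((f[:-1] + f[1:])/2).sum()*dtau/np.pi

# the same S_F analytically: two Lorentzians centered at w = +/- a
S_exact = lambda w: (gam/(2*np.pi))*(1/((w - a)**2 + gam**2) + 1/((w + a)**2 + gam**2))
assert abs(S_num(1.7) - S_exact(1.7)) < 1e-5

# Eq. (116): the total variance K(0) equals the integral of S_F over all omega
w = np.linspace(-60.0, 60.0, 200001)
Sw = S_exact(w)
var = ((Sw[:-1] + Sw[1:])/2).sum()*(w[1] - w[0])
assert abs(var - K(0)) < 1e-2       # small residual from the truncated tails
```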

Let us use Eqs. (111) and (113) to calculate the spectral density of fluctuations \(\widetilde{F}(t)\) in our model, using the same \(\varepsilon\)-trick as in the derivation of Eq. (108), to quench the upper-limit substitution: \[\begin{aligned} S_{F}(\omega) &=\sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \frac{1}{2 \pi} \lim _{\varepsilon \rightarrow 0} \int_{-\infty}^{+\infty} \cos \frac{\widetilde{E} \tau}{\hbar} e^{-\varepsilon|\tau|} e^{i \omega \tau} d \tau \\ & \equiv \frac{1}{2 \pi} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0} \int_{0}^{+\infty}\left[\exp \left\{\frac{i \widetilde{E} \tau}{\hbar}\right\}+\text { c.c. }\right] e^{-\varepsilon \tau} e^{i \omega \tau} d \tau \\ &=-\frac{1}{2 \pi} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0}\left[\frac{1}{i(\widetilde{E} / \hbar+\omega)-\varepsilon}+\frac{1}{i(-\widetilde{E} / \hbar+\omega)-\varepsilon}\right] . \end{aligned}\] Now it is a convenient time to recall that each of the two summations here is over the eigenenergies of the environment, whose spectrum is virtually continuous because of its large size, so that we may transform each sum into an integral - just as was done in Sec. 6.6: \[\sum_{n} \ldots \rightarrow \int \ldots d n=\int \ldots \rho\left(E_{n}\right) d E_{n},\] where \(\rho(E) \equiv d n / d E\) is the environment's density of states at a given energy.
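The \(\varepsilon \rightarrow 0\) limit used here and below relies on the fact that \(\int \hbar\varepsilon\, d\widetilde{E}\,/[(\widetilde{E}\mp\hbar\omega)^{2}+(\hbar\varepsilon)^{2}] = \pi\) for any small \(\varepsilon\), i.e. each Lorentzian factor acts as \(\pi\delta(\widetilde{E}\mp\hbar\omega)\). A direct numeric integration confirms this (a sketch of ours, assuming numpy, with the grid chosen much finer than \(\hbar\varepsilon\)):

```python
import numpy as np

hbar, w = 1.0, 1.5                       # natural units; sample frequency
E = np.linspace(-200.0, 200.0, 2000001)  # integration variable E-tilde
dE = E[1] - E[0]
for eps in (0.5, 0.05):                  # the integral is eps-independent
    lor = hbar*eps/((E - hbar*w)**2 + (hbar*eps)**2)
    val = lor.sum()*dE                   # Riemann sum of the Lorentzian
    assert abs(val - np.pi) < 0.02       # -> pi, up to truncated-tail error
```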
This transformation yields \[S_{F}(\omega)=-\frac{1}{2 \pi} \lim _{\varepsilon \rightarrow 0} \int d E_{n} W\left(E_{n}\right) \rho\left(E_{n}\right) \int d E_{n^{\prime}} \rho\left(E_{n^{\prime}}\right)\left|F_{n n^{\prime}}\right|^{2}\left[\frac{1}{i(\widetilde{E} / \hbar-\omega)-\varepsilon}+\frac{1}{i(-\widetilde{E} / \hbar-\omega)-\varepsilon}\right] .\] Since the expression inside the square brackets depends only on a specific linear combination of the two energies, namely on \(\widetilde{E} \equiv E_{n}-E_{n^{\prime}}\), it is convenient to introduce also another, linearly-independent combination of the energies, for example, the average energy \(\bar{E} \equiv\left(E_{n}+E_{n^{\prime}}\right) / 2\), so that the state energies may be represented as \[E_{n}=\bar{E}+\frac{\widetilde{E}}{2}, \quad E_{n^{\prime}}=\bar{E}-\frac{\widetilde{E}}{2} .\] With this notation, Eq. (119) becomes \[\begin{gathered} S_{F}(\omega)=-\frac{\hbar}{2 \pi} \lim _{\varepsilon \rightarrow 0} \int d \bar{E}\left[\int d \widetilde{E} W\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}-\frac{\widetilde{E}}{2}\right)\left|F_{n n^{\prime}}\right|^{2} \frac{1}{i(\widetilde{E}-\hbar \omega)-\hbar \varepsilon}\right. \\ \left.+\int d \widetilde{E} W\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}-\frac{\widetilde{E}}{2}\right)\left|F_{n n^{\prime}}\right|^{2} \frac{1}{i(-\widetilde{E}-\hbar \omega)-\hbar \varepsilon}\right] . \end{gathered}\] Due to the smallness of the parameter \(\hbar \varepsilon\) (which should be much smaller than all genuine energies of the problem, including \(k_{\mathrm{B}} T\), \(\hbar \omega\), \(E_{n}\), and \(E_{n^{\prime}}\)), each of the internal integrals in Eq.
(121) is dominated by an infinitesimal vicinity of one point, \(\widetilde{E}_{\pm}=\pm \hbar \omega\). In these vicinities, the state densities, the matrix elements, and the Gibbs probabilities do not change considerably, and may be taken out of the integral, which may be then worked out explicitly: \({ }^{39}\) \[\begin{aligned} S_{F}(\omega) &=-\frac{\hbar}{2 \pi} \lim _{\varepsilon \rightarrow 0} \int d \bar{E} \rho_{+} \rho_{-}\left[W_{+}\left|F_{+}\right|^{2} \int_{-\infty}^{+\infty} \frac{d \widetilde{E}}{i(\widetilde{E}-\hbar \omega)-\hbar \varepsilon}+W_{-}\left|F_{-}\right|^{2} \int_{-\infty}^{+\infty} \frac{d \widetilde{E}}{i(-\widetilde{E}-\hbar \omega)-\hbar \varepsilon}\right] \\ & \equiv-\frac{\hbar}{2 \pi} \lim _{\varepsilon \rightarrow 0} \int d \bar{E} \rho_{+} \rho_{-}\left[W_{+}\left|F_{+}\right|^{2} \int_{-\infty}^{+\infty} \frac{-i(\widetilde{E}-\hbar \omega)-\hbar \varepsilon}{(\widetilde{E}-\hbar \omega)^{2}+(\hbar \varepsilon)^{2}} d \widetilde{E}+W_{-}\left|F_{-}\right|^{2} \int_{-\infty}^{+\infty} \frac{i(\widetilde{E}+\hbar \omega)-\hbar \varepsilon}{(\widetilde{E}+\hbar \omega)^{2}+(\hbar \varepsilon)^{2}} d \widetilde{E}\right] \\ &=\frac{\hbar}{2} \int \rho_{+} \rho_{-}\left[W_{+}\left|F_{+}\right|^{2}+W_{-}\left|F_{-}\right|^{2}\right] d \bar{E}, \end{aligned}\] where the indices \(\pm\) mark the functions' values at the special points \(\widetilde{E}_{\pm}=\pm \hbar \omega\), i.e. \(E_{n}=E_{n^{\prime}} \pm \hbar \omega\). The physics of these points becomes simple if we interpret the state \(n\), for which the equilibrium Gibbs distribution function equals \(W_{n}\), as the initial state of the environment, and \(n^{\prime}\) as its final state. Then the top-sign point corresponds to \(E_{n^{\prime}}=E_{n}-\hbar \omega\), i.e.
to the result of emission of one energy quantum \(\hbar \omega\) of the "observation" frequency \(\omega\) by the environment to the system \(s\) of our interest, while the bottom-sign point, \(E_{n^{\prime}}=E_{n}+\hbar \omega\), corresponds to the absorption of such a quantum by the environment. As Eq. (122) shows, both processes give similar, positive contributions to the force fluctuations.

The situation is different for the Fourier image of the response function \(G(\tau)\), \({ }^{40}\) \[\chi(\omega) \equiv \int_{0}^{+\infty} G(\tau) e^{i \omega \tau} d \tau,\] which is usually called either the generalized susceptibility or the response function - in our case, of the environment. Its physical meaning is that, according to Eq. (107), the complex function \(\chi(\omega)=\chi^{\prime}(\omega)+i \chi^{\prime \prime}(\omega)\) relates the Fourier amplitudes of the generalized coordinate and the generalized force: \({ }^{41}\) \[\left\langle\hat{F}_{\omega}\right\rangle=\chi(\omega) \hat{x}_{\omega} .\] The physics of its imaginary part \(\chi^{\prime \prime}(\omega)\) is especially clear. Indeed, if \(x_{\omega}\) represents a sinusoidal classical process, say \[x(t)=x_{0} \cos \omega t \equiv \frac{x_{0}}{2} e^{-i \omega t}+\frac{x_{0}}{2} e^{+i \omega t}, \text { i.e. } x_{\omega}=x_{-\omega}=\frac{x_{0}}{2},\] then, in accordance with the correspondence principle, Eq.
(124) should hold for the \(c\)-number complex amplitudes \(F_{\omega}\) and \(x_{\omega}\), enabling us to calculate the time dependence of the force as \[\begin{aligned} F(t) &=F_{\omega} e^{-i \omega t}+F_{-\omega} e^{+i \omega t}=\chi(\omega) x_{\omega} e^{-i \omega t}+\chi(-\omega) x_{-\omega} e^{+i \omega t}=\frac{x_{0}}{2}\left[\chi(\omega) e^{-i \omega t}+\chi^{*}(\omega) e^{+i \omega t}\right] \\ &=\frac{x_{0}}{2}\left[\left(\chi^{\prime}+i \chi^{\prime \prime}\right) e^{-i \omega t}+\left(\chi^{\prime}-i \chi^{\prime \prime}\right) e^{+i \omega t}\right] \equiv x_{0}\left[\chi^{\prime}(\omega) \cos \omega t+\chi^{\prime \prime}(\omega) \sin \omega t\right] . \end{aligned}\] We see that \(\chi^{\prime \prime}(\omega)\) weighs the force's part (frequently called the quadrature component) that is \(\pi / 2\)-shifted from the coordinate \(x\), i.e. is in phase with its velocity, and hence characterizes the time-averaged power flow from the system into its environment, i.e. the energy dissipation rate: \({ }^{42}\) \[\overline{\mathscr{P}}=\overline{-F(t) \dot{x}(t)}=\overline{-x_{0}\left[\chi^{\prime}(\omega) \cos \omega t+\chi^{\prime \prime}(\omega) \sin \omega t\right]\left(-\omega x_{0} \sin \omega t\right)}=\frac{x_{0}^{2}}{2} \omega \chi^{\prime \prime}(\omega) .\] Let us calculate this function from Eqs. (108) and (123), just as we have done for the spectral density of fluctuations: \[\begin{aligned} \chi^{\prime \prime}(\omega) &=\operatorname{Im} \int_{0}^{+\infty} G(\tau) e^{i \omega \tau} d \tau=-\frac{2}{\hbar} \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0} \operatorname{Im} \int_{0}^{+\infty} \frac{1}{2 i}\left(\exp \left\{i \frac{\widetilde{E} \tau}{\hbar}\right\}-\text { c.c. }\right) e^{i \omega \tau} e^{-\varepsilon \tau} d \tau \\ &=\sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0} \operatorname{Im}\left(\frac{1}{-\widetilde{E}-\hbar \omega-i \hbar \varepsilon}-\frac{1}{\widetilde{E}-\hbar \omega-i \hbar \varepsilon}\right) \\ & \equiv \sum_{n, n^{\prime}} W_{n}\left|F_{n n^{\prime}}\right|^{2} \lim _{\varepsilon \rightarrow 0}\left(\frac{\hbar \varepsilon}{(\widetilde{E}+\hbar \omega)^{2}+(\hbar \varepsilon)^{2}}-\frac{\hbar \varepsilon}{(\widetilde{E}-\hbar \omega)^{2}+(\hbar \varepsilon)^{2}}\right) . \end{aligned}\] Making the transition (118) from the double sum to the double integral, and then the variable change (120), we get \[\begin{aligned} \chi^{\prime \prime}(\omega)=\lim _{\varepsilon \rightarrow 0} \int d \bar{E}\left[\int_{-\infty}^{+\infty} W\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}-\frac{\widetilde{E}}{2}\right)\left|F_{n n^{\prime}}\right|^{2} \frac{\hbar \varepsilon}{(\widetilde{E}+\hbar \omega)^{2}+(\hbar \varepsilon)^{2}} d \widetilde{E}\right.\\ \left.-\int_{-\infty}^{+\infty} W\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}+\frac{\widetilde{E}}{2}\right) \rho\left(\bar{E}-\frac{\widetilde{E}}{2}\right)\left|F_{n n^{\prime}}\right|^{2} \frac{\hbar \varepsilon}{(\widetilde{E}-\hbar \omega)^{2}+(\hbar \varepsilon)^{2}} d \widetilde{E}\right] . \end{aligned}\] Now using the same argument about the smallness of the parameter \(\varepsilon\) as above, we may take the state densities, the matrix elements of the force, and the Gibbs probabilities out of the integrals, and work out the remaining integrals, getting a result very similar to Eq. (122): \[\chi^{\prime \prime}(\omega)=\pi \int \rho_{+} \rho_{-}\left[W_{-}\left|F_{-}\right|^{2}-W_{+}\left|F_{+}\right|^{2}\right] d \bar{E} .\] In order to relate these two results, it is sufficient to notice that according to Eq.
(24), the Gibbs probabilities \(W_{\pm}\) are related by a coefficient depending only on the temperature \(T\) and observation frequency \(\omega\): \[W_{\pm} \equiv W\left(\bar{E}+\frac{\widetilde{E}_{\pm}}{2}\right) \equiv W\left(\bar{E} \pm \frac{\hbar \omega}{2}\right)=\frac{1}{Z} \exp \left\{-\frac{\bar{E} \pm \hbar \omega / 2}{k_{\mathrm{B}} T}\right\}=W(\bar{E}) \exp \left\{\mp \frac{\hbar \omega}{2 k_{\mathrm{B}} T}\right\},\] so that both the spectral density (122) and the dissipative part (130) of the generalized susceptibility may be expressed via the same integral over the environment energies: \[\begin{aligned} &S_{F}(\omega)=\hbar \cosh \left(\frac{\hbar \omega}{2 k_{\mathrm{B}} T}\right) \int \rho_{+} \rho_{-} W(\bar{E})\left[\left|F_{+}\right|^{2}+\left|F_{-}\right|^{2}\right] d \bar{E}, \\ &\chi^{\prime \prime}(\omega)=2 \pi \sinh \left(\frac{\hbar \omega}{2 k_{\mathrm{B}} T}\right) \int \rho_{+} \rho_{-} W(\bar{E})\left[\left|F_{+}\right|^{2}+\left|F_{-}\right|^{2}\right] d \bar{E}, \end{aligned}\] and hence are universally related as \[S_{F}(\omega)=\frac{\hbar}{2 \pi} \chi^{\prime \prime}(\omega) \operatorname{coth} \frac{\hbar \omega}{2 k_{\mathrm{B}} T} .\] This is, finally, the much-celebrated Callen-Welton fluctuation-dissipation theorem (FDT). It reveals a fundamental, intimate relationship between these two effects of the environment ("no dissipation without fluctuation") - hence the name. A curious feature of the FDT is that Eq. (134) includes the same function of temperature as the average energy (26) of a quantum oscillator of frequency \(\omega\), though, as the reader could witness, the notion of the oscillator was by no means used in its derivation. As we will see in the next section, this fact leads to rather interesting consequences and even conceptual opportunities.
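Before moving on, the time averaging leading to Eq. (127) is easy to verify numerically: with \(x(t)=x_{0} \cos \omega t\) and the force expansion above, the in-phase quadrature \(\chi^{\prime}\) drops out of the average power, leaving \(\overline{\mathscr{P}}=\left(x_{0}^{2} / 2\right) \omega \chi^{\prime \prime}(\omega)\). A minimal sketch (all parameter values below are arbitrary illustrative numbers):

```python
import numpy as np

# Numerical check of Eq. (127): the average power fed into the environment
# is governed by chi'' alone; chi' contributes no net dissipation.
# All parameter values are arbitrary illustrative numbers.
omega, x0 = 2.0, 0.5
chi1, chi2 = 1.3, 0.7          # chi'(omega), chi''(omega)

t = np.linspace(0.0, 2 * np.pi / omega, 100_000, endpoint=False)  # one period
x = x0 * np.cos(omega * t)
xdot = -x0 * omega * np.sin(omega * t)
F = x0 * (chi1 * np.cos(omega * t) + chi2 * np.sin(omega * t))

P_numeric = np.mean(-F * xdot)            # time-averaged -F(t)*x'(t)
P_formula = 0.5 * x0**2 * omega * chi2    # right-hand side of Eq. (127)
print(P_numeric, P_formula)               # the two coincide
```

Changing `chi1` leaves `P_numeric` unaffected, illustrating that only the \(\pi / 2\)-shifted quadrature dissipates energy.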

In the classical limit, \(\hbar \omega \ll k_{\mathrm{B}} T\), the FDT is reduced to \[S_{F}(\omega)=\frac{\hbar}{2 \pi} \chi^{\prime \prime}(\omega) \frac{2 k_{\mathrm{B}} T}{\hbar \omega}=\frac{k_{\mathrm{B}} T}{\pi} \frac{\operatorname{Im} \chi(\omega)}{\omega} .\] In most systems of interest, the last fraction is close to a finite (positive) constant within a substantial range of relatively low frequencies. Indeed, expanding the right-hand side of Eq. (123) into the Taylor series in small \(\omega\), we get \[\chi(\omega)=\chi(0)+i \omega \eta+\ldots, \quad \text { with } \chi(0)=\int_{0}^{\infty} G(\tau) d \tau, \quad \text { and } \eta \equiv \int_{0}^{\infty} G(\tau) \tau d \tau .\] Since the temporal Green’s function \(G\) is real by definition, the Taylor expansion of \(\chi^{\prime \prime}(\omega) \equiv \operatorname{Im} \chi(\omega)\) at \(\omega=0\) starts with the linear term \(\omega \eta\), where \(\eta\) is a certain real coefficient, and, unless \(\eta=0\), is dominated by this term at small \(\omega\). The physical sense of the constant \(\eta\) becomes clear if we consider an environment that provides a force described by a simple, well-known kinematic friction law \[\langle\hat{F}\rangle=-\eta \dot{\hat{x}}, \quad \text { with } \eta \geq 0,\] where \(\eta\) is usually called the drag coefficient. For the Fourier images of coordinate and force, this gives the relation \(F_{\omega}=i \omega \eta x_{\omega}\), so that according to Eq. (124), \[\chi(\omega)=i \omega \eta, \quad \text { i.e. } \frac{\chi^{\prime \prime}(\omega)}{\omega} \equiv \frac{\operatorname{Im} \chi(\omega)}{\omega}=\eta \geq 0 .\] With this approximation, and in the classical limit, the FDT (134) is reduced to the well-known Nyquist formula:\({ }^{43}\) \[S_{F}(\omega)=\frac{k_{\mathrm{B}} T}{\pi} \eta, \quad \text { i.e. }\left\langle F_{\mathrm{f}}^{2}\right\rangle=4 k_{\mathrm{B}} T \eta d \nu .\] According to Eq.
(112), if such a constant spectral density \({ }^{44}\) persisted at all frequencies, it would correspond to a delta-correlated process \(F(t)\), with \[K_{F}(\tau)=2 \pi S_{F}(0) \delta(\tau)=2 k_{\mathrm{B}} T \eta \delta(\tau)\]
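The reduction of the FDT (134) to the Nyquist formula (139) is also easy to see numerically: for the Ohmic model \(\chi^{\prime \prime}(\omega)=\omega \eta\) of Eq. (138), the ratio of the full quantum result to the classical one equals \(\left(\hbar \omega / 2 k_{\mathrm{B}} T\right) \operatorname{coth}\left(\hbar \omega / 2 k_{\mathrm{B}} T\right) \rightarrow 1\) for \(\hbar \omega \ll k_{\mathrm{B}} T\). A quick check (the drag coefficient and frequency below are arbitrary illustrative values):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
kB = 1.380649e-23        # J/K
T = 300.0                # room temperature, K
eta = 1e-12              # drag coefficient, arbitrary illustrative value
omega = 2 * np.pi * 1e6  # 1 MHz, far below the border kB*T/hbar ~ 4e13 rad/s

chi2 = omega * eta                       # Ohmic model, Eq. (138)
S_full = hbar / (2 * np.pi) * chi2 / np.tanh(hbar * omega / (2 * kB * T))  # Eq. (134)
S_nyquist = kB * T * eta / np.pi         # Eq. (139)
print(S_full / S_nyquist)                # -> 1.000..., the classical limit
```

Raising `omega` toward \(k_{\mathrm{B}} T / \hbar\) makes the ratio visibly exceed 1, marking the onset of quantum corrections.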

- cf. Eqs. (82) and (83). Since in the classical limit the right-hand side of Eq. (109) is negligible, and the correlation function may be considered an even function of time, the symmetrized function under the integral in Eq. (113) may be rewritten just as \(\langle F(\tau) F(0)\rangle\). In the limit of relatively low observation frequencies (in the sense that \(\omega\) is much smaller than not only the quantum frontier \(k_{\mathrm{B}} T / \hbar\) but also the frequency scale of the function \(\chi^{\prime \prime}(\omega) / \omega\)), Eq. (138) may be used to recast Eq. (135) in the form:\({ }^{45}\)

\[\eta \equiv \lim _{\omega \rightarrow 0} \frac{\chi^{\prime \prime}(\omega)}{\omega}=\frac{1}{k_{\mathrm{B}} T} \int_{0}^{\infty}\langle F(\tau) F(0)\rangle d \tau .\] To conclude this section, let me return for a minute to the questions formulated in our earlier discussion of dephasing in the two-level model. In that problem, the dephasing time scale is \(T_{2}=1 /\left(2 D_{\varphi}\right)\). Hence the classical approach to the dephasing, used in Sec. 3, is adequate if \(\hbar D_{\varphi} \ll k_{\mathrm{B}} T\). Next, we may identify the operators \(\hat{f}\) and \(\hat{\sigma}_{z}\) participating in Eq. (70) with, respectively, \((-\hat{F})\) and \(\hat{x}\) participating in the general Eq. (90). Then the comparison of Eqs. (82), (89), and (140) yields \[\frac{1}{T_{2}} \equiv 2 D_{\varphi}=\frac{4 k_{\mathrm{B}} T}{\hbar^{2}} \eta,\] so that, for the model described by Eq. (137) with a temperature-independent drag coefficient \(\eta\), the rate of dephasing by a classical environment is proportional to its temperature.
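Relation (140), expressing \(\eta\) via the force correlation function, lends itself to a direct numerical experiment. The sketch below models the environmental force, purely as an illustrative assumption, by an exponentially correlated random process with \(\langle F(\tau) F(0)\rangle=\sigma^{2} e^{-|\tau| / \tau_{\mathrm{c}}}\), for which the integral in Eq. (140) equals \(\sigma^{2} \tau_{\mathrm{c}} / k_{\mathrm{B}} T\), and recovers that value from a simulated trajectory (units with \(k_{\mathrm{B}} T=1\)):

```python
import numpy as np

# Green-Kubo-type estimate of the drag coefficient, Eq. (140):
#   eta = (1/kB*T) * integral_0^inf <F(tau)F(0)> d tau.
# The force is modeled, purely for illustration, by a discrete
# Ornstein-Uhlenbeck process with <F(tau)F(0)> = sig2*exp(-|tau|/tau_c).
rng = np.random.default_rng(0)
kT, sig2, tau_c = 1.0, 2.0, 0.5   # kB*T, force variance, correlation time
dt, n = 0.01, 500_000

a = np.exp(-dt / tau_c)           # exact one-step OU decay factor
b = np.sqrt(sig2 * (1.0 - a * a)) # keeps the stationary variance at sig2
xi = rng.standard_normal(n)
F = np.empty(n)
F[0] = np.sqrt(sig2) * xi[0]
for k in range(1, n):
    F[k] = a * F[k - 1] + b * xi[k]

lags = int(5 * tau_c / dt)        # integrate out to 5 correlation times
C = np.array([np.mean(F[: n - j] * F[j:]) for j in range(lags)])
eta_est = dt * (0.5 * C[0] + C[1:].sum()) / kT   # trapezoidal rule
eta_exact = sig2 * tau_c / kT                    # analytic value: 1.0
print(eta_est, eta_exact)
```

The estimate agrees with the analytic integral to within a few percent of statistical noise, illustrating how \(\eta\) may be extracted from recorded force fluctuations alone.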

\({ }^{31}\) The most frequent example of the violation of this condition is the environment’s overheating by the energy flow from system \(s\). Let me leave it to the reader to estimate the overheating of a standard physical laboratory room by a typical dissipative quantum process - the emission of an optical photon by an atom. (Hint: it is extremely small.)

\({ }^{32}\) The FDT was first derived by Herbert Callen and Theodore Allen Welton in 1951, building on the earlier derivation of its classical limit by Harry Nyquist in 1928.

\({ }^{33}\) The FDT may be proved in several ways that are shorter than the one given below - see, e.g., either the proof in SM Secs. \(5.5\) and \(5.6\) (based on H. Nyquist’s arguments), or the original paper by H. Callen and T. Welton, Phys. Rev. 83, 34 (1951) - wonderful in its clarity. The longer approach I will describe here, besides giving the important Green-Kubo formula (109) as a byproduct, is a very useful exercise in operator manipulation and in the perturbation theory in its integral form - different from the differential forms used in Chapter 6. If the reader is not interested in this exercise, they may skip the derivation and jump straight to the result expressed by Eq. (134), which uses the notions defined by Eqs. (114) and (123).

\({ }^{34}\) For usual ("ergodic") environments, without intrinsic long-term memories, this statistical averaging over an ensemble of environments is equivalent to averaging over intermediate times - much longer than the correlation time \(\tau_{\mathrm{c}}\) of the environment, but still much shorter than the characteristic time of evolution of the system under analysis, such as the dephasing time \(T_{2}\) and the energy relaxation time \(T_{1}\) - both still to be calculated.

\({ }^{35}\) Actually, this is true for virtually any real physical system - in contrast to idealized models such as a dissipation-free oscillator that swings for ever and ever with the same amplitude and phase, thus "remembering" the initial conditions.

\({ }^{36}\) For the heroic reader who has suffered through the calculations up to this point: our conceptual work is done! What remains is just some simple math to bring the relation between Eqs. (108) and (111) to an explicit form.

\({ }^{37}\) Due to their practical importance, and certain mathematical issues of their justification for random functions, Eqs. (112)-(113) have their own grand name, the Wiener-Khinchin theorem, though, math rigor aside, they are just a straightforward corollary of the standard Fourier integral transform (115).

\({ }^{38}\) An alternative popular measure of the spectral density of a process \(F(t)\) is \(S_{F}(\nu) \equiv\left\langle F_{\mathrm{f}}^{2}\right\rangle / d \nu=4 \pi S_{F}(\omega)\), where \(\nu=\omega / 2 \pi\) is the "cyclic" frequency (measured in \(\mathrm{Hz}\)).

\({ }^{39}\) Using, e.g., MA Eq. (6.5a). (The imaginary parts of the integrals vanish, because the integration in infinite limits may always be re-centered to the finite points \(\pm \hbar \omega\).) A math-enlightened reader may have noticed that the integrals might be taken without the introduction of small \(\varepsilon\), using the Cauchy theorem - see MA Eq. (15.1).

\({ }^{40}\) The integration in Eq. (123) may be extended to the whole time axis, \(-\infty<\tau<+\infty\), if we complement the definition (107) of the function \(G(\tau)\) for \(\tau>0\) with its definition as \(G(\tau)=0\) for \(\tau<0\), in correspondence with the causality principle.

\({ }^{41}\) In order to prove this relation, it is sufficient to plug expression \(\hat{x}_{s}=\hat{x}_{\omega} e^{-i \omega t}\), or any sum of such exponents, into Eqs. (107) and then use the definition (123). This (simple) exercise is highly recommended to the reader.

\({ }^{42}\) The minus sign in Eq. (127) is due to the fact that according to Eq. (90), \(F\) is the force exerted on our system (\(s\)) by the environment, so that the force exerted by our system on the environment is \(-F\). With this sign clarification, the expression \(\mathscr{P}=-F \dot{x}=-F v\) for the instantaneous power flow is evident if \(x\) is the usual Cartesian coordinate of a 1D particle. However, according to analytical mechanics (see, e.g., CM Chapters 2 and 10), it is also valid for any {generalized coordinate, generalized force} pair which forms the interaction Hamiltonian (90).

\({ }^{43}\) Actually, the 1928 work by H. Nyquist was about the electronic noise in resistors, just discovered experimentally by his Bell Labs colleague John Bertrand Johnson. For an Ohmic resistor, as the dissipative "environment" of the electric circuit it is connected to, Eq. (137) is just Ohm’s law, and may be recast as either \(\langle V\rangle=-R(d Q / d t)=R I\), or \(\langle I\rangle=-G(d \Phi / d t)=G V\). Thus for the voltage \(V\) across an open circuit, \(\eta\) corresponds to its resistance \(R\), while for the current \(I\) in a short circuit, to its conductance \(G=1 / R\). In this case, the fluctuations described by Eq. (139) are referred to as the Johnson-Nyquist noise. (Because of this important application, any model leading to Eq. (138) is commonly referred to as the Ohmic dissipation, even if the physical nature of the variables \(x\) and \(F\) is quite different from voltage and current.)
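For a sense of scale in this application, Eq. (139) with \(\eta \rightarrow R\) gives the voltage noise density \(\left\langle V_{\mathrm{f}}^{2}\right\rangle=4 k_{\mathrm{B}} T R \, d \nu\); the resistor value below is an arbitrary illustrative choice:

```python
import numpy as np

kB = 1.380649e-23        # J/K
T, R = 300.0, 1.0e3      # room temperature and a 1 kOhm resistor (illustrative)
v_density = np.sqrt(4 * kB * T * R)   # rms voltage noise per sqrt(Hz)
print(v_density)         # ~ 4.07e-9 V/sqrt(Hz) for these values
```

This "4 nV per root-hertz" scale of a kilo-ohm resistor at room temperature sets the noise floor of many practical electronic measurements.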

\({ }^{44}\) A random process whose spectral density may be reasonably approximated by a constant is frequently called white noise, because it is a random mixture of all possible sinusoidal components with equal weights, resembling the spectral composition of natural white light.

\({ }^{45}\) Note that in some fields (especially in physical kinetics and chemical physics), this particular limit of the Nyquist formula is called the Green-Kubo (or just "Kubo") formula. However, in view of the FDT’s development history (described above), it is much more reasonable to associate these names with Eq. (109) - as is done in most fields of physics.