
10.4: The Hamilton-Jacobi Equation


The action \(S\), defined by Eq. (47), may be used for one more analytical formulation of classical mechanics. For that, we need to make one more commitment: \(S\) has to be considered a function of the following independent arguments: the final time point \(t_{\text {fin }}\) (which I will, for brevity, denote as \(t\) in this section), and the set of generalized coordinates (but not of the generalized velocities!) at that point: \[S \equiv \int_{t_{\mathrm{ini}}}^{t} L d t=S\left[t, q_{j}(t)\right]\] Let us calculate the variation of this (from the variational point of view, new!) function, resulting from an arbitrary combination of variations of the final values \(q_{j}(t)\) of the coordinates, while keeping \(t\) fixed. Formally, this may be done by repeating the variational calculations described by Eqs. (49)-(51), except that now the variations \(\delta q_{j}\) at the final point \((t)\) do not necessarily equal zero. As a result, we get \[\delta S=\left.\sum_{j} \frac{\partial L}{\partial \dot{q}_{j}} \delta q_{j}\right|_{t}-\int_{t_{\mathrm{ini}}}^{t} d t \sum_{j}\left[\frac{d}{d t}\left(\frac{\partial L}{\partial \dot{q}_{j}}\right)-\frac{\partial L}{\partial q_{j}}\right] \delta q_{j} .\] For motion along the real trajectory, i.e. the one satisfying the Lagrange equations, the second term of this expression equals zero. Hence Eq. (65) shows that, for (any) fixed time \(t\), \[\frac{\partial S}{\partial q_{j}}=\frac{\partial L}{\partial \dot{q}_{j}} .\] But the last derivative is nothing other than the generalized momentum \(p_{j}\) - see Eq. (2.31), so that \[\frac{\partial S}{\partial q_{j}}=p_{j} .\] (As a reminder, both parts of this relation refer to the final moment \(t\) of the trajectory.)
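For readers who like to verify such relations symbolically, here is a minimal sanity check of Eq. (67) for the free 1D particle, whose action along the real (straight-line) trajectory between the points \((t_0, q_0)\) and \((t, q)\) is the standard result \(S = m(q-q_0)^2/2(t-t_0)\). The short script below, using the sympy library, is only an illustrative sketch; it also anticipates the Hamilton-Jacobi equation derived next:

```python
import sympy as sp

m, t, t0, q, q0 = sp.symbols('m t t0 q q0', positive=True)

# Action of a free 1D particle along the real trajectory:
# S = m (q - q0)^2 / [2 (t - t0)]
S = m * (q - q0)**2 / (2 * (t - t0))

# Eq. (67): the generalized momentum is p = dS/dq
p = sp.diff(S, q)
v = (q - q0) / (t - t0)                 # velocity on that trajectory
assert sp.simplify(p - m * v) == 0      # indeed p = m v

# Anticipating Eq. (70): dS/dt = -H, with H = p^2 / (2m) for a free particle
H = p**2 / (2 * m)
assert sp.simplify(sp.diff(S, t) + H) == 0

print("Free-particle checks of Eqs. (67) and (70) passed")
```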
As a result, the full derivative of the action \(S\left[t, q_{j}(t)\right]\) over time takes the form \[\frac{d S}{d t}=\frac{\partial S}{\partial t}+\sum_{j} \frac{\partial S}{\partial q_{j}} \dot{q}_{j}=\frac{\partial S}{\partial t}+\sum_{j} p_{j} \dot{q}_{j} .\] Now, by the very definition (64), the full derivative \(d S / d t\) is nothing more than the function \(L\), so that Eq. (68) yields \[\frac{\partial S}{\partial t}=L-\sum_{j} p_{j} \dot{q}_{j} .\] However, according to the definition (2) of the Hamiltonian function \(H\), the right-hand side of Eq. (69) is just \((-H)\), so that we get an extremely simple-looking Hamilton-Jacobi equation \[\frac{\partial S}{\partial t}=-H \text {. }\] This simplicity is, however, rather deceiving, because in order to use this equation for the calculation of the function \(S\left(t, q_{j}\right)\) for any particular problem, the Hamiltonian function has first to be expressed as a function of time \(t\), the generalized coordinates \(q_{j}\), and the generalized momenta \(p_{j}\) (which may be, according to Eq. (67), represented just as the derivatives \(\partial S / \partial q_{j}\)). Let us see how this procedure works for the simplest case of a 1D system with the Hamiltonian function given by Eq. (10). In this case, the only generalized momentum is \(p=\partial S / \partial q\), so that \[H=\frac{p^{2}}{2 m_{\mathrm{ef}}}+U_{\mathrm{ef}}(q, t)=\frac{1}{2 m_{\mathrm{ef}}}\left(\frac{\partial S}{\partial q}\right)^{2}+U_{\mathrm{ef}}(q, t),\] and Eq. (70) is reduced to a partial differential equation, \[\frac{\partial S}{\partial t}+\frac{1}{2 m_{\mathrm{ef}}}\left(\frac{\partial S}{\partial q}\right)^{2}+U_{\mathrm{ef}}(q, t)=0 .\] Its solution may be readily found in the easiest case of time-independent potential energy \(U_{\text {ef }}=\) \(U_{\text {ef }}(q)\). In this case, Eq.
(72) is evidently satisfied by the following variable-separated solution: \[S(t, q)=S_{0}(q)+\text { const } \times t .\] Plugging this solution into Eq. (72), we see that since the sum of the last two terms on the left-hand side of that equation represents the full mechanical energy \(E\), the constant in Eq. (73) is nothing but \((-E)\). Thus for the function \(S_{0}(q)\) we get an ordinary differential equation \[-E+\frac{1}{2 m_{\mathrm{ef}}}\left(\frac{d S_{0}}{d q}\right)^{2}+U_{\mathrm{ef}}(q)=0 .\] Integrating it, we get \[S_{0}=\int\left\{2 m_{\mathrm{ef}}\left[E-U_{\mathrm{ef}}(q)\right]\right\}^{1 / 2} d q+\text { const }\] so that, finally, the action is equal to \[S=\int\left\{2 m_{\mathrm{ef}}\left[E-U_{\mathrm{ef}}(q)\right]\right\}^{1 / 2} d q-E t+\text { const. }\] For the case of 1D motion of a single 1D particle, i.e. for \(q=x\), \(m_{\mathrm{ef}}=m\), \(U_{\mathrm{ef}}(q)=U(x)\), this solution is just the 1D case of the more general Eqs. (59)-(60), which were obtained above in a much simpler way. (In particular, \(S_{0}\) is just the abbreviated action.)
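Those who wish to double-check Eq. (76) can also do so symbolically. The sympy sketch below verifies that, for an arbitrary time-independent potential \(U_{\mathrm{ef}}(q)\), the separated solution \(S = S_0(q) - Et\), with \(dS_0/dq\) given by Eq. (75), satisfies the Hamilton-Jacobi equation (72); no explicit form of \(U_{\mathrm{ef}}\) is assumed:

```python
import sympy as sp

m, E, q = sp.symbols('m E q', positive=True)
U = sp.Function('U')(q)          # arbitrary time-independent effective potential

# Eq. (75) gives the derivative of the abbreviated action:
# dS0/dq = {2 m [E - U(q)]}^{1/2}
dS0_dq = sp.sqrt(2 * m * (E - U))

# Full action, Eq. (76): S = S0(q) - E t, hence
dS_dq = dS0_dq                   # partial S / partial q
dS_dt = -E                       # partial S / partial t

# Left-hand side of the Hamilton-Jacobi equation (72):
lhs = dS_dt + dS_dq**2 / (2 * m) + U
assert sp.simplify(lhs) == 0

print("Eq. (76) satisfies the Hamilton-Jacobi equation (72)")
```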

This particular case illustrates that the Hamilton-Jacobi equation is not the most efficient way to solve most practical problems of classical mechanics. However, it may be rather useful for studies of certain mathematical aspects of dynamics. \({ }^{22}\) Moreover, in the early 1950s this approach was extended to a completely different field - the optimal control theory, in which the role of the action \(S\) is played by the so-called cost function - a certain functional of a system (understood in the very general sense of this term), which should be minimized by an optimal choice of a control signal - a function of time that affects the system’s evolution. From the point of view of this mathematical theory, Eq. (70) is a particular case of the more general Hamilton-Jacobi-Bellman equation. \({ }^{23}\)
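To give the reader a feel for the Bellman equation mentioned above, here is a minimal discrete-time sketch of its underlying principle: the optimal cost-to-go \(V_k(x)\) is computed by backward recursion, \(V_k(x) = \min_u [c(x,u) + V_{k+1}(x')]\). The toy dynamics, grids, and cost function below are invented purely for illustration and are not part of the original discussion:

```python
# Minimal backward (Bellman) recursion on small integer grids (toy example).
states = range(-3, 4)        # hypothetical discrete state grid
controls = range(-2, 3)      # hypothetical control values
N = 4                        # time horizon (number of steps)

def step(x, u):
    """Toy dynamics x_{k+1} = x_k + u_k, clipped to the state grid."""
    return max(-3, min(3, x + u))

def stage_cost(x, u):
    """Running cost, analogous to a Lagrangian: penalize state and control."""
    return x**2 + u**2

# Terminal cost, then Bellman recursion:
# V_k(x) = min_u [ stage_cost(x, u) + V_{k+1}(step(x, u)) ]
V = {x: x**2 for x in states}
for _ in range(N):
    V = {x: min(stage_cost(x, u) + V[step(x, u)] for u in controls)
         for x in states}

print(V[0], V[3])   # optimal costs-to-go from x = 0 and x = 3
```

Note that the recursion runs backward in time, exactly as the minimization of the action functional refers trajectories to a fixed final point.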


\({ }^{22}\) See, e.g., Chapters 6-9 in I. C. Percival and D. Richards, Introduction to Dynamics, Cambridge U. Press, 1983.

\({ }^{23}\) See, e.g., D. Bertsekas, Dynamic Programming and Optimal Control, vols. 1 and 2, Athena Scientific, 2005 and 2007. The reader should not be intimidated by the very unnatural term “dynamic programming”, which was invented by the founding father of this field, Richard Ernest Bellman, to lure government bureaucrats into funding his research, deemed too theoretical at that time. (Presently, it has a broad range of important applications.)


    This page titled 10.4: The Hamilton-Jacobi Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Konstantin K. Likharev via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
