Physics LibreTexts

10.4: The Hamilton-Jacobi Equation


The action \(S\), defined by Eq. (47), may be used for one more analytical formulation of classical mechanics. For that, we need to make one more, different commitment: \(S\) has to be considered a function of the following independent arguments: the final time point \(t_{\text{fin}}\) (which I will, for brevity, denote as \(t\) in this section), and the set of generalized coordinates (but not of the generalized velocities!) at that point:

\[S \equiv \int_{t_{\text{ini}}}^{t} L\, dt = S\left[t, q_j(t)\right]. \tag{64}\]

Let us calculate the variation of this (from the variational point of view, new!) function, resulting from an arbitrary combination of variations of the final values \(q_j(t)\) of the coordinates, while keeping \(t\) fixed. Formally, this may be done by repeating the variational calculations described by Eqs. (49)-(51), except that now the variations \(\delta q_j\) at the final point \((t)\) do not necessarily equal zero. As a result, we get

\[\delta S = \sum_j \left.\frac{\partial L}{\partial \dot{q}_j}\,\delta q_j\right|_{t} - \int_{t_{\text{ini}}}^{t} dt \sum_j \left[\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j}\right]\delta q_j. \tag{65}\]

For motion along the real trajectory, i.e. one satisfying the Lagrange equations, the second term of this expression equals zero. Hence Eq. (65) shows that, for (any) fixed time \(t\),

\[\frac{\partial S}{\partial q_j} = \frac{\partial L}{\partial \dot{q}_j}. \tag{66}\]

But the last derivative is nothing other than the generalized momentum \(p_j\) (see Eq. (2.31)), so that

\[\frac{\partial S}{\partial q_j} = p_j. \tag{67}\]

(As a reminder, both parts of this relation refer to the final moment \(t\) of the trajectory.) As a result, the full derivative of the action \(S[t, q_j(t)]\) over time takes the form

\[\frac{dS}{dt} = \frac{\partial S}{\partial t} + \sum_j \frac{\partial S}{\partial q_j}\dot{q}_j = \frac{\partial S}{\partial t} + \sum_j p_j \dot{q}_j. \tag{68}\]

Now, by the very definition (64), the full derivative \(dS/dt\) is nothing more than the function \(L\), so that Eq. (68) yields

\[\frac{\partial S}{\partial t} = L - \sum_j p_j \dot{q}_j. \tag{69}\]

However, according to the definition (2) of the Hamiltonian function \(H\), the right-hand side of Eq. (69) is just \((-H)\), so that we get an extremely simple-looking Hamilton-Jacobi equation:

\[\frac{\partial S}{\partial t} = -H. \tag{70}\]

This simplicity is, however, rather deceiving, because in order to use this equation for the calculation of the function \(S(t, q_j)\) for any particular problem, the Hamiltonian function has first to be expressed as a function of the time \(t\), the generalized coordinates \(q_j\), and the generalized momenta \(p_j\) (which may, according to Eq. (67), be represented just as the derivatives \(\partial S/\partial q_j\)).

Let us see how this procedure works for the simplest case of a 1D system with the Hamiltonian function given by Eq. (10). In this case, the only generalized momentum is \(p = \partial S/\partial q\), so that

\[H = \frac{p^2}{2m_{\text{ef}}} + U_{\text{ef}}(q, t) = \frac{1}{2m_{\text{ef}}}\left(\frac{\partial S}{\partial q}\right)^2 + U_{\text{ef}}(q, t), \tag{71}\]

and Eq. (70) is reduced to a partial differential equation,

\[\frac{\partial S}{\partial t} + \frac{1}{2m_{\text{ef}}}\left(\frac{\partial S}{\partial q}\right)^2 + U_{\text{ef}}(q, t) = 0. \tag{72}\]

Its solution may be readily found in the easiest case of a time-independent potential energy, \(U_{\text{ef}} = U_{\text{ef}}(q)\). In this case, Eq. (72) is evidently satisfied by the following variable-separated solution:

\[S(t, q) = S_0(q) + \text{const} \times t. \tag{73}\]

Plugging this solution into Eq. (72), we see that since the sum of the two last terms on the left-hand side of that equation represents the full mechanical energy \(E\), the constant in Eq. (73) is nothing but \((-E)\). Thus for the function \(S_0(q)\) we get an ordinary differential equation

\[-E + \frac{1}{2m_{\text{ef}}}\left(\frac{dS_0}{dq}\right)^2 + U_{\text{ef}}(q) = 0. \tag{74}\]

Integrating it, we get

\[S_0 = \int \left\{2m_{\text{ef}}\left[E - U_{\text{ef}}(q)\right]\right\}^{1/2} dq + \text{const}, \tag{75}\]

so that, finally, the action is equal to

\[S = \int \left\{2m_{\text{ef}}\left[E - U_{\text{ef}}(q)\right]\right\}^{1/2} dq - Et + \text{const}. \tag{76}\]

For the case of 1D motion of a single 1D particle, i.e. for \(q = x\), \(m_{\text{ef}} = m\), \(U_{\text{ef}}(q) = U(x)\), this solution is just the 1D case of the more general Eqs. (59)-(60), which were obtained above in a much simpler way. (In particular, \(S_0\) is just the abbreviated action.)
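The derivation above can be checked symbolically. The sketch below (using sympy) verifies the 1D Hamilton-Jacobi equation (72) for two illustrative cases not taken from the text: the explicit free-particle action \(S = mq^2/2t\) (a particle launched from \(q = 0\) at \(t = 0\), with \(U_{\text{ef}} = 0\)), and the variable-separated solution (76) for a generic time-independent potential, where the integral (75) is left unevaluated and differentiated via the fundamental theorem of calculus:

```python
import sympy as sp

t, q, m, E = sp.symbols('t q m E', positive=True)

# Illustrative check 1: the free-particle action S = m q^2 / (2t)
# (a particle launched from q = 0 at t = 0, U_ef = 0) must satisfy Eq. (72).
S_free = m * q**2 / (2 * t)
residual_free = sp.diff(S_free, t) + sp.diff(S_free, q)**2 / (2 * m)
assert sp.simplify(residual_free) == 0

# Illustrative check 2: the variable-separated solution (76) for a generic
# time-independent U_ef(q). The integral (75) is kept unevaluated; sympy
# differentiates it with respect to its upper variable automatically.
U = sp.Function('U')(q)
S0 = sp.Integral(sp.sqrt(2 * m * (E - U)), q)   # Eq. (75)
S_sep = S0 - E * t                              # Eq. (76)
residual_sep = sp.diff(S_sep, t) + sp.diff(S_sep, q)**2 / (2 * m) + U
assert sp.simplify(residual_sep) == 0

print("both actions satisfy the Hamilton-Jacobi equation (72)")
```

Note how the second check reproduces the logic of Eqs. (73)-(74): the time derivative contributes \(-E\), while the squared coordinate derivative contributes \(E - U_{\text{ef}}(q)\), so the residual cancels identically.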

This particular case illustrates that the Hamilton-Jacobi equation is not the most efficient way to solve most practical problems of classical mechanics. However, it may be rather useful for studies of certain mathematical aspects of dynamics.²³ Moreover, in the early 1950s this approach was extended to a completely different field, the theory of optimal control, in which the role of the action \(S\) is played by the so-called cost function: a certain functional of a system (understood in a very general sense of this term) that should be minimized by an optimal choice of a control signal, i.e. a function of time that affects the system's evolution. From the point of view of this mathematical theory, Eq. (70) is a particular case of a more general Hamilton-Jacobi-Bellman equation.²⁴
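In discrete time, the Hamilton-Jacobi-Bellman idea reduces to Bellman's backward recursion for the cost-to-go function, which plays the role of the action \(S\). The following toy sketch (the state grid, dynamics, and quadratic cost are all illustrative assumptions, not taken from the text) shows the recursion steering a state to the origin:

```python
# Toy discrete-time dynamic programming: the cost-to-go V plays the role of
# the action S, and Bellman's backward recursion is the discrete analog of
# the Hamilton-Jacobi-Bellman equation. All modeling details are illustrative.

STATES = [i * 0.5 for i in range(-8, 9)]   # state grid: -4.0 ... 4.0
CONTROLS = [-0.5, 0.0, 0.5]                # admissible control values
HORIZON = 10

def step(x, u):
    """Toy dynamics: the control shifts the state, clipped to the grid."""
    return min(max(x + u, STATES[0]), STATES[-1])

def cost(x, u):
    """Running cost: quadratic penalty on the state and the control effort."""
    return x**2 + u**2

V = {x: 0.0 for x in STATES}   # terminal cost-to-go is zero
policy = {}
for k in reversed(range(HORIZON)):
    V_new, pol = {}, {}
    for x in STATES:
        # Bellman's principle: choose the first control optimally, assuming
        # the rest of the trajectory (already encoded in V) is optimal.
        u_best = min(CONTROLS, key=lambda u: cost(x, u) + V[step(x, u)])
        V_new[x] = cost(x, u_best) + V[step(x, u_best)]
        pol[x] = u_best
    V, policy[k] = V_new, pol

# The resulting policy steers an initial state x = 2.0 to the origin:
x = 2.0
for k in range(HORIZON):
    x = step(x, policy[k][x])
print(x)  # -> 0.0
```

The minimization inside the loop is the discrete counterpart of minimizing over the control signal in the continuous theory; in the limit of small time steps it goes over into a partial differential equation for the cost-to-go, just as Eq. (70) is an equation for \(S\).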


23 See, e.g., Chapters 6-9 in I. C. Percival and D. Richards, Introduction to Dynamics, Cambridge U. Press, 1983.

24 See, e.g., D. Bertsekas, Dynamic Programming and Optimal Control, vols. 1 and 2, Athena Scientific, 2005 and 2007. The reader should not be intimidated by the very unnatural term "dynamic programming", which was invented by the founding father of this field, Richard Ernest Bellman, to lure government bureaucrats into funding his research, deemed too theoretical at that time. (Presently, it has a broad range of important applications.)


This page titled 10.4: The Hamilton-Jacobi Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Konstantin K. Likharev via source content that was edited to the style and standards of the LibreTexts platform.
