2.3: General Method for the Minimization Problem
To emphasize the generality of the method, we’ll just write
\[J[y]=\int_{x_{1}}^{x_{2}} f\left(y, y^{\prime}\right) d x \quad\left(y^{\prime}=d y / d x\right)\]
Then for \(J\) to be stationary under any infinitesimal variation \(\delta y(x)\) (taken to vanish at the fixed endpoints), we require
\[\delta J[y]=\int_{x_{1}}^{x_{2}}\left[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y} \delta y(x)+\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}} \delta y^{\prime}(x)\right] d x=0\]
To make further progress, we write \(\delta y^{\prime}=\delta(d y / d x)=(d / d x) \delta y\), then integrate the second term by parts, remembering \(\delta y=0\) at the endpoints, to get
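Written out, the integration by parts is
\[\int_{x_{1}}^{x_{2}} \frac{\partial f}{\partial y^{\prime}} \frac{d}{d x}(\delta y)\, d x=\left[\frac{\partial f}{\partial y^{\prime}} \delta y\right]_{x_{1}}^{x_{2}}-\int_{x_{1}}^{x_{2}} \frac{d}{d x}\left(\frac{\partial f}{\partial y^{\prime}}\right) \delta y(x)\, d x\]
and the boundary term vanishes because \(\delta y\left(x_{1}\right)=\delta y\left(x_{2}\right)=0\), leaving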
\[\delta J[y]=\int_{x_{1}}^{x_{2}}\left[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y}-\frac{d}{d x}\left(\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}}\right)\right] \delta y(x) d x=0\]
Since this must hold for every infinitesimal variation, we can choose a \(\delta y(x)\) that is nonzero only in an arbitrarily small neighborhood of any point in the interval, so the bracketed factor must vanish everywhere:
\[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y}-\frac{d}{d x}\left(\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}}\right)=0\]
This general result is called the Euler-Lagrange equation. It’s very important—you’ll be seeing it again.
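As a concrete check of the Euler-Lagrange equation, here is a short symbolic sketch (not from the text, using `sympy`) applied to the arc-length integrand \(f(y, y') = \sqrt{1 + y'^2}\), whose minimizing curves should be straight lines:

```python
import sympy as sp

# Illustrative example (assumed, not from the text): apply the
# Euler-Lagrange equation to the arc-length integrand.
x = sp.symbols('x')
y = sp.Function('y')
yp = y(x).diff(x)

f = sp.sqrt(1 + yp**2)

# Euler-Lagrange: df/dy - d/dx (df/dy') = 0
euler_lagrange = sp.simplify(f.diff(y(x)) - f.diff(yp).diff(x))
print(euler_lagrange)  # proportional to y''(x), so the equation forces y'' = 0

# Any straight line y = a + b*x should satisfy it identically.
a, b = sp.symbols('a b')
residual = sp.simplify(euler_lagrange.subs(y(x), a + b*x).doit())
print(residual)  # 0
```

Since \(f\) here has no explicit \(y\)-dependence, the first term vanishes and the equation reduces to \(y'' = 0\): straight lines, as expected for the shortest path between two points.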