
2.3: General Method for the Minimization Problem


    To emphasize the generality of the method, we’ll just write

    \[J[y]=\int_{x_{1}}^{x_{2}} f\left(y, y^{\prime}\right) d x \quad\left(y^{\prime}=d y / d x\right)\]
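As a concrete illustration (not from the text), taking \(f\left(y, y^{\prime}\right)=\sqrt{1+y^{\prime 2}}\) makes \(J[y]\) the arc length of the curve \(y(x)\). A short SymPy sketch, using two hypothetical trial curves joining \((0,0)\) to \((1,1)\), evaluates the functional directly:

```python
import sympy as sp

x = sp.symbols('x')

def J(y_expr, a=0, b=1):
    """Evaluate J[y] = ∫ f(y, y') dx for the arc-length integrand f = sqrt(1 + y'^2)."""
    f = sp.sqrt(1 + sp.diff(y_expr, x)**2)
    return sp.integrate(f, (x, a, b))

J_line = J(x)        # straight line y = x
J_parab = J(x**2)    # parabola y = x^2, same endpoints
```

The straight line gives \(\sqrt{2} \approx 1.414\), while the parabola gives a larger value (about 1.479), consistent with the straight line being the minimizer for this particular \(f\).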

    Then, requiring \(J[y]\) to be stationary under any infinitesimal variation \(\delta y(x)\) (equal to zero at the fixed endpoints),

    \[\delta J[y]=\int_{x_{1}}^{x_{2}}\left[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y} \delta y(x)+\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}} \delta y^{\prime}(x)\right] d x=0\]

    To make further progress, we write \(\delta y^{\prime}=\delta(d y / d x)=(d / d x) \delta y\), then integrate the second term by parts, remembering \(\delta y=0\) at the endpoints, to get

    \[\delta J[y]=\int_{x_{1}}^{x_{2}}\left[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y}-\frac{d}{d x}\left(\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}}\right)\right] \delta y(x) d x=0\]
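The integration-by-parts step can be sanity-checked on concrete functions: with \(\delta y\) vanishing at both endpoints, \(\int g \,\delta y^{\prime}\, dx=-\int g^{\prime}\, \delta y\, dx\) with no boundary term. A small SymPy sketch, using an arbitrarily chosen \(g\) (standing in for \(\partial f / \partial y^{\prime}\)) and a variation that vanishes at the endpoints:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.cos(x)        # stands in for ∂f/∂y' (arbitrary smooth choice)
eta = x * (1 - x)    # variation δy, zero at x = 0 and x = 1

# ∫ g η' dx versus -∫ g' η dx over [0, 1]
lhs = sp.integrate(g * sp.diff(eta, x), (x, 0, 1))
rhs = -sp.integrate(sp.diff(g, x) * eta, (x, 0, 1))
```

The two integrals agree exactly, confirming that the boundary term drops out when the variation vanishes at the endpoints.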

    Since this is true for any infinitesimal variation, we can choose a variation that is nonzero only in an arbitrarily small neighborhood of any given point in the interval; the integral can then vanish only if the bracketed factor vanishes at that point, and since the point was arbitrary, we deduce that everywhere in the interval

    \[\frac{\partial f\left(y, y^{\prime}\right)}{\partial y}-\frac{d}{d x}\left(\frac{\partial f\left(y, y^{\prime}\right)}{\partial y^{\prime}}\right)=0\]

    This general result is called the Euler-Lagrange equation. It’s very important—you’ll be seeing it again.
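As a quick check (an illustration, not part of the text), SymPy's `euler_equations` implements exactly this result. Applying it to the arc-length integrand \(\sqrt{1+y^{\prime 2}}\) confirms that a straight line satisfies the Euler-Lagrange equation:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Arc-length integrand f(y, y') = sqrt(1 + y'^2)
f = sp.sqrt(1 + sp.Derivative(y(x), x)**2)

# euler_equations returns the Euler-Lagrange equation for f
eq, = euler_equations(f, y(x), x)

# A straight line, e.g. y = 2x + 3, should make the equation's residual vanish
residual = (eq.lhs - eq.rhs).subs(y(x), 2*x + 3).doit()
```

Here `residual` simplifies to zero, while substituting a non-linear trial such as \(y=x^{2}\) leaves a nonzero residual, as expected.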


    This page titled 2.3: General Method for the Minimization Problem is shared under a not declared license and was authored, remixed, and/or curated by Michael Fowler.