# 5.5: Functions with Several Independent Variables


## Functions with several independent variables $$y_{i}(x)$$

The discussion so far has focused on systems having only a single function $$y(x)$$ such that the functional is an extremum. It is more common to have a functional that depends upon several variables, $$f\left[ y_{1}(x),y_{1}^{\prime }(x),y_{2}(x),y_{2}^{\prime }(x),\dots ;x\right]$$, which can be written as

$F=\int_{x_{1}}^{x_{2}}\sum_{i=1}^{N}f\left[ y_{i}(x),y_{i}^{\prime }(x);x \right] dx$

where $$i=1,2,3,\dots ,N.$$

By analogy with the one dimensional problem, define neighboring functions $$\eta _{i}$$ for each variable. Then

\begin{align} y_{i}(\epsilon ,x) &=y_{i}(0,x)+\epsilon \eta _{i}(x) \label{5.17} \\ y_{i}^{\prime }(\epsilon ,x) &\equiv \frac{dy_{i}(\epsilon ,x)}{dx}=\frac{dy_{i}(0,x)}{dx}+\epsilon \frac{d\eta _{i}}{dx} \notag\end{align}

where the $$\eta _{i}$$ are independent functions of $$x$$ that vanish at $$x_{1}$$ and $$x_{2}.$$ Using equations ($$5.2.10$$) and \ref{5.17} leads to the requirement for an extremum value $\frac{\partial F}{\partial \epsilon }=\int_{x_{1}}^{x_{2}}\sum_{i=1}^{N}\left( \frac{\partial f}{\partial y_{i}}\frac{\partial y_{i}}{\partial \epsilon }+\frac{\partial f}{\partial y_{i}^{\prime }}\frac{\partial y_{i}^{\prime }}{\partial \epsilon }\right) dx=\int_{x_{1}}^{x_{2}}\sum_{i=1}^{N}\left( \frac{\partial f}{\partial y_{i}}-\frac{d}{dx}\frac{\partial f}{\partial y_{i}^{\prime }}\right) \eta _{i}(x)dx=0$ where the second form follows by integrating the $$\frac{\partial f}{\partial y_{i}^{\prime }}\frac{\partial y_{i}^{\prime }}{\partial \epsilon }$$ term by parts, the boundary term vanishing because $$\eta _{i}(x_{1})=\eta _{i}(x_{2})=0$$.

If the variables $$y_{i}(x)$$ are independent, then the $$\eta _{i}(x)$$ are independent. Since the $$\eta _{i}(x)$$ are independent, evaluating the above equation at $$\epsilon =0$$ implies that each term in the bracket must vanish independently. That is, Euler’s differential equation becomes a set of $$N$$ equations for the $$N$$ independent variables

$\frac{\partial f}{\partial y_{i}}-\frac{d}{dx}\frac{\partial f}{\partial y_{i}^{\prime }}=0$

where $$i=1,2,3,\dots ,N.$$ Thus, each of the $$N$$ equations can be solved independently when the $$N$$ variables are independent. Euler’s equation involves partial derivatives with respect to the dependent variables $$y_{i}$$, $$y_{i}^{\prime }$$ and the total derivative with respect to the independent variable $$x$$.
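The statement that the extremal satisfies all $$N$$ Euler equations can be checked numerically. The following sketch is a minimal illustration, assuming the simple integrand $$f=y_{1}^{\prime 2}+y_{2}^{\prime 2}$$ (not taken from the text), for which the Euler equations give $$y_{i}^{\prime \prime }=0$$, so the extremals are straight lines:

```python
import numpy as np

# Discretized functional F = ∫ (y1'^2 + y2'^2) dx on [0, 1].
# For this integrand Euler's equations give y_i'' = 0, so the
# extremals are straight lines.

def functional(y1, y2, x):
    """Rectangle-rule approximation of ∫ (y1'^2 + y2'^2) dx."""
    dx = x[1] - x[0]
    y1p = np.diff(y1) / dx          # forward-difference derivatives
    y2p = np.diff(y2) / dx
    return float(np.sum((y1p**2 + y2p**2) * dx))

x = np.linspace(0.0, 1.0, 2001)
y1_line, y2_line = 2.0 * x, -1.0 * x        # straight-line extremal
eta = np.sin(np.pi * x)                     # vanishes at x1 and x2

F_line = functional(y1_line, y2_line, x)
F_pert = functional(y1_line + 0.1 * eta, y2_line + 0.1 * eta, x)
print(F_line, F_pert)   # the extremal gives the smaller value
```

The perturbation $$\eta (x)=\sin \pi x$$ vanishes at both endpoints, as required, and adding it to either function raises the value of the functional above that of the straight-line extremal.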

Example $$\PageIndex{1}$$: Fermat's Principle

In 1662 Fermat proposed that the propagation of light obeys the generalized principle of least transit time. In optics, Fermat’s principle, or the principle of least time, is the principle that the path taken between two points by a ray of light is the path that can be traversed in the least time. Historically, the proof of Fermat’s principle by Johann Bernoulli was one of the first triumphs of the calculus of variations, and it served as a guiding principle in the formulation of physical laws using variational calculus.

Consider the geometry shown in the figure, where the light travels from the point $$P_{1}(0,y_{1},0)$$ to the point $$P_{2}(x_{2},-y_{2},0)$$. The light beam intersects a plane glass interface at the point $$Q(x,0,z)$$.

The French mathematician Fermat discovered that the required path travelled by light is the path for which the travel time $$t$$ is a minimum. That is, the transit time from the initial point $$P_{1}$$ to the final point $$P_{2}$$ is given by

$t=\int_{1}^{2}dt=\int_{1}^{2} \frac{ds}{v}=\frac{1}{c}\int_{1}^{2}nds=\frac{1}{c}\int_{1}^{2}n(x,y,z)\sqrt{ 1+\left( x^{\prime }\right) ^{2}+\left( z^{\prime }\right) ^{2}}dy\nonumber$

assuming that the velocity of light in any medium is given by $$v=c/n$$ where $$n$$ is the refractive index of the medium and $$c$$ is the velocity of light in vacuum.

This is a problem that has two dependent variables $$x(y)$$ and $$z(y)$$ with $$y$$ chosen as the independent variable. The integral can be broken into two parts $$y_{1}\rightarrow 0$$ and $$0\rightarrow -y_{2}.$$

$t=\frac{1}{c}\left[ \int_{y_{1}}^{0}n_{1}\sqrt{1+\left( x^{\prime }\right) ^{2}+\left( z^{\prime }\right) ^{2}}dy+\int_{0}^{-y_{2}}n_{2}\sqrt{1+\left( x^{\prime }\right) ^{2}+\left( z^{\prime }\right) ^{2}}dy\right]\nonumber$

The integrands are functions of $$x^{\prime }$$ and $$z^{\prime }$$ but not of $$x$$ or $$z$$. Thus Euler’s equation for $$z$$ simplifies to

$0+\frac{d}{dy}\left[ \frac{1}{c}\left( \frac{n_{1}z^{\prime }}{\sqrt{1+x^{\prime 2}+z^{\prime 2}}}+\frac{n_{2}z^{\prime }}{\sqrt{1+x^{\prime 2}+z^{\prime 2}}}\right) \right] =0\nonumber$

This implies that $$z^{\prime }=0$$, so $$z$$ is a constant. Since the initial and final values were chosen to be $$z_{1}=z_{2}=0$$, it follows that $$z=0$$ at the interface. Similarly, Euler’s equation for $$x$$ is

$0+\frac{d}{dy}\left[ \frac{1}{c}\left( \frac{n_{1}x^{\prime }}{\sqrt{1+x^{\prime 2}+z^{\prime 2}}}+\frac{n_{2}x^{\prime }}{\sqrt{1+x^{\prime 2}+z^{\prime 2}}}\right) \right] =0\nonumber$

But $$x^{\prime }=\tan \theta _{1}$$ for $$n_{1}$$ and $$x^{\prime }=-\tan \theta _{2}$$ for $$n_{2}$$ and it was shown that $$z^{\prime }=0$$. Thus

$0+\frac{d}{dy}\left[ \frac{1}{c}\left( \frac{n_{1}\tan \theta _{1}}{\sqrt{1+\tan ^{2}\theta _{1}}}-\frac{n_{2}\tan \theta _{2}}{\sqrt{1+\tan ^{2}\theta _{2}}}\right) \right] =\frac{d}{dy}\left[ \frac{1}{c}\left( n_{1}\sin \theta _{1}-n_{2}\sin \theta _{2}\right) \right] =0\nonumber$

Therefore $$\frac{1}{c}(n_{1}\sin \theta _{1}-n_{2}\sin \theta _{2})$$ is a constant, and this constant must be zero since $$\theta _{1}=\theta _{2}$$ when $$n_{1}=n_{2}$$. Thus Fermat’s principle leads to Snell’s Law

$n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}\nonumber$
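The trigonometric simplification and the resulting law can be checked numerically. The sketch below verifies the identity $$\tan \theta /\sqrt{1+\tan ^{2}\theta }=\sin \theta$$ used above, and then computes a refracted angle from Snell’s Law; the indices and incident angle are illustrative assumptions, not values from the text:

```python
import math

# Check the identity tan θ / sqrt(1 + tan² θ) = sin θ used above.
for theta in (0.1, 0.4, 1.0, 1.3):
    lhs = math.tan(theta) / math.sqrt(1.0 + math.tan(theta) ** 2)
    assert abs(lhs - math.sin(theta)) < 1e-12

# Snell's Law then fixes the refracted angle θ2 given n1, n2, θ1
# (illustrative values: light entering glass from vacuum at 30°).
n1, n2 = 1.0, 1.5
theta1 = math.radians(30.0)
theta2 = math.asin(n1 * math.sin(theta1) / n2)
print(math.degrees(theta2))   # ≈ 19.47° for these values
```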

The geometry of this problem is simple enough to minimize the transit time directly, rather than using Euler’s equations for the two variables as performed above. The lengths of the paths $$P_{1}Q$$ and $$QP_{2}$$ are

\begin{aligned} P_{1}Q &=\sqrt{x^{2}+y_{1}^{2}+z^{2}} \\ QP_{2} &=\sqrt{\left( x_{2}-x\right) ^{2}+y_{2}^{2}+z^{2}}\end{aligned}\nonumber

The total transit time is given by

$t=\frac{1}{c}\left( n_{1}\sqrt{x^{2}+y_{1}^{2}+z^{2}}+n_{2}\sqrt{\left( x_{2}-x\right) ^{2}+y_{2}^{2}+z^{2}}\right)\nonumber$

Here the transit time $$t$$ depends on the two coordinates $$x$$ and $$z$$ of the point $$Q$$. To find the minimum, set the partial derivatives $$\frac{\partial t}{\partial z}=0$$ and $$\frac{\partial t}{\partial x}=0$$. That is,

$\frac{\partial t}{\partial z}=\frac{1}{c}(\frac{n_{1}z}{\sqrt{ x^{2}+y_{1}^{2}+z^{2}}}+\frac{n_{2}z}{\sqrt{\left( x_{2}-x\right) ^{2}+y_{2}^{2}+z^{2}}})=0\nonumber$

This is zero only if $$z=0$$, that is, the point $$Q$$ lies in the plane containing $$P_{1}$$ and $$P_{2}$$. Similarly

$\frac{\partial t}{\partial x}=\frac{1}{c}(\frac{n_{1}x}{\sqrt{ x^{2}+y_{1}^{2}+z^{2}}}-\frac{n_{2}(x_{2}-x)}{\sqrt{\left( x_{2}-x\right) ^{2}+y_{2}^{2}+z^{2}}})=\frac{1}{c}\left( n_{1}\sin \theta _{1}-n_{2}\sin \theta _{2}\right) =0 \nonumber$

This is zero only if Snell’s Law applies, that is

$n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}\nonumber$

Fermat’s principle has shown that the refracted ray obeys Snell’s Law and lies in the plane normal to the surface. The law of reflection also follows, since then $$n_{1}=n_{2}=n$$ and the angle of reflection equals the angle of incidence.
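The direct minimization can also be carried out numerically. The following sketch minimizes $$t(x,z)$$ by plain gradient descent and confirms that the minimum has $$z=0$$ and satisfies Snell’s Law; the geometry and indices ($$n_{1}=1$$, $$n_{2}=1.5$$, $$y_{1}=y_{2}=x_{2}=1$$) are illustrative assumptions:

```python
import math

# Direct numerical minimization of the transit time t(x, z).
# Illustrative geometry and refractive indices (assumptions):
n1, n2 = 1.0, 1.5
y1, y2, x2 = 1.0, 1.0, 1.0
c = 1.0

def t(x, z):
    """Transit time P1 -> Q(x, 0, z) -> P2 from the expression above."""
    return (n1 * math.sqrt(x**2 + y1**2 + z**2)
            + n2 * math.sqrt((x2 - x)**2 + y2**2 + z**2)) / c

# Plain gradient descent with central-difference gradients.
x, z = 0.5, 0.5
h, lr = 1e-6, 0.01
for _ in range(20000):
    gx = (t(x + h, z) - t(x - h, z)) / (2 * h)
    gz = (t(x, z + h) - t(x, z - h)) / (2 * h)
    x, z = x - lr * gx, z - lr * gz

# At the minimum, Q lies in the plane z = 0 and Snell's Law holds.
sin1 = x / math.sqrt(x**2 + y1**2 + z**2)
sin2 = (x2 - x) / math.sqrt((x2 - x)**2 + y2**2 + z**2)
print(z, n1 * sin1, n2 * sin2)
```

At convergence $$z$$ is driven to zero and $$n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}$$ holds to numerical precision, in agreement with the analytic result.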

Example $$\PageIndex{2}$$: Minimum of $$(\nabla \phi)^2$$ in a volume

Find the function $$\phi (x_{1},x_{2},x_{3})$$ that minimizes $$\left( \nabla \phi \right) ^{2}$$ per unit volume. That is, for the volume $$V$$ it is desired to minimize

$J= \frac{1}{V}\int \int \int \left( \nabla \phi \right) ^{2}dx_{1}dx_{2}dx_{3}= \frac{1}{V}\int \int \int \left[ \left( \frac{\partial \phi }{\partial x_{1}} \right) ^{2}+\left( \frac{\partial \phi }{\partial x_{2}}\right) ^{2}+\left( \frac{\partial \phi }{\partial x_{3}}\right) ^{2}\right] dx_{1}dx_{2}dx_{3}\nonumber$

Note that the variables $$x_{1},x_{2},x_{3}$$ are independent, and thus Euler’s equation for several independent variables can be used. To minimize the functional $$J$$, the function

$f=\left( \frac{\partial \phi }{\partial x_{1}}\right) ^{2}+\left( \frac{ \partial \phi }{\partial x_{2}}\right) ^{2}+\left( \frac{\partial \phi }{ \partial x_{3}}\right) ^{2} \tag{\alpha  }$

must satisfy the Euler equation

$\frac{\partial f}{\partial \phi }-\sum_{i=1}^{3}\frac{\partial }{\partial x_{i}}\left( \frac{\partial f}{\partial \phi _{i}^{\prime }}\right) =0\nonumber$

where $$\phi _{i}^{\prime }\equiv \frac{\partial \phi }{\partial x_{i}}$$. Substituting $$f$$ into Euler’s equation gives

$\sum_{i=1}^{3}\frac{\partial }{\partial x_{i}}\left( \frac{\partial \phi }{ \partial x_{i}}\right) =0\nonumber$

This is just Laplace’s equation

$\nabla ^{2}\phi =0\nonumber$

Therefore $$\phi$$ must satisfy Laplace’s equation in order that the functional $$J$$ be a minimum.
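This result can be checked with a finite-difference sketch. The example below is a minimal two-dimensional illustration using the assumed harmonic test function $$\phi =x_{1}^{2}-x_{2}^{2}$$ (an assumption, not from the text): the discrete Laplacian of $$\phi$$ vanishes, and perturbing $$\phi$$ in the interior while holding the boundary values fixed raises the value of $$\int \left( \nabla \phi \right) ^{2}dV$$:

```python
import numpy as np

# Finite-difference check in two dimensions with the assumed harmonic
# test function phi = x1^2 - x2^2 on the square [-1, 1] x [-1, 1].
n = 101
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = xs[1] - xs[0]
phi = X**2 - Y**2

# Five-point discrete Laplacian on the interior points: ~0 for phi.
lap = (phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
       - 4.0 * phi[1:-1, 1:-1]) / h**2
print(float(np.max(np.abs(lap))))          # essentially zero

def energy(p):
    """Discrete approximation of the functional ∫ (∇p)² dA."""
    gx = np.diff(p, axis=0) / h
    gy = np.diff(p, axis=1) / h
    return float((np.sum(gx**2) + np.sum(gy**2)) * h * h)

# An interior perturbation with fixed boundary values raises the energy.
bump = np.cos(np.pi * X / 2) * np.cos(np.pi * Y / 2)  # zero on boundary
assert energy(phi) < energy(phi + 0.1 * bump)
```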

This page titled 5.5: Functions with Several Independent Variables is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Douglas Cline via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.