$$\require{cancel}$$

# 9.5: Monte Carlo Integration

$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$

The final numerical integration scheme that we'll discuss is Monte Carlo integration, and it is conceptually completely different from the previous schemes. Instead of assigning a set of discretization points (either explicitly, as in the mid-point/trapezium/Simpson's rules, or through a machine optimization procedure, as in the adaptive quadrature method), this method randomly samples the points in the integration domain. If the sampling points are independent and there is a sufficiently large number of them, the integral can be estimated by taking a weighted average of the integrand over the sampling points.

To be precise, consider a 1D integral over a domain $$x \in [a,b]$$. Let each sampling point be drawn independently from a distribution $$p(x)$$. This means that the probability of a given sample $$x_n$$ falling in the infinitesimal range $$[x, x + dx]$$ is $$p(x)\, dx$$. The distribution is normalized, so that

$\int_a^b p(x) \; dx = 1.$

Let us take $$N$$ samples, and evaluate the integrand at those points: this gives us a set of numbers $$\{f(x_n)\}$$. We then compute the quantity

$\mathcal{I}^{\mathrm{mc}} = \frac{1}{N} \sum_{n=0}^{N-1} \frac{f(x_n)}{p(x_n)}.$
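This estimator is straightforward to implement. Below is a minimal sketch (not from the original text) using NumPy, with the simplest choice of distribution: uniform sampling on $$[a,b]$$, so that $$p(x) = 1/(b-a)$$ and the estimator reduces to $$(b-a)$$ times the sample mean of $$f$$. The function name `mc_integrate` and the test integrand are illustrative choices.

```python
import numpy as np

def mc_integrate(f, a, b, N, rng=None):
    """Monte Carlo estimate of the integral of f over [a, b],
    drawing N independent samples uniformly, i.e. p(x) = 1/(b-a)."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(a, b, N)   # N independent samples from p(x)
    # I_mc = (1/N) * sum_n f(x_n)/p(x_n), and here 1/p(x_n) = b - a
    return (b - a) * np.mean(f(x))

# Example: integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x**2, 0.0, 1.0, N=1_000_000, rng=0)
```

With a million samples, the estimate should agree with $$1/3$$ to roughly three decimal places, though the exact value returned depends on the random seed.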

Unlike the estimators that we have previously studied, $$\mathcal{I}^{\mathrm{mc}}$$ is a random number (because the underlying variables $$\{x_n\}$$ are all random). Crucially, its average value is equal to the desired integral:

\begin{align} \left\langle\mathcal{I}^{\mathrm{mc}}\right\rangle &= \frac{1}{N} \sum_{n=0}^{N-1} \left\langle \frac{f(x_n)}{p(x_n)}\right\rangle \\ & = \left\langle \frac{f(x_n)}{p(x_n)}\right\rangle \qquad\mathrm{for}\;\mathrm{each}\; n \\ & = \int_a^b p(x) \, \left[\frac{f(x)}{p(x)}\right]\, dx \\ & = \int_a^b f(x)\, dx \end{align}
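The unbiasedness derived above can be checked numerically: if we repeat the Monte Carlo procedure many times, each run gives a different random value of $$\mathcal{I}^{\mathrm{mc}}$$, but the values scatter around the true integral. The sketch below (an illustrative check, not from the original text) uses uniform sampling and the integrand $$f(x) = \cos x$$ on $$[0, \pi/2]$$, whose exact integral is $$1$$.

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.cos(x)            # integrand; exact integral over [0, pi/2] is 1
a, b, N = 0.0, np.pi / 2, 2000

# Draw 500 independent Monte Carlo estimates, each based on N samples.
estimates = []
for _ in range(500):
    x = rng.uniform(a, b, N)       # uniform sampling: p(x) = 1/(b - a)
    estimates.append((b - a) * np.mean(f(x)))
estimates = np.array(estimates)

mean_estimate = estimates.mean()   # should lie close to the exact value 1
spread = estimates.std()           # individual estimates fluctuate around it
```

Each individual estimate fluctuates noticeably, but the mean over many independent runs approaches the exact value, consistent with $$\left\langle\mathcal{I}^{\mathrm{mc}}\right\rangle = \int_a^b f(x)\,dx$$.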

For low-dimensional integrals, there is normally no reason to use Monte Carlo integration: it requires far more samples to reach a numerical accuracy comparable to the other integration methods. (For 1D integrals, Monte Carlo integration typically requires millions of samples, whereas Simpson's rule needs only hundreds or thousands of discretization points.) However, Monte Carlo integration outperforms discretization-based schemes when the dimensionality of the integral becomes very large. The reason is that the statistical error of the Monte Carlo estimate scales as $$1/\sqrt{N}$$ regardless of the dimension, whereas the number of points required by a grid-based scheme grows exponentially with the dimension. Such high-dimensional integrals occur, for example, in quantum mechanical calculations involving many-body systems, where the dimensionality of the Hilbert space scales exponentially with the number of particles.
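The high-dimensional advantage is easy to demonstrate. The sketch below (an illustrative example, not from the original text) estimates a 10-dimensional integral over the unit hypercube; a grid with merely 10 points per axis would already require $$10^{10}$$ function evaluations, while Monte Carlo uses the same number of samples regardless of dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 10, 200_000

# Integrate f(x) = x_1 * x_2 * ... * x_d over the unit hypercube [0,1]^d.
# Uniform sampling on the cube means p(x) = 1, so the estimator is just
# the sample mean of f. The exact value of this integral is (1/2)^d.
x = rng.uniform(0.0, 1.0, size=(N, d))   # N samples, each a point in d dimensions
estimate = np.mean(np.prod(x, axis=1))
exact = 0.5 ** d
```

Even with only $$2 \times 10^5$$ samples, the relative error here is on the order of a percent, whereas any discretization scheme with comparable cost would be hopelessly coarse in 10 dimensions.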

9.5: Monte Carlo Integration is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Y. D. Chong via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.