1: Numerical Methods
This chapter is not intended as a comprehensive course in numerical methods. Rather it deals, and only in a rather basic way, with the very common problems of numerical integration and the solution of simple (and not so simple!) equations. Specialist astronomers today can generate most of the planetary tables for themselves; but those who are not so specialized still have a need to look up data in tables such as The Astronomical Almanac, and I have therefore added a brief section on interpolation, which I hope may be useful. While any of these topics could be greatly expanded, this section should be useful for many everyday computational purposes.
- 1.1: Introduction to Numerical Methods
- This page highlights the shift away from seeking explicit algebraic solutions and toward practical numerical methods. It emphasizes the importance of numerical integration and equation solving, and briefly touches on interpolation. Although it does not cover differential equations extensively, it demonstrates the efficiency of numerical approaches through an example, illustrating their utility in computation.
- 1.2: Numerical Integration
- There are many occasions when one may wish to integrate an expression numerically rather than analytically. Sometimes one cannot find an analytical expression for an integral, or, if one can, it is so complicated that it is just as quick to integrate numerically as it is to tabulate the analytical expression. Or one may have a table of numbers to integrate rather than an analytical equation.
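A minimal sketch of one common such scheme, Simpson's rule (the function names and the test integrand here are illustrative, not taken from the text):

```python
import math

def simpson(f, a, b, n):
    """Approximate the integral of f on [a, b] with Simpson's rule.
    n is the number of subintervals and must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)  # 4 at odd nodes, 2 at even
    return s * h / 3

# Example: the integral of sin(x) on [0, pi] is exactly 2.
print(simpson(math.sin, 0.0, math.pi, 100))  # ~2.000000000
```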
- 1.3: Quadratic Equations
- This page explores alternative iterative numerical methods for solving quadratic equations beyond the standard formula. It describes two approaches: one requiring a good initial guess but converging slowly, and another that rapidly converges even with poor initial guesses. The text emphasizes the impact of initial values on outcomes and encourages readers to investigate the superior convergence of the second method.
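The section's two schemes are not reproduced in this summary; as a plausible illustration for \(ax^2 + bx + c = 0\), the sketch below pairs a simple fixed-point rearrangement (converges slowly and only near a root) with a Newton-Raphson step (rapid even from a poor start). The particular rearrangement and function names are assumptions, not necessarily the text's own.

```python
def slow_iteration(a, b, c, x, n=50):
    """Fixed-point rearrangement x = -(a*x**2 + c)/b; linear convergence."""
    for _ in range(n):
        x = -(a * x * x + c) / b
    return x

def fast_iteration(a, b, c, x, n=10):
    """Newton-Raphson step for the quadratic; quadratic convergence."""
    for _ in range(n):
        x = (a * x * x - c) / (2 * a * x + b)
    return x

# x^2 - 5x + 6 = 0 has roots 2 and 3.
print(slow_iteration(1, -5, 6, 1.9))   # creeps toward 2
print(fast_iteration(1, -5, 6, 10.0))  # reaches 3 from a poor guess
```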
- 1.4: The Solution of f(x) = 0
- This page introduces the Newton-Raphson method for solving equations of the form \(f(x) = 0\). It begins with an initial guess and uses iterations based on the tangent slope to compute a more accurate value. The formula used is \(x \approx x_g - \frac{f(x_g)}{f'(x_g)}\), and the method is noted for its quick convergence, even from poor initial guesses, though it may fail in certain conditions. Various examples are provided to showcase the method's versatility and efficiency.
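The iteration in the summary translates directly into code. A minimal sketch (names illustrative), applied to \(\cos x - x = 0\):

```python
import math

def newton_raphson(f, fprime, x, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until successive values agree."""
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Solve cos(x) - x = 0; the root is near 0.739085.
root = newton_raphson(lambda x: math.cos(x) - x,
                      lambda x: -math.sin(x) - 1.0, 1.0)
print(root)
```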
- 1.5: The Solution of Polynomial Equations
- The Newton-Raphson method is very suitable for the solution of polynomial equations.
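One standard reason for this suitability (not spelled out in the summary) is that a polynomial and its derivative can be evaluated together cheaply by Horner's nested multiplication. A sketch under that standard approach:

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x); coeffs are in descending powers of x."""
    p, dp = coeffs[0], 0.0
    for c in coeffs[1:]:
        dp = dp * x + p   # derivative accumulates alongside the value
        p = p * x + c
    return p, dp

def poly_newton(coeffs, x, n=20):
    for _ in range(n):
        p, dp = horner_with_derivative(coeffs, x)
        x -= p / dp
    return x

# x^3 - 2x - 5 = 0 (Newton's classic example); root near 2.0945515.
print(poly_newton([1, 0, -2, -5], 2.0))
```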
- 1.6: Failure of the Newton-Raphson Method
- In nearly all cases encountered in practice, the Newton-Raphson method is very rapid and does not require a particularly good first guess. Nevertheless, for completeness, it should be pointed out that there are rare occasions when the method either fails or converges rather slowly.
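A classic illustration of failure (not necessarily the example used in the section) is \(f(x) = x^3 - 2x + 2\) started from \(x = 0\): the iteration cycles between 0 and 1 forever.

```python
f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

x = 0.0
for i in range(6):
    x = x - f(x) / fp(x)
    print(i, x)   # alternates 1.0, 0.0, 1.0, 0.0, ... never converging
```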
- 1.7: Simultaneous Linear Equations, N = n
- This page covers methods for solving systems of linear equations, emphasizing an efficient elimination method over more labor-intensive techniques like Cramer's Rule and matrix methods. It includes a systematic approach with an error-checking mechanism using a Σ-column for accuracy in calculations. The page illustrates with a five-equation system and offers simplified solutions for two-variable equations, showcasing the effectiveness of direct methods based on coefficients and constants.
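The Σ-column device can be sketched directly: append to each row the sum of all its entries, apply every row operation to that column too, and check at each stage that it still equals the row sum. A minimal sketch (no pivoting, names illustrative):

```python
def solve_with_check(A, b):
    """Gaussian elimination carrying a sum column as an arithmetic check."""
    n = len(b)
    # Augment each row with b and with the row sum (the Sigma-column).
    M = [row[:] + [b[i]] + [sum(row) + b[i]] for i, row in enumerate(A)]
    for k in range(n):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 2):          # includes b and Sigma columns
                M[i][j] -= factor * M[k][j]
            # The Sigma entry must still equal the sum of the other entries.
            assert abs(M[i][n + 1] - sum(M[i][:n + 1])) < 1e-9
    x = [0.0] * n
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve_with_check([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]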
- 1.8: Simultaneous Linear Equations, N > n
- This page explains the least squares method by Carl Friedrich Gauss for solving systems of equations with no exact solutions. It focuses on minimizing residuals to derive normal equations, highlighting their importance in statistics and applications such as planetary computations. The page also covers the setup and solution of these normal equations, verifying their correctness through coefficient sums and offering examples.
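In matrix form, the normal equations for an overdetermined system \(A\mathbf{x} \approx \mathbf{b}\) are \(A^{\mathsf{T}}A\,\mathbf{x} = A^{\mathsf{T}}\mathbf{b}\). A brief numpy sketch with invented data:

```python
import numpy as np

# Five "observations" of two unknowns: more equations (N=5) than unknowns (n=2).
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0], [1.0, 5.0]])
b = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Form and solve the normal equations A^T A x = A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)
residuals = b - A @ x
print(x, residuals @ residuals)  # the solution minimizes the sum of squares
```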
- 1.9: Nonlinear Simultaneous Equations
- This page covers methods for solving nonlinear equations, specifically through examples involving simultaneous equations in orbital theory and a fourth-degree polynomial. It details techniques like variable elimination, iterative processes, and using Taylor expansions for refining solutions.
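The Taylor-expansion refinement generalizes Newton-Raphson to several unknowns: linearize about the current guess and solve a linear system for the corrections. A sketch for a hypothetical 2×2 system (the equations shown are invented for illustration):

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0,   # circle of radius 2
                     x * y - 1.0])        # hyperbola xy = 1

def J(v):
    x, y = v
    return np.array([[2*x, 2*y],
                     [y,   x  ]])         # Jacobian of F

v = np.array([2.0, 0.5])                  # initial guess
for _ in range(10):
    v = v - np.linalg.solve(J(v), F(v))   # first-order Taylor correction
print(v, F(v))                            # a simultaneous root of both equations
```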
- 1.10: Besselian Interpolation
- This page describes the evolution from printed tables of mathematical functions to computing values on demand, emphasizing the role of nonlinear interpolation in obtaining accurate values. It introduces Bessel's interpolation formula, its coefficients, and finite difference calculus, supported by an example using solar coordinates.
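Truncated at second differences, Bessel's formula for a point a fraction \(p\) of the interval from \(y_0\) toward \(y_1\) in an equally spaced table reads \(f_p \approx \tfrac12(y_0+y_1) + (p-\tfrac12)\,\Delta y_0 + \tfrac{p(p-1)}{4}\,(\Delta^2 y_{-1} + \Delta^2 y_0)\). A sketch of that truncated form (the sine table is an invented stand-in for the section's solar example):

```python
import math

def bessel_interpolate(y, p):
    """Bessel's formula to second differences.
    y: four consecutive equally spaced tabular values [y_-1, y_0, y_1, y_2];
    p: fraction of the interval between y_0 and y_1 (0 <= p <= 1)."""
    ym1, y0, y1, y2 = y
    d1 = y1 - y0                   # first difference across the interval
    d2a = y1 - 2*y0 + ym1          # second difference centred on y_0
    d2b = y2 - 2*y1 + y0           # second difference centred on y_1
    return 0.5*(y0 + y1) + (p - 0.5)*d1 + p*(p - 1)/4 * (d2a + d2b)

# Tabulate sin at 48, 50, 52, 54 degrees and interpolate sin 51 (p = 0.5).
table = [math.sin(math.radians(d)) for d in (48, 50, 52, 54)]
print(bessel_interpolate(table, 0.5), math.sin(math.radians(51)))
```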
- 1.11: Fitting a Polynomial to a Set of Points - Lagrange Polynomials and Lagrange Interpolation
- This page covers polynomial interpolation using Lagrange polynomials to fit \(n\) points with a polynomial of degree \(n-1\), noting that the method does not require equally spaced points. It also illustrates the calculation of \(\sin 51^\circ\) by both the Lagrangian and Besselian methods, achieving a close approximation to the true value.
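In the Lagrange form, each basis polynomial equals 1 at its own node and 0 at every other node, so no equal spacing is needed. A sketch reproducing the \(\sin 51^\circ\) idea; the nodes below are chosen for illustration and are deliberately unequally spaced:

```python
import math

def lagrange(xs, ys, x):
    """Evaluate the degree n-1 polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

xs = [40.0, 48.0, 55.0, 60.0]   # degrees; intervals need not be equal
ys = [math.sin(math.radians(d)) for d in xs]
print(lagrange(xs, ys, 51.0), math.sin(math.radians(51.0)))
```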
- 1.12: Fitting a Least Squares Straight Line to a Set of Observational Points
- This page covers linear regression and the least squares method used to fit a line to data points, focusing on minimizing vertical distances (residuals). It notes the assumption that x-values are more precise than y-values. Additionally, it introduces the correlation coefficient as a measure of the relationship between two variables, stressing the importance of the number of data points in assessing its significance and the practical relevance of understanding these concepts.
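A minimal sketch of the least squares line of \(y\) on \(x\) together with the correlation coefficient \(r\) (data invented for illustration):

```python
def fit_line(xs, ys):
    """Least squares slope/intercept of y on x, plus correlation coefficient r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx                      # minimizes the vertical residuals
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5           # correlation coefficient
    return slope, intercept, r

print(fit_line([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.2, 9.9]))
```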
- 1.13: Fitting a Least Squares Polynomial to a Set of Observational Points
- This page covers least squares polynomial regression, emphasizing quadratic regression of \(y\) on \(x\) and the determination of coefficients \(a_0, a_1,\) and \(a_2\) to minimize residuals. It includes examples of polynomial fits and warns against the dangers of high-degree polynomials for extrapolation due to erratic behavior. The page also notes the complications that arise when errors are present in \(x\) values.
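A quadratic fit can be sketched with numpy's built-in least squares routine; note that `np.polyfit` returns coefficients in descending powers, i.e. \(a_2, a_1, a_0\) (data invented for illustration):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])   # roughly 2x^2 + 1 with noise

coeffs = np.polyfit(x, y, 2)    # least squares quadratic; highest power first
print(coeffs)
print(np.polyval(coeffs, 2.5))  # interpolation is safe; extrapolation is not
```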
- 1.14: Legendre Polynomials
- This page covers the expansion of \((1-2rx + r^2 )^{-1/2}\) through the binomial theorem, leading to a series of Legendre polynomials \(P_l(x)\), and introduces their recursion relation along with the first eleven such polynomials. It emphasizes their importance in theoretical physics, details their numerical values, and illustrates them graphically.
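The recursion relation \((l+1)P_{l+1}(x) = (2l+1)\,x\,P_l(x) - l\,P_{l-1}(x)\), started from \(P_0 = 1\) and \(P_1 = x\), gives a compact way to compute numerical values; a minimal sketch:

```python
def legendre(l, x):
    """P_l(x) by the recursion (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}."""
    p_prev, p = 1.0, x        # P_0 and P_1
    if l == 0:
        return p_prev
    for k in range(1, l):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# P_2(x) = (3x^2 - 1)/2, so P_2(0.5) should be -0.125.
print(legendre(2, 0.5))
```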
- 1.15: Gaussian Quadrature - the Algorithm
- This page presents Gaussian quadrature as a numerical integration method that is generally more efficient than Simpson's rule. It explains how to adapt Gaussian quadrature to various integrals through substitutions and provides practical examples, including functions with infinite limits or singularities.
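The substitution idea for a finite interval can be sketched with numpy's Gauss-Legendre nodes and weights; the linear map \(x = \tfrac12(b+a) + \tfrac12(b-a)t\) carries \([-1, 1]\) onto \([a, b]\):

```python
import numpy as np

def gauss_quad(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    x = 0.5 * (b + a) + 0.5 * (b - a) * t       # map nodes onto [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))     # Jacobian of the substitution

# The integral of sin on [0, pi] is exactly 2; five points already do very well.
print(gauss_quad(np.sin, 0.0, np.pi, 5))
```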
- 1.16: Gaussian Quadrature - Derivation
- This page covers Gaussian quadrature and the key role of Legendre polynomials in polynomial integration. It explains polynomial division, emphasizing the orthogonality of Legendre polynomials, and introduces methods for approximating definite integrals using finite series and Lagrange polynomials.
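The core of the standard argument can be stated in a few lines (a sketch, not the section's full derivation): divide any polynomial \(p\) of degree \(\le 2n-1\) by \(P_n\) to obtain \(p(x) = q(x)\,P_n(x) + r(x)\) with \(\deg q,\ \deg r \le n-1\). By orthogonality, \(\int_{-1}^{1} q(x)\,P_n(x)\,dx = 0\), so \(\int_{-1}^{1} p\,dx = \int_{-1}^{1} r\,dx\); and at the roots \(x_i\) of \(P_n\), \(p(x_i) = r(x_i)\). Hence an \(n\)-point rule with those nodes that integrates every polynomial of degree \(\le n-1\) exactly also integrates \(p\) exactly.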
- 1.17: Frequently-needed Numerical Procedures
- This page highlights the importance of building a personal collection of short computer programs for efficient execution of mathematical tasks. It underscores the benefits of developing custom programs for time-saving and gaining insight into their functionality. The author provides examples of essential programs for solving equations and matrix operations, encouraging readers to compile their own software toolkit to boost mathematical skills and efficiency.
Thumbnail: Comparison between 2-point Gaussian and trapezoidal quadrature. The blue line is the polynomial, whose integral on [−1, 1] is 2/3. The trapezoidal rule returns the integral of the orange dashed line. The 2-point Gaussian quadrature rule returns the integral of the black dashed curve; this result is exact, since the green region has the same area as the red regions. (CC BY-SA 4.0; Paolostar).


