1.7: Simultaneous Linear Equations, N = n

Consider the Equations

a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 + a_{15} x_5 = b_1 \label{1.7.1} \tag{1.7.1}

a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 + a_{25} x_5 = b_2 \label{1.7.2} \tag{1.7.2}

a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 + a_{35} x_5 = b_3 \label{1.7.3} \tag{1.7.3}

a_{41} x_1 + a_{42} x_2 + a_{43} x_3 + a_{44} x_4 + a_{45} x_5 = b_4 \label{1.7.4} \tag{1.7.4}

a_{51} x_1 + a_{52} x_2 + a_{53} x_3 + a_{54} x_4 + a_{55} x_5 = b_5 \label{1.7.5} \tag{1.7.5}

There are two well-known methods of solving these Equations. One of these is called Cramer's Rule. Let D be the determinant of the coefficients. Let D_i be the determinant obtained by substituting the column vector of the constants b_1, \ b_2, \ b_3, \ b_4, \ b_5 for the ith column in D. Then the solutions are

x_i = D_i / D \label{1.7.6} \tag{1.7.6}

This is an interesting theorem in the theory of determinants. It should be made clear, however, that, when it comes to the practical numerical solution of a set of linear Equations that may be encountered in practice, this is probably the most laborious and longest method ever devised in the history of mathematics.
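To make the rule concrete, here is a minimal sketch of Equation \ref{1.7.6} in code, assuming NumPy is available; the 2 × 2 system used for illustration is the one solved by elimination later in this section.

```python
# A minimal sketch of Cramer's Rule (Equation 1.7.6), assuming NumPy.
import numpy as np

A = np.array([[7.0, -2.0],
              [3.0,  9.0]])     # matrix of the coefficients
b = np.array([24.0, 30.0])      # column vector of the constants

D = np.linalg.det(A)            # determinant of the coefficients
x = np.empty_like(b)
for i in range(len(b)):
    Ai = A.copy()
    Ai[:, i] = b                # replace the i-th column by the constants
    x[i] = np.linalg.det(Ai) / D
print(x)                        # approximately [4. 2.]
```

Even in this transcription the cost is evident: every unknown requires a fresh determinant.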

The second well-known method is to write the Equations in matrix form:

\mathbb{A}\textbf{x} = \textbf{b} \label{1.7.7} \tag{1.7.7}

Here \mathbb{A} is the matrix of the coefficients, \textbf{x} is the column vector of unknowns, and \textbf{b} is the column vector of the constants. The solutions are then given by

\textbf{x} = \mathbb{A}^{-1} \textbf{b}, \label{1.7.8} \tag{1.7.8}

where \mathbb{A}^{-1} is the inverse or reciprocal of \mathbb{A}. Thus the problem reduces to inverting a matrix. Now inverting a matrix is notoriously labour-intensive, and, while the method is not quite so long as Cramer's Rule, it is still far too long for practical purposes.
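For completeness, here is Equation \ref{1.7.8} in code, a minimal sketch assuming NumPy and the same illustrative 2 × 2 system as above. Note that a numerical library would not actually form the inverse at all.

```python
# A minimal sketch of x = A^{-1} b (Equation 1.7.8), assuming NumPy.
import numpy as np

A = np.array([[7.0, -2.0],
              [3.0,  9.0]])
b = np.array([24.0, 30.0])

x = np.linalg.inv(A) @ b        # form the inverse explicitly, then multiply
print(x)                        # approximately [4. 2.]

# In practice np.linalg.solve(A, b) is preferred: it solves the system
# directly by elimination (LU factorisation), without computing the inverse.
print(np.linalg.solve(A, b))
```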

How, then, should a system of linear Equations be solved?

Consider the Equations

7x - 2y = 24

3x + 9y = 30

Few would have any hesitation in multiplying the first Equation by 3, the second Equation by 7, and subtracting. This is what we were all taught in our younger days, but few realize that this remains, in spite of knowledge of determinants and matrices, the fastest and most efficient method of solving simultaneous linear Equations. Let us see how it works with a system of several Equations in several unknowns.
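For the record, the arithmetic of this small example runs as follows. Multiplying the first Equation by 3 and the second by 7 gives

21x - 6y = 72

21x + 63y = 210

and subtracting one from the other gives 69y = 138, so y = 2 and hence, from either Equation, x = 4.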

Consider the Equations

9x_1 - 9x_2 + 8x_3 - 6x_4 + 4x_5 = -9

5x_1 - x_2 + 6x_3 + x_4 + 5x_5 = 58

2x_1 + 4x_2 - 5x_3 - 6x_4 + 7x_5 = -1

2x_1 + 3x_2 - 8x_3 - 5x_4 - 2x_5 = -49

8x_1 - 5x_2 + 7x_3 + x_4 + 5x_5 = 42

We first eliminate x_1 from the Equations, leaving four Equations in four unknowns. Then we eliminate x_2, leaving three Equations in three unknowns. Then x_3, and then x_4, finally leaving a single Equation in one unknown. The following table shows how it is done.

In columns 2 to 6 are listed the coefficients of x_1, x_2, x_3, x_4 and x_5, and in column 7 are the constant terms on the right hand side of the Equations. Thus columns 2 to 7 of the first five rows are just the original Equations. Column 8 is the sum of the numbers in columns 2 to 7, and this is a most important column. The boldface numbers in column 1 are merely labels.

Lines 6 to 9 show the elimination of x_1. Line 6 shows the elimination of x_1 from lines 1 and 2 by multiplying line 2 by 9 and line 1 by 5 and subtracting. The operation performed is recorded in column 1. In line 7, x_1 is eliminated from Equations 1 and 3 and so on.

\begin{array}{l c c c c c c c}
& x_1 & x_2 & x_3 & x_4 & x_5 & b & \sum \\
\textbf{1} & 9 & -9 & 8 & -6 & 4 & -9 & -3 \\
\textbf{2} & 5 & -1 & 6 & 1 & 5 & 58 & 74 \\
\textbf{3} & 2 & 4 & -5 & -6 & 7 & -1 & 1 \\
\textbf{4} & 2 & 3 & -8 & -5 & -2 & -49 & -59 \\
\textbf{5} & 8 & -5 & 7 & 1 & 5 & 42 & 58 \\
\\
\textbf{6} = 9 \times \textbf{2} - 5 \times \textbf{1} && 36 & 14 & 39 & 25 & 567 & 681 \\
\textbf{7} = 2 \times \textbf{1} - 9 \times \textbf{3} && -54 & 61 & 42 & -55 & -9 & -15 \\
\textbf{8} = \textbf{3} - \textbf{4} && 1 & 3 & -1 & 9 & 48 & 60 \\
\textbf{9} = 4 \times \textbf{3} - \textbf{5} && 21 & -27 & -25 & 23 & -46 & -54 \\
\\
\textbf{10} = 3 \times \textbf{6} + 2 \times \textbf{7} &&& 164 & 201 & -35 & 1 \ 683 & 2 \ 013 \\
\textbf{11} = \textbf{6} - 36 \times \textbf{8} &&& -94 & 75 & -299 & -1 \ 161 & -1 \ 479 \\
\textbf{12} = 7 \times \textbf{6} - 12 \times \textbf{9} &&& 422 & 573 & -101 & 4 \ 521 & 5 \ 415 \\
\\
\textbf{13} = 47 \times \textbf{10} + 82 \times \textbf{11} &&&& 15 \ 597 & -26 \ 163 & -16 \ 101 & -26 \ 667 \\
\textbf{14} = 211 \times \textbf{11} + 47 \times \textbf{12} &&&& 42 \ 756 & -67 \ 836 & -32 \ 484 & -57 \ 654 \\
\\
\textbf{15} = 5199 \times \textbf{14} - 14252 \times \textbf{13} &&&&& 20 \ 195 \ 712 & 60 \ 587 \ 136 & 80 \ 782 \ 848 \\
\end{array}

What is the purpose of the Σ column? It is of great importance. Whatever operation is performed on the other columns is also performed on Σ, and Σ must remain the sum of the columns that precede it. If it does not, an arithmetic mistake has been made, and it is detected immediately. There is nothing more disheartening than to discover at the very end of a calculation that a mistake has been made and that one has no idea where it occurred. Searching for mistakes takes far longer than the original calculation. The Σ column enables one to detect and correct a mistake as soon as it has been made.
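The same bookkeeping is easy to automate. The following is a minimal sketch (not the author's hand-worked table; the names are made up for illustration) of elimination with a running Σ column, applied to the five-Equation example above. Exact rational arithmetic is used so that the check must balance exactly after every row operation.

```python
# Elimination with a running sum-check (Σ) column, using exact fractions.
from fractions import Fraction

rows = [                         # [coefficients of x1..x5, constant b]
    [9, -9,  8, -6,  4,  -9],
    [5, -1,  6,  1,  5,  58],
    [2,  4, -5, -6,  7,  -1],
    [2,  3, -8, -5, -2, -49],
    [8, -5,  7,  1,  5,  42],
]
# Append the check column: the sum of everything to its left.
rows = [[Fraction(v) for v in r] + [Fraction(sum(r))] for r in rows]

n = 5
for k in range(n):               # eliminate x_{k+1} from the rows below row k
    for i in range(k + 1, n):
        f = rows[i][k] / rows[k][k]
        rows[i] = [a - f * p for a, p in zip(rows[i], rows[k])]
        # The Σ entry must still equal the sum of the other entries;
        # if not, an arithmetic slip has just been made.
        assert sum(rows[i][:-1]) == rows[i][-1]

# Back-substitution through the triangular system.
x = [Fraction(0)] * n
for i in reversed(range(n)):
    x[i] = (rows[i][n] - sum(rows[i][j] * x[j] for j in range(i + 1, n))) / rows[i][i]

print([int(v) for v in x])       # [2, 7, 6, 4, 3]
```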

We eventually reach line 15, which is

20 \ 195 \ 712 x_5 = 60 \ 587 \ 136 ,

from which x_5 = 3.

x_4 can now easily be found from either or both of lines 13 and 14, x_3 can be found from any or all of lines 10, 11 and 12, and so on. When the calculation is complete, the answers should be checked by substitution in the original Equations (or in the sum of the five Equations). For the record, the solutions are x_1 = 2, \ x_2 = 7, \ x_3 = 6, \ x_4 = 4 and x_5 = 3.
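The check by substitution is a one-liner if NumPy is to hand (a minimal sketch; the arrays simply restate the five Equations above):

```python
# Verify the quoted solutions by substituting them back into A x = b.
import numpy as np

A = np.array([[9, -9,  8, -6,  4],
              [5, -1,  6,  1,  5],
              [2,  4, -5, -6,  7],
              [2,  3, -8, -5, -2],
              [8, -5,  7,  1,  5]])
b = np.array([-9, 58, -1, -49, 42])
x = np.array([2, 7, 6, 4, 3])

print(A @ x)                     # [-9 58 -1 -49 42], i.e. b is reproduced
print(np.array_equal(A @ x, b))  # True
```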

Of course, if you have only two simultaneous Equations to solve, it is easy to write down explicit algebraic expressions for the solutions, and that may be the fastest and most efficient way of doing it. Thus, if

a_{11} x + a_{12} y = b_1 \label{1.7.9} \tag{1.7.9}

and a_{21} x + a_{22} y = b_2, \label{1.7.10} \tag{1.7.10}

the solutions are

x = c(b_1 a_{22} - b_2 a_{12}) \label{1.7.11} \tag{1.7.11}

and y = c(b_2 a_{11} - b_1 a_{21} ), \label{1.7.12} \tag{1.7.12}

where c = 1/(a_{11} a_{22} - a_{12} a_{21}). \label{1.7.13} \tag{1.7.13}
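These three formulae transcribe directly into a tiny function (a sketch; solve_two is a made-up name, and it assumes the determinant a_{11} a_{22} - a_{12} a_{21} is not zero):

```python
# Equations 1.7.11 - 1.7.13 for a pair of simultaneous linear Equations.
def solve_two(a11, a12, a21, a22, b1, b2):
    c = 1.0 / (a11 * a22 - a12 * a21)   # Equation 1.7.13 (determinant assumed non-zero)
    x = c * (b1 * a22 - b2 * a12)       # Equation 1.7.11
    y = c * (b2 * a11 - b1 * a21)       # Equation 1.7.12
    return x, y

print(solve_two(7, -2, 3, 9, 24, 30))   # (4.0, 2.0)
```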


This page titled 1.7: Simultaneous Linear Equations, N = n is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jeremy Tatum via source content that was edited to the style and standards of the LibreTexts platform.
