5.2: Matrix Generalization
We can generalize the Gaussian elimination algorithm described in the previous section to solve matrix problems of the form
\[\mathbf{A}\, \mathbf{x} = \mathbf{b},\]
where \(\mathbf{A}\) is an \(N\times N\) matrix, and \(\mathbf{x}\) and \(\mathbf{b}\) are \(N\times M\) matrices rather than vectors. An example, for \(N = 3\) and \(M=2\), is
\[\begin{bmatrix}1 &2 &3 \\ 3 &2 &2 \\ 2 &6 &2\end{bmatrix} \begin{bmatrix} x_{00} & x_{01} \\ x_{10} & x_{11} \\ x_{20} & x_{21} \end{bmatrix} = \begin{bmatrix}3 & 6 \\ 4 & 8 \\ 4 & 2\end{bmatrix}.\]
It can get a bit tedious to keep writing out the \(x\) elements in the system of equations, particularly when \(\mathbf{x}\) becomes a matrix. For this reason, we switch to a notation known as the augmented matrix:
\[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 3 & 2 & 2 & 4 & 8 \\ 2 & 6 & 2 & 4 & 2 \end{array}\right].\]
Here, the entries to the left of the vertical separator denote the left-hand side of the system of equations, and the entries to the right of the separator denote the right-hand side of the system of equations.
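In code, the augmented matrix is simply the left-hand and right-hand sides stacked side by side. As an illustrative sketch (not part of the original text), using NumPy:

```python
import numpy as np

# The example system above: A x = b, with a 3x3 A and a 3x2 b.
A = np.array([[1., 2., 3.],
              [3., 2., 2.],
              [2., 6., 2.]])
b = np.array([[3., 6.],
              [4., 8.],
              [4., 2.]])

# The augmented matrix [A | b]: N rows, N + M columns.
aug = np.hstack([A, b])
print(aug.shape)  # (3, 5)
```

Row operations applied to `aug` then act on both sides of the system at once, which is exactly what the algorithm below exploits.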
The Gaussian elimination algorithm can now be performed directly on the augmented matrix. We will walk through the steps for the above example. First, row reduction:
- Eliminate the element at \((1,0)\):
  \[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 0 & -4 & -7 & -5 & -10 \\ 2 & 6 & 2 & 4 & 2 \end{array}\right]\]
- Eliminate the element at \((2,0)\):
  \[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 0 & -4 & -7 & -5 & -10 \\ 0 & 2 & -4 & -2 & -10 \end{array}\right]\]
- Eliminate the element at \((2,1)\):
  \[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 0 & -4 & -7 & -5 & -10 \\ 0 & 0 & -7.5 & -4.5 & -15 \end{array}\right]\]
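The row reduction steps above can be sketched in code as follows. This is illustrative code, not from the text; it omits pivoting, so it assumes every diagonal pivot it encounters is nonzero (as in this example):

```python
import numpy as np

def row_reduce(aug):
    """Forward elimination on an augmented matrix [A | b].

    A sketch without pivoting: assumes each diagonal pivot is nonzero.
    """
    aug = aug.astype(float).copy()
    N = aug.shape[0]               # number of equations (rows)
    for j in range(N):             # pivot column
        for i in range(j + 1, N):  # rows below the pivot
            aug[i] -= (aug[i, j] / aug[j, j]) * aug[j]
    return aug

aug = np.array([[1., 2., 3., 3., 6.],
                [3., 2., 2., 4., 8.],
                [2., 6., 2., 4., 2.]])
print(row_reduce(aug))
```

Running this on the example reproduces the last matrix above, with final row \((0,\, 0,\, -7.5,\, -4.5,\, -15)\).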
The back-substitution step converts the left-hand portion of the augmented matrix to the identity matrix:
- Solve for row \(2\):
  \[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 0 & -4 & -7 & -5 & -10 \\ 0 & 0 & 1 & 0.6 & 2 \end{array}\right]\]
- Solve for row \(1\):
  \[\left[\begin{array}{ccc|cc} 1 & 2 & 3 & 3 & 6 \\ 0 & 1 & 0 & 0.2 & -1 \\ 0 & 0 & 1 & 0.6 & 2 \end{array}\right]\]
- Solve for row \(0\):
  \[\left[\begin{array}{ccc|cc} 1 & 0 & 0 & 0.8 & 2 \\ 0 & 1 & 0 & 0.2 & -1 \\ 0 & 0 & 1 & 0.6 & 2 \end{array}\right]\]
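The back-substitution steps above can be sketched similarly (again illustrative code, not from the text). Working from the bottom row upward, each row has the already-solved rows below it subtracted out, and is then normalized by its pivot:

```python
import numpy as np

def back_substitute(aug):
    """Back-substitution on a row-reduced augmented matrix.

    A sketch: converts the left block to the identity, assuming the
    left block is already upper-triangular with nonzero pivots.
    """
    aug = aug.astype(float).copy()
    N = aug.shape[0]
    for i in range(N - 1, -1, -1):   # bottom row first
        for j in range(i + 1, N):    # subtract already-solved rows
            aug[i] -= aug[i, j] * aug[j]
        aug[i] /= aug[i, i]          # normalize the pivot to 1
    return aug

# Row-reduced augmented matrix from the example above:
reduced = np.array([[1., 2., 3., 3., 6.],
                    [0., -4., -7., -5., -10.],
                    [0., 0., -7.5, -4.5, -15.]])
x = back_substitute(reduced)[:, 3:]  # right-hand block holds the solution
print(x)
```

This reproduces the solution read off from the final augmented matrix above: \(x_{00}=0.8\), \(x_{01}=2\), \(x_{10}=0.2\), \(x_{11}=-1\), \(x_{20}=0.6\), \(x_{21}=2\).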
After the algorithm finishes, the right-hand side of the augmented matrix contains the result for \(\mathbf{x}\). Analyzing the runtime using the same reasoning as before, we find that the row reduction step scales as \(O\big(N^2(N+M)\big)\): there are \(O(N^2)\) elements to eliminate, and each elimination is a row operation on \(O(N+M)\) entries. The back-substitution step also performs \(O(N^2)\) eliminations, but each one only touches the pivot entry and the \(M\) right-hand-side columns, so it scales as \(O\big(N^2 M\big)\); for \(M=1\), this reduces to the \(O(N^2)\) result from the previous section.
This matrix form of the Gaussian elimination algorithm is the standard method for computing matrix inverses. If \(\mathbf{b}\) is the \(N\times N\) identity matrix, then the solution \(\mathbf{x}\) will be the inverse of \(\mathbf{A}\). Thus, the runtime for calculating a matrix inverse scales as \(O(N^{3})\).
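Putting the two passes together gives a matrix-inverse routine. The sketch below (illustrative, not from the text; the function name is hypothetical, and pivoting is again omitted) runs both passes on the augmented matrix \([\mathbf{A} \,|\, \mathbf{I}]\) and reads the inverse off the right-hand block:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A via Gaussian elimination on [A | I].

    A sketch without pivoting: assumes nonzero pivots throughout.
    """
    N = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(N)])
    for j in range(N):                 # row reduction
        for i in range(j + 1, N):
            aug[i] -= (aug[i, j] / aug[j, j]) * aug[j]
    for i in range(N - 1, -1, -1):     # back-substitution
        for j in range(i + 1, N):
            aug[i] -= aug[i, j] * aug[j]
        aug[i] /= aug[i, i]
    return aug[:, N:]                  # right-hand block is the inverse

A = np.array([[1., 2., 3.],
              [3., 2., 2.],
              [2., 6., 2.]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(3)))  # True
```

In practice one would use a library routine (e.g. `numpy.linalg.inv`, or better, solving the relevant linear system directly) rather than this unpivoted sketch, but the scaling is the same \(O(N^3)\).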