Math 511: Linear Algebra
1.2 Matrix Equations
1.2.1 Coefficient Matrices¶
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{.08, .18, .28} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \def\ihat{\hat{\mmlToken{mi}[mathvariant="bold"]{ı}}} \def\jhat{\hat{\mmlToken{mi}[mathvariant="bold"]{ȷ}}} \def\khat{\hat{\mathrm{k}}} \def\tombstone{\unicode{x220E}} $$
In Section 1.1 we introduced Systems of Linear Equations
$$
\begin{array}{rcl} a_{11}x_1 + a_{12}x_2 +\ \cdots\ + a_{1n}x_n & = & b_1 \\ a_{21}x_1 + a_{22}x_2 +\ \cdots\ + a_{2n}x_n & = & b_2 \\ \\ \ddots\, \ \ +\ \ \ddots\, \ \ +\ \cdots\ +\ \ \ \ddots\, & = & \vdots \\ \\ a_{m1}x_1 + a_{m2}x_2 +\ \cdots + a_{mn}x_n & = & b_m \end{array}
$$
Let us consider such a system:
Example 1¶
$$\begin{array}{rcl} -\ \ \ x_2 -\ \ x_3 +\ \ x_4 & = &\ \ 0 \\ x_1 +\ \ x_2 +\ \ x_3 +\ \ x_4 & = &\ \ 6 \\ 2x_1 + 4x_2 +\ \ x_3 - 2x_4 & = & -1 \\ 3x_1 +\ \ x_2 - 2x_3 + 2x_4 & = &\ \ 3 \end{array}$$
To the mathematician in all of us, this is still too much to write if we need to perform several operations to eliminate variables until the system is in strict triangular form. It is also not an efficient way to represent the linear system in a computer. What if instead we return to the column picture and agree that the first column of terms all involve the first independent variable $x_1$, the second column contains only terms with the second independent variable $x_2$, and so on? We could represent the system of coefficients using the column picture
$$x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix}$$
We could make our representation even more efficient by packaging the columns into a single matrix and defining matrix-vector multiplication so that
$$x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix} = \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$
In other words, when we write a matrix multiplied on the right by a column vector
$$\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$
we mean $x_1$ times the first column of the matrix plus $x_2$ times the second column, etc. In this way, matrix-vector multiplication is defined to be the __linear combination__ of column vectors
$$x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix}$$
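This definition is easy to check numerically. The sketch below (a minimal illustration, assuming NumPy is available; the vector $\mathbf{x}$ is chosen arbitrarily) computes the matrix-vector product directly and as a linear combination of the columns, and confirms the two agree:

```python
import numpy as np

# Coefficient matrix from Example 1
A = np.array([[0, -1, -1,  1],
              [1,  1,  1,  1],
              [2,  4,  1, -2],
              [3,  1, -2,  2]])

# An arbitrary vector, chosen only for illustration
x = np.array([1, 2, 3, 4])

# Matrix-vector product computed directly
Ax = A @ x

# The same product as a linear combination of the columns of A
combo = x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2] + x[3]*A[:, 3]

print(np.array_equal(Ax, combo))  # True
```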
If we substitute our new matrix-vector multiplication into our equation we obtain the __matrix equation__
$$\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}$$
We call the matrix in our matrix-vector multiplication the __coefficient matrix__ because the elements of the matrix are the coefficients of the variables from the system of linear equations. We will use __capital__ letters to represent matrices, lower case letters in __bold__ to represent vectors, and Greek and Latin alphabet letters to represent scalars. We adopt this convention so that our linear algebra equations will be easy to read.
We can for example call our coefficient matrix $A$
$$A = \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}$$
We can call our vector of independent variables the vector $\mathbf{x}$
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$
We can call our vector of constants on the right-hand side of the equation the vector $\mathbf{b}$
$$\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}$$
Our equation becomes
$$A\mathbf{x} = \mathbf{b}.$$
We still need to perform several operations on the equations to get the system of linear equations into strict triangular form. Notice that every time we write the equivalent linear system as a matrix equation, the independent variable vector $\mathbf{x}$ is the same vector. We want to write the system as efficiently as possible so that we can avoid repetitive and unnecessary writing. We also need an efficient representation so that we can represent the system of linear equations in a computer. To meet these goals, we will write the linear system of equations using only the coefficient matrix $A$ and the constant vector $\mathbf{b}$. We create a partitioned matrix, that is, a matrix constructed from two or more submatrices
$$
\left[ A | \mathbf{b}\right] =
\left[ \begin{array}{cccc|c}
0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3
\end{array} \right]
$$
This partitioned matrix is called the augmented matrix for the linear system of equations. The dashed vertical bar reminds us that the augmented matrix consists of the coefficient matrix and the constant vector.
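In code, the augmented matrix is simply the coefficient matrix with the constant vector appended as an extra column. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[0, -1, -1,  1],
              [1,  1,  1,  1],
              [2,  4,  1, -2],
              [3,  1, -2,  2]])
b = np.array([0, 6, -1, 3])

# Append b as an extra column to form the augmented matrix [A | b]
augmented = np.hstack([A, b.reshape(-1, 1)])

print(augmented.shape)  # (4, 5)
```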
1.2.2 Elementary Row Operations¶
The augmented matrix is a very dense representation of a system of linear equations. It is useful for a computer because we give the computer only the information necessary to solve the system. The software in our computer only stores the array of numbers
$$
\left[ A | \mathbf{b}\right] =
\left[ \begin{array}{ccccc}
0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3
\end{array} \right]
$$
Notice the lack of the dashed vertical bar that we use to remind us that this is a partitioned matrix. The computer has no concept of what this array of numbers represents. You give the array of numbers meaning. You know that the array of numbers represents a linear system of equations, and in applications you decide that your physical or abstract model is represented by your linear system of equations. You decide what the solution to the system of linear equations implies about the state of your physical or abstract model.
Remember that each row of the augmented matrix represents one of the equations in the linear system. Every operation we performed on the equations in Section 1.1, we can perform on a row of the augmented matrix.
Augmented Matrix¶
$$ \left[\begin{array}{cccc|c} 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right] $$
Linear System of Equations¶
$$\begin{array}{rcl} -\ \ \ x_2 -\ \ x_3 +\ \ x_4 & = &\ \ 0 \\ x_1 +\ \ x_2 +\ \ x_3 +\ \ x_4 & = &\ \ 6 \\ 2x_1 + 4x_2 +\ \ x_3 - 2x_4 & = & -1 \\ 3x_1 +\ \ x_2 - 2x_3 + 2x_4 & = &\ \ 3 \end{array} $$
Recall there are three elementary operations we can perform on our system of equations that result in an equivalent system of equations.
Elementary Operations¶
- Interchange two equations
- Multiply an equation on both sides by a nonzero number
- Add a multiple of one equation to another equation
This gives us three elementary row operations we can perform on our augmented matrix, each of which results in the augmented matrix of an equivalent linear system. Remember an equivalent system of linear equations is not precisely the same linear system, but the equivalent system has exactly the same set of solutions. The three elementary row operations are
Elementary Row Operations¶
$$ \begin{array}{|l|l|} \hline \text{Name} & \text{Operation} \\ \hline \textbf{Type I} & \text{Interchange two rows of the augmented matrix} \\ \hline \textbf{Type II} & \text{Multiply a row by a nonzero number} \\ \hline \textbf{Type III} & \text{Add a multiple of one row of the augmented matrix to another row} \\ \hline \end{array} $$
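The three elementary row operations can be sketched as small functions on a NumPy array. The function names here are our own, chosen for illustration; each returns a new matrix rather than modifying its input:

```python
import numpy as np

def swap_rows(M, i, j):
    """Type I: interchange rows i and j (0-indexed)."""
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale_row(M, i, c):
    """Type II: multiply row i by a nonzero number c."""
    assert c != 0
    M = M.copy()
    M[i] = c * M[i]
    return M

def add_multiple(M, i, j, c):
    """Type III: add c times row j to row i."""
    M = M.copy()
    M[i] = M[i] + c * M[j]
    return M

# The augmented matrix from Example 1
aug = np.array([[0, -1, -1,  1,  0],
                [1,  1,  1,  1,  6],
                [2,  4,  1, -2, -1],
                [3,  1, -2,  2,  3]], dtype=float)

# Type I: interchange the first two rows, as in the text below
step1 = swap_rows(aug, 0, 1)
print(step1[0])  # [1. 1. 1. 1. 6.]
```

Each operation is reversible (swap again, scale by $1/c$, add $-c$ times the row), which is why it produces an equivalent system.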
Type I Elementary Row Operation¶
In our example, the first equation, that is the first row of the augmented matrix, has a zero coefficient for the first independent variable. Let us perform a Type I Elementary Row Operation and interchange the first and second row to obtain
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right]$$
This resulting system of linear equations is not exactly the same as the previous system but has the same set of solutions. We call the old augmented matrix and the new one equivalent matrices. Let us proceed to solve the linear system.
1.2.3 Solving a System of Linear Equations with the Augmented Matrix¶
Type III Elementary Row Operations¶
We identify the first nonzero element of matrix $A$ in the top left corner as our first pivot. This pivot is in column one, so the first column of matrix $A$ is a pivot column, and the first unknown variable $x_1$ is now a pivot variable. The pivot in matrix $A$ is rendered in blue so that it is easy to identify visually. To implement elimination, we need to subtract a multiple of row one from rows three and four to eliminate the variable $x_1$ from those equations. I usually make a note of my row operations to the right of the matrix.
$$\begin{align*}
\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ {\color{#CC0099}2} &\ \ 4 &\ \ 1 & -2 & -1 \\ {\color{#CC0099}3} &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right] &\begin{array}{r} \ \\ \ \\ R_3 - 2R_1 \\ R_4 - 3R_1 \end{array}
\end{align*}$$
These two Type III row operations result in the new linear system
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 0 &\ \ 2 & -1 & -4 & -13 \\ 0 & -2 & -5 & -1 & -15 \end{array}\right]$$
Type II Elementary Row Operation¶
We can use two Type II elementary row operations and multiply rows two and four by $-1$. This results in the equivalent system
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 2 & -1 & -4 & -13 \\ 0 &\ \ 2 &\ \ 5 &\ \ 1 &\ \ 15 \end{array}\right]$$
We identify our second pivot by following the next equation to the first nonzero coefficient. Two Type III elementary row operations performed on the third and fourth row eliminate the second pivot variable $x_2$ from those equations.
$$\begin{align*}
\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ {\color{#CC0099}2} & -1 & -4 & -13 \\ 0 &\ \ {\color{#CC0099}2} &\ \ 5 &\ \ 1 &\ \ 15 \end{array}\right] &\begin{array}{l} \ \\ \ \\ R_3-2R_2 \\ R_4-2R_2 \end{array}
\end{align*}$$
These two Type III elementary row operations result in the equivalent system
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & -3 & -2 & -13 \\ 0 &\ \ 0 &\ \ 3 &\ \ 3 &\ \ 15 \end{array}\right]$$
To save time and writing we perform two different row operations on this system of linear equations. We perform a Type II elementary row operation on equation three and a Type III elementary row operation on equation four.
$$\begin{align*}
\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & -3 & -2 & -13 \\ 0 &\ \ 0 &\ \ 3 &\ \ 3 &\ \ 15 \end{array}\right] &\begin{array}{l} \ \\ \ \\ -R_3 \\ R_4+R_3 \end{array}
\end{align*}$$
Upper Triangular Form¶
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}3} &\ \ 2 &\ \ 13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]$$
This allows us to identify the last two pivots and realize that all of the columns of matrix $A$ are pivot columns, so all of the unknowns $x_1$, $x_2$, $x_3$, $x_4$ are pivot variables. Thus we know that the linear system is independent because all the columns of matrix $A$ are pivot columns. We call such a square matrix nonsingular. The resulting equivalent linear system is called upper triangular because the lower triangle of the matrix on the left of the vertical bar in the partitioned matrix contains all zeros. The resulting equivalent linear system can be written
$$\begin{bmatrix} 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 0 &\ \ 1 &\ \ 1 & -1 \\ 0 &\ \ 0 &\ \ 3 &\ \ 2 \\ 0 &\ \ 0 &\ \ 0 &\ \ 1 \end{bmatrix}\cdot\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 13 \\ 2 \end{bmatrix}$$
The matrices
$$\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4 &\ \ 1 & -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\longleftrightarrow\begin{bmatrix} 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 0 &\ \ 1 &\ \ 1 & -1 \\ 0 &\ \ 0 &\ \ 3 &\ \ 2 \\ 0 &\ \ 0 &\ \ 0 &\ \ 1 \end{bmatrix}$$
are called row equivalent because one can obtain either matrix from the other using a finite number of elementary row operations. The first matrix can be obtained from the last by reversing all of the previous elementary row operations.
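The entire forward elimination above can be replayed in code. A sketch assuming NumPy, applying exactly the row operations noted beside the matrices in the text:

```python
import numpy as np

# Augmented matrix for Example 1
M = np.array([[0, -1, -1,  1,  0],
              [1,  1,  1,  1,  6],
              [2,  4,  1, -2, -1],
              [3,  1, -2,  2,  3]], dtype=float)

M[[0, 1]] = M[[1, 0]]   # Type I:  interchange rows one and two
M[2] -= 2 * M[0]        # Type III: R3 - 2 R1
M[3] -= 3 * M[0]        # Type III: R4 - 3 R1
M[1] *= -1              # Type II:  -R2
M[3] *= -1              # Type II:  -R4
M[2] -= 2 * M[1]        # Type III: R3 - 2 R2
M[3] -= 2 * M[1]        # Type III: R4 - 2 R2
M[3] += M[2]            # Type III: R4 + R3 (before negating row three,
                        #           matching the simultaneous step in the text)
M[2] *= -1              # Type II:  -R3

print(M)
# rows: [1 1 1 1 6], [0 1 1 -1 0], [0 0 3 2 13], [0 0 0 1 2]
```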
1.2.4 Row Echelon Form¶
If we perform one more Type II elementary row operation the equivalent system has the form
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}3} &\ \ 2 &\ \ 13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right] \begin{array}{l} \ \\ \ \\ \dfrac{1}{3}R_3 \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ \frac{2}{3} &\ \ \frac{13}{3} \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]$$
Because the pivots are all equal to $1$, we say the equivalent linear system is in row echelon form. We can perform backward substitution using either upper triangular or row echelon form and obtain a solution. Because we can find a solution, the linear system is called consistent.
Backward Substitution¶
Backward substitution is performed by assigning any free variables and proceeding from the last row to the first row. For each row one should have a true algebraic equation, or be able to solve the equation for one of the pivot variables.
The last equation in our example now reads
$$ x_4 = 2 $$
Proceeding from bottom to top and substituting $2$ for $x_4$ in the next row yields
$$ \begin{align*} x_3 + \frac{2}{3}x_4 = x_3 + \frac{2}{3}\,2 &= \frac{13}{3} \\ \\ x_3 &= \frac{13}{3} - \frac{4}{3} = \frac{9}{3} = 3 \end{align*} $$
Continuing to the next row and substituting for $x_3$ and $x_4$ gives
$$ \begin{align*} x_2 + x_3 - x_4 = x_2 + 3 - 2 &= 0 \\ \\ x_2 &= -1 \end{align*} $$
Finally substituting the values of $x_2$, $x_3$ and $x_4$ into the first equation results in
$$ \begin{align*} x_1 + x_2 + x_3 + x_4 &= x_1 - 1 + 3 + 2 = 6 \\ \\ x_1 + 4 &= 6 \\ \\ x_1 &= 2 \end{align*} $$
This gives us the unique solution $\begin{bmatrix}\ \ 2\ \\ -1\ \\ \ \ 3\ \\ \ \ 2\ \end{bmatrix}$
The solution set is
$$ \left\{\, \begin{bmatrix}\ \ 2\ \\ -1\ \\ \ \ 3\ \\ \ \ 2\ \end{bmatrix} \,\right\} $$
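Backward substitution from the upper triangular form is straightforward to sketch in code. The helper below is our own illustration (it assumes an upper triangular matrix with nonzero diagonal), and NumPy's built-in solver applied to the original system agrees:

```python
import numpy as np

# Upper triangular system U x = c from the elimination above
U = np.array([[1, 1, 1,  1],
              [0, 1, 1, -1],
              [0, 0, 3,  2],
              [0, 0, 0,  1]], dtype=float)
c = np.array([6, 0, 13, 2], dtype=float)

def back_substitute(U, c):
    """Solve U x = c for an upper triangular U with nonzero diagonal,
    working from the last row up to the first."""
    n = len(c)
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

x = back_substitute(U, c)
print(x)  # approximately [2, -1, 3, 2]

# np.linalg.solve on the original system gives the same answer
A = np.array([[0, -1, -1,  1],
              [1,  1,  1,  1],
              [2,  4,  1, -2],
              [3,  1, -2,  2]], dtype=float)
b = np.array([0, 6, -1, 3], dtype=float)
print(np.linalg.solve(A, b))  # approximately [2, -1, 3, 2]
```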
1.2.5 Reduced Row Echelon Form¶
For purposes of demonstration we continue to use elementary row operations to eliminate variables above the pivots as well as below the pivots. To continue we start with the last equation and use Type III elementary row operations to eliminate the pivot variable $x_4$ from equations one, two and three.
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ {\color{#CC0099}1} &\ \ 6 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & {\color{#CC0099}{-1}} &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ {\color{#CC0099}{\frac{2}{3}}} &\ \ \frac{13}{3} \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_4 \\ R_2+R_4 \\ R_3 - \frac{2}{3}R_4 \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 1 &\ \ 0 &\ \ 4 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 1 & \ \ 0 &\ \ 2 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]$$
Using the third equation we perform two Type III elementary row operations on rows one and two to eliminate the pivot variable $x_3$ from equations one and two.
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ {\color{#CC0099}{1}} &\ \ 0 &\ \ 4 \\ 0 &\ \ {\color{#0066CC}1} &\ \ {\color{#CC0099}{1}} & \ \ 0 &\ \ 2 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_3 \\ R_2-R_3 \\ \ \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 1 & \ \ 0 &\ \ 0 &\ \ 1 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]$$
Finally we eliminate the pivot variable $x_2$ from equation one using a Type III elementary row operation.
$$\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ {\color{#CC0099}{1}} & \ \ 0 &\ \ 0 &\ \ 1 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_2 \\ \ \\ \ \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{#0066CC}1} & \ \ 0 & \ \ 0 &\ \ 0 &\ \ 2 \\ 0 &\ \ {\color{#0066CC}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{#0066CC}1} &\ \ 2 \end{array}\right]$$
Reduced Row Echelon Form¶
We say that the equivalent linear system is in reduced row echelon form when the pivot variables are eliminated from all of the other equations (that is, the matrix has zeros below and above each pivot), and each pivot is $1$. Notice that the elementary row operations we used to obtain reduced row echelon form from row echelon form are very similar to the steps we would use to compute a solution using backward substitution from row echelon form. Backward substitution is easiest in reduced row echelon form. The reduced row echelon form of our equivalent linear system of equations yields
$$ x_1 = 2,\ x_2 = -1,\ x_3 = 3,\ x_4 = 2.$$
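The full reduction to reduced row echelon form can be sketched as a short routine built from the three elementary row operations. This is a minimal illustration (no numerically robust pivoting strategy), not a production implementation:

```python
import numpy as np

def rref(M):
    """Reduce M to reduced row echelon form: for each pivot column,
    swap a nonzero entry into place (Type I), scale the pivot to 1
    (Type II), and clear the column above and below (Type III)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = None
        for i in range(r, rows):          # find a pivot at or below row r
            if abs(M[i, c]) > 1e-12:
                pivot = i
                break
        if pivot is None:
            continue                      # free column: move on
        M[[r, pivot]] = M[[pivot, r]]     # Type I
        M[r] = M[r] / M[r, c]             # Type II
        for i in range(rows):             # Type III, above and below
            if i != r:
                M[i] = M[i] - M[i, c] * M[r]
        r += 1
        if r == rows:
            break
    return M

aug = np.array([[0, -1, -1,  1,  0],
                [1,  1,  1,  1,  6],
                [2,  4,  1, -2, -1],
                [3,  1, -2,  2,  3]], dtype=float)

R = rref(aug)
print(R)  # identity block on the left; last column holds [2, -1, 3, 2]
```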
The solution of the matrix form of the linear system $A\mathbf{x} = \mathbf{b}$ or
$$\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\cdot\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix}$$
is the vector
$$\mathbf{x} = \begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix}$$
as
$$ \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\cdot\begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix} =\ 2\begin{bmatrix}\ \ 0 \\ \ \ 1 \\ \ \ 2 \\ \ \ 3 \end{bmatrix} + (-1)\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + 3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + 2\begin{bmatrix}\ \ 1\ \\ \ \ 1\ \\ -2\ \\ \ \ 2 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix} {\color{green}\Large{\checkmark}} $$
The solution set is
$$ \left\{\,\begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix}\,\right\} $$
1.2.6 Performing Gaussian Elimination¶
Anyone enrolled in an HYO section or who needs more help with Gaussian elimination should study
Elimination with Matrices
Dr. Strang discusses and demonstrates solving a linear system using Gaussian Elimination.
Everyone should study these shorter videos
Using Gaussian Elimination
Using Reduced Row Echelon Form
that demonstrate solving linear systems with Gaussian Elimination. Now that we have augmented matrices and Gaussian Elimination, we will no longer use the outdated methods of solving linear systems from Section 1.1.
Use Vectors and Matrices¶
- In order to get credit for solving a linear system in your written work, you must use matrices, vectors, and the techniques taught in this course. Writing out a linear system as in the last section will not get any credit.
1.2.7 Examples¶
Example 2¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 1\ &\ \ 2\ &\ \ 0\ & -1\ \\
\ \ 5\ &\ \ 4\ & -6\ &\ \ 1\ \\
\ \ 0\ &\ \ 4\ &\ \ 4\ & -4\ \\
\ \ 2\ &\ \ 1\ & -3\ &\ \ 1\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix}
$$
Solution¶
$$
\begin{align*}
\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ {\color{#CC0099}5}\ &\ \ 4\ & -6\ &\ \ 1\ & | & -9\ \\
\ \ 0\ &\ \ 4\ &\ \ 4\ & -4\ & | & -4\ \\
\ \ {\color{#CC0099}2}\ &\ \ 1\ & -3\ &\ \ 1\ & | & -3\ \end{bmatrix}
\begin{array}{l} \\ R_2 - 5R_1 \\ \\ R_4 - 2R_1 \end{array} &\rightarrow
\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ & -6\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &\ \ {\color{#0066CC}4}\ &\ \ 4\ & -4\ & | & -4\ \\
\ \ 0\ & -3\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \\ R_3 + R_4 \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ & -6\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &\ \ {\color{#0066CC}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ & -3\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \text{switch }R_3 \\ \text{and }R_2 \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ &\ \ {\color{#0066CC}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &{\color{#CC0099}-6}\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &{\color{#CC0099}-3}\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \\ R_3 + 6R_2 \\ R_4 + 3R_2 \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ {\color{#CC0099}2}\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ &\ \ {\color{#0066CC}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix}
\begin{array}{l} R_1 - 2R_2 \\ \\ \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{#0066CC}1}\ &\ \ 0\ & -2\ &\ \ 1\ & | & -1\ \\
\ \ 0\ &\ \ {\color{#0066CC}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix}
\end{align*}
$$
The upper triangular and row echelon forms of a matrix tell us a lot about both the matrix and the linear system it represents. In this problem, we know that the first and second columns of matrix $A$ are pivot columns, and they are linearly independent columns. We call our coefficients $x_1$ and $x_2$ pivot variables because they are the coefficients of pivot columns. The last two columns are free columns, and they are linearly dependent. They are linear combinations of the two pivot columns. In fact the reduced row echelon form of our matrix tells us that the third column of matrix $A$ is $-2$ times the first column plus the second column. In algebra we write this
$$ \mathbf{a}_3 = -2\mathbf{a}_1 + \mathbf{a}_2 $$
We denote the columns of matrix $A$ using the lower case $\mathbf{a}$. Variables for vectors are set in bold, or decorated to indicate that they are vectors. The subscript indicates which column the variable represents.
The reduced row echelon form of the augmented matrix also tells us that the fourth column is the first column minus the second column. In algebra one writes
$$ \mathbf{a}_4 = \mathbf{a}_1 - \mathbf{a}_2 $$
Free columns $\mathbf{a}_3$ and $\mathbf{a}_4$ are linear combinations of the pivot columns $\mathbf{a}_1$ and $\mathbf{a}_2$. Hence the span of the columns of matrix $A$ is just the span of the first two columns, a plane in four-dimensional space.
This also tells us that any coefficient of a free column is really a coefficient of the pivot columns.
$$ \begin{align*} x_3\mathbf{a}_3 &= -2x_3\mathbf{a}_1 + x_3\mathbf{a}_2 \\ \\ x_4\mathbf{a}_4 &= x_4\mathbf{a}_1 - x_4\mathbf{a}_2 \end{align*} $$
Moreover it tells us that
$$ A\begin{bmatrix}\ \ 2x_3\ \\ -x_3\ \\ \ \ x_3\ \\ \ \ 0\ \end{bmatrix} = 2x_3\mathbf{a}_1 - x_3\mathbf{a}_2 + x_3\mathbf{a}_3 + 0\mathbf{a}_4 = \mathbf{0} $$
and also
$$ A\begin{bmatrix} -x_4\ \\ \ \ x_4\ \\ \ \ 0\ \\ \ \ x_4\ \end{bmatrix} = -x_4\mathbf{a}_1 + x_4\mathbf{a}_2 + 0\mathbf{a}_3 + x_4\mathbf{a}_4 = \mathbf{0} $$
The coefficients of the free columns $\mathbf{a}_3$ and $\mathbf{a}_4$ can be any real number, say $s,\ t\in\mathbb{R}$ and we would still have
$$ A\begin{bmatrix} 2s - t \\ -s + t \\ s \\ t \end{bmatrix} = A\begin{bmatrix}\ 2s \\ -s \\ \ \ s \\ \ \ 0 \end{bmatrix} + A\begin{bmatrix} -t \\ \ \ t \\ \ \ 0 \\ \ \ t \end{bmatrix} = \mathbf{0} $$
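These two null-space relations are easy to verify numerically. A sketch assuming NumPy, using the coefficient matrix from Example 2 with the free variables set to $1$ and $0$ in turn:

```python
import numpy as np

# Coefficient matrix from Example 2
A = np.array([[1,  2,  0, -1],
              [5,  4, -6,  1],
              [0,  4,  4, -4],
              [2,  1, -3,  1]])

# Special solutions read off from the reduced row echelon form
n1 = np.array([2, -1, 1, 0])   # free variables x3 = 1, x4 = 0
n2 = np.array([-1, 1, 0, 1])   # free variables x3 = 0, x4 = 1

print(A @ n1)  # [0 0 0 0]
print(A @ n2)  # [0 0 0 0]
```

Any linear combination of `n1` and `n2` is therefore also sent to the zero vector, which is exactly the statement of the display above.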
Now that we have identified columns 1 and 2 as pivot columns (linearly independent columns) and columns 3 and 4 as free columns (linearly dependent columns), we are ready to determine the solution using backward substitution.
$$
\begin{align*}
x_4 &= t \in\mathbb{R} \\
\\
x_3 &= s \in\mathbb{R} \\
\\
x_2 + s - t &= -1 \\
x_2 &= -1 - s + t \\
\\
x_1 -2s + t &= -1 \\
x_1 &= -1 + 2s - t \\
\\
\mathbf{x} &= \begin{bmatrix} -1 + 2s - t \\ -1 - s + t \\ s \\ t \end{bmatrix} = \begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}
\end{align*}
$$
There are infinitely many solutions because the parameters $s$ and $t$ may be any real numbers. There is in fact a plane of solutions in four-dimensional space $\mathbb{R}^4$.
The solution set is $\left\{\,\begin{bmatrix} -1 + 2s - t \\ -1 - s + t \\ s \\ t \end{bmatrix} = \begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\,:\,s,\ t\in\mathbb{R}\,\right\}$.
Check our solutions!¶
$$
\begin{align*}
A\mathbf{x} &= A\left(\begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\right) \\
\\
&= A\begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + sA\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + tA\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix} \\
\\
&= \begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix} + s\mathbf{0} + t\mathbf{0} = \begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix} {\color{green}\Large{\checkmark}}
\end{align*}
$$
1.2.8 Exercises¶
Try these exercises before looking at the solution.
Exercise 1¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ \\
\ \ 9\ &\ \ 5\ &\ 12\ &\ \ 1\ \\
-9\ & -6\ & -7\ &\ \ 3\ \\
\ \ 3\ &\ \ 4\ &\ \ 1\ &\ \ 5\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix}\ \ 4\ \\ \ 15\ \\ -13\ \\ -5\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 9\ &\ \ 5\ &\ 12\ &\ \ 1\ & | &\ 15\ \\ -9\ & -6\ & -7\ &\ \ 3\ & | & -13\ \\ \ \ 3\ &\ \ 4\ &\ \ 1\ &\ \ 5\ & | & -5\ \end{bmatrix} \begin{array}{l} \\ R_2-3R_1 \\ R_3+3R_1 \\ R_4-R_1 \end{array}&\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 2\ & -2\ &\ \ 5\ & | & -9\ \end{bmatrix} \begin{array}{l} \\ \\ \\ R_4+2R_2 \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 4\ &\ \ 7\ & | & -3\ \end{bmatrix} \begin{array}{l} \\ \\ \\ R_4-2R_3 \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \\ R_2-R_4 \\ R_3-3R_4 \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 0\ & | &\ \ 2\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \\ -R_2 \\ \frac{1}{2}R_3 \\ \\ \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ &\ \ 1\ & -3\ &\ \ 0\ & | & -4\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} R_1-3R_3 \\ R_2+3R_3 \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 0\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} R_1-2R_2 \\ \\ \\ \\ \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 3\ \\ 
\ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \frac{1}{3}R_1 \\ \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} &\ \end{align*} $$
The solution is $\mathbf{x} = \begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ -1\ \end{bmatrix}$.
The solution set is $\left\{\,\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ -1\ \end{bmatrix}\,\right\}$.
Exercise 2¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 3\ &\ \ 6\ &\ \ 3\ &\ \ 0\ \\
-6\ & -9\ & -3\ & -3\ \\
-3\ & -7\ & -4\ &\ \ 1\ \\
\ \ 3\ & -2\ & -5\ &\ \ 8\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -3\ \\ \ \ 9\ \\ \ \ 2\ \\ -11\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 3\ &\ \ 6\ &\ \ 3\ &\ \ 0\ & | & -3\ \\ -6\ & -9\ & -3\ & -3\ & | &\ \ 9\ \\ -3\ & -7\ & -4\ &\ \ 1\ & | &\ \ 2\ \\ \ \ 3\ & -2\ & -5\ &\ \ 8\ & | & -11\ \end{bmatrix} \begin{array}{l} \frac{1}{3}R_1 \\ -\frac{1}{3}R_2 \\ \\ \\ \end{array}&\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 2\ &\ \ 3\ &\ \ 1\ &\ \ 1\ & | & -3\ \\ -3\ & -7\ & -4\ &\ \ 1\ & | &\ \ 2\ \\ \ \ 3\ & -2\ & -5\ &\ \ 8\ & | & -11\ \end{bmatrix} \begin{array}{l} \\ R_2-2R_1 \\ R_3+3R_1 \\ R_4-3R_1 \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 0\ & -1\ & -1\ &\ \ 1\ & | & -1\ \\ \ \ 0\ & -1\ & -1\ &\ \ 1\ & | & -1\ \\ \ \ 0\ & -8\ & -8\ &\ \ 8\ & | & -8\ \end{bmatrix} \begin{array}{l} \\ -R_2 \\ R_3-R_2 \\ R_4-8R_2 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 1\ &\ \ 1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} \begin{array}{l} R_1-2R_2 \\ \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 2\ & | & -3\ \\ \ \ 0\ &\ \ 1\ &\ \ 1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} &\ \end{align*} $$ To find the solution, we need to assign our free variables first. 
$$ \begin{align*} x_3 &= \alpha\in\mathbb{R} \\ x_4 &= \beta\in\mathbb{R} \\ \\ x_1 - \alpha + 2\beta &= -3 \\ x_1 &= -3 + \alpha - 2\beta \\ \\ x_2 + \alpha - \beta &= 1 \\ x_2 &= 1 - \alpha + \beta \\ \\ \mathbf{x} &= \begin{bmatrix} -3 + \alpha - 2\beta \\ 1 - \alpha + \beta \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \begin{bmatrix}\ \ \alpha\ \\ -\alpha\ \\ \ \ \alpha\ \\ \ \ 0\ \end{bmatrix} + \begin{bmatrix} -2\beta\ \\ \ \ \beta\ \\ \ \ 0\ \\ \ \ \beta \end{bmatrix} \\ &= \begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \alpha\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + \beta\begin{bmatrix} -2\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix} \end{align*} $$
The solution is the set $\left\{\,\begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \alpha\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + \beta\begin{bmatrix} -2\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\,:\,\alpha,\ \beta\in\mathbb{R} \right\}$.
Exercise 3¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ \\
-2\ &\ \ 1\ &\ \ 2\ & -2\ \\
-1\ & -1\ &\ \ 0\ & -2\ \\
\ \ 1\ & -2\ &\ \ 0\ &\ \ 2\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -2\ \\ \ \ 2\ \\ \ \ 3\ \\ \ \ 5\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ -2\ &\ \ 1\ &\ \ 2\ & -2\ & | &\ \ 2\ \\ -1\ & -1\ &\ \ 0\ & -2\ & | &\ \ 3\ \\ \ \ 1\ & -2\ &\ \ 0\ &\ \ 2\ & | &\ \ 5\ \end{bmatrix} \begin{array}{l} \\ R_2+2R_1 \\ R_3+R_1 \\ R_4-R_1 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ & -1\ & -1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ & -2\ &\ \ 1\ &\ \ 1\ & | &\ \ 7\ \end{bmatrix} \begin{array}{l} \\ \\ R_3+R_2 \\ R_4+2R_2 \end{array} \\ \\ \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ &\ \ 0\ & -1\ & -1\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 1\ & | &\ \ 3\ \end{bmatrix} \begin{array}{l} \\ \\ -R_3 \\ R_4+R_3 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 2\ \end{bmatrix} \end{align*} $$
The last row now reads
$$ 0x_1 + 0x_2 + 0x_3 + 0x_4 = 2 $$
Since this is impossible, this problem has no solution. The solution set is $\left\{\ \right\}=\emptyset$.
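The inconsistency can also be detected numerically: the rank of the coefficient matrix is smaller than the rank of the augmented matrix, so $\mathbf{b}$ is not a linear combination of the columns of $A$. A sketch assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side from Exercise 3
A = np.array([[ 1,  0, -1,  1],
              [-2,  1,  2, -2],
              [-1, -1,  0, -2],
              [ 1, -2,  0,  2]])
b = np.array([-2, 2, 3, 5])

aug = np.hstack([A, b.reshape(-1, 1)])

print(np.linalg.matrix_rank(A))    # 3
print(np.linalg.matrix_rank(aug))  # 4: appending b raised the rank,
                                   # so the system has no solution
```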
Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0