1.2.1 Coefficient Matrices¶
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{.08, .18, .28} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \definecolor{indigo}{rgb}{.8, 0, .6} \def\ihat{\hat{\mmlToken{mi}[mathvariant="bold"]{ı}}} \def\jhat{\hat{\mmlToken{mi}[mathvariant="bold"]{ȷ}}} \def\khat{\hat{\mathsf{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} \def\textregistered{^{\unicode{xAE}}} $$
In Section 1.1 we introduced Systems of Linear Equations
$$ \begin{array}{rcl} a_{11}x_1 + a_{12}x_2 +\ \cdots\ + a_{1n}x_n & = & b_1 \\ a_{21}x_1 + a_{22}x_2 +\ \cdots\ + a_{2n}x_n & = & b_2 \\ \\ \ddots\, \ \ +\ \ \ddots\, \ \ +\ \cdots\ +\ \ \ \ddots\, & = & \vdots \\ \\ a_{m1}x_1 + a_{m2}x_2 +\ \cdots + a_{mn}x_n & = & b_m \end{array} $$
Let us consider such a system:
Example 1¶
$$ \begin{array}{rcl} -\ \ \ x_2 -\ \ x_3 +\ \ x_4 & = &\ \ 0 \\ x_1 +\ \ x_2 +\ \ x_3 +\ \ x_4 & = &\ \ 6 \\ 2x_1 + 4x_2 +\ \ x_3 - 2x_4 & = & -1 \\ 3x_1 +\ \ x_2 - 2x_3 + 2x_4 & = &\ \ 3 \end{array} $$
To the mathematician in all of us, this is still too much to write when we need to perform several operations to eliminate variables until the system is in strict triangular form. It is also not an efficient way to represent the linear system in a computer. What if instead we return to the column picture and agree that the first column of terms contains all of the terms involving the first independent variable $x_1$, the second column contains only the terms with the second independent variable $x_2$, and so on? We could represent the system of coefficients using the column picture
$$ \begin{bmatrix} \ 0\ \\ \ x_1\ \\ \ 2x_1\ \\ \ 3x_1\ \end{bmatrix} + \begin{bmatrix} -x_2 \\ \ \ x_2 \\ \ \ 4x_2 \\ \ \ x_2 \end{bmatrix} + \begin{bmatrix} -x_3 \\ \ \ x_3 \\ \ \ x_3 \\ -2x_3 \end{bmatrix} + \begin{bmatrix}\ \ x_4 \\ \ \ x_4 \\ -2x_4 \\ \ \ 2x_4 \end{bmatrix} = x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix} $$
We can make our representation even more efficient by packaging the columns into a single matrix and defining matrix-vector multiplication so that
$$ x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix} = \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} $$
In other words, when we write a matrix multiplied on the right by a column vector
$$ \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} $$
we mean $x_1$ times the first column of the matrix plus $x_2$ times the second column, etc. In this way, matrix-vector multiplication is defined to be the linear combination of column vectors
$$ x_1\begin{bmatrix} \ 0\ \\ \ 1\ \\ \ 2\ \\ \ 3\ \end{bmatrix} + x_2\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + x_4\begin{bmatrix}\ \ 1 \\ \ \ 1 \\ -2 \\ \ \ 2 \end{bmatrix} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + x_3\mathbf{a}_3 + x_4\mathbf{a}_4 $$
If we substitute our new matrix-vector multiplication into our equation we obtain the matrix equation
$$ \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} $$
We call the matrix in our matrix-vector multiplication the coefficient matrix because the elements of the matrix are the coefficients of the variables from the system of linear equations. We will use capital letters to represent matrices, lowercase bold letters to represent vectors, and Greek and Latin letters to represent scalars. We use this convention so that our linear algebra equations will be easy to read.
We can, for example, call our coefficient matrix $A$
$$ A = \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix} $$
We can call our vector of independent variables the vector $\mathbf{x}$
$$ \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} $$
We can call our vector of constants on the right-hand side of the equation the vector $\mathbf{b}$
$$ \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} $$
Our equation becomes
$$ A\mathbf{x} = \mathbf{b}. $$
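A quick numerical check of the column-combination definition of $A\mathbf{x}$, using the coefficient matrix from Example 1. This is a sketch assuming NumPy (not part of the text's method); the test vector is an arbitrary choice.

```python
import numpy as np

# Coefficient matrix of Example 1
A = np.array([[0, -1, -1,  1],
              [1,  1,  1,  1],
              [2,  4,  1, -2],
              [3,  1, -2,  2]])
x = np.array([2, -1, 3, 2])          # an arbitrary test vector

# A x as a linear combination of the columns of A:
# x1*a1 + x2*a2 + x3*a3 + x4*a4
combo = sum(x[j] * A[:, j] for j in range(4))

print(np.array_equal(A @ x, combo))  # True: the two definitions agree
```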
We still need to perform several operations on the equations to get the system of linear equations into strict triangular form. Notice that every time we write the equivalent linear system as a matrix equation, the independent variable vector $\mathbf{x}$ is the same vector. We want to write the system as efficiently as possible so that we can avoid repetitive and unnecessary writing. We also need an efficient representation so that we can store the system of linear equations in a computer. To meet these goals, we will write the linear system of equations using only the coefficient matrix $A$ and the constant vector $\mathbf{b}$. We create a partitioned matrix, that is, a matrix constructed of two or more submatrices
$$ \left[\, A \,|\, \mathbf{b} \,\right] = \left[ \begin{array}{cccc|c} 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array} \right] $$
This partitioned matrix is called the augmented matrix for the linear system of equations. The dashed vertical bar reminds us that the augmented matrix consists of the coefficient matrix and the constant vector.
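In code, the augmented matrix is simply the coefficient matrix with the constant vector appended as one more column. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[0., -1., -1.,  1.],
              [1.,  1.,  1.,  1.],
              [2.,  4.,  1., -2.],
              [3.,  1., -2.,  2.]])
b = np.array([0., 6., -1., 3.])

# Augment A with b as an extra column: [ A | b ]
Ab = np.hstack([A, b.reshape(-1, 1)])
print(Ab.shape)  # (4, 5)
```

The computer stores only this array; the vertical bar exists only in our notation.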
1.2.2 Elementary Row Operations¶
The augmented matrix is a very dense representation of a system of linear equations. It is useful for a computer because we give the computer only the information necessary to solve the system. The software in our computer stores only the array of numbers
$$ \left[\, A \,|\, \mathbf{b} \,\right] = \left[ \begin{array}{ccccc} 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array} \right] $$
Notice the lack of the dashed vertical bar that we use to remind us that this is a partitioned matrix. The computer has no concept of what this array of numbers represents. You give the array of numbers meaning. You know that the array of numbers represents a linear system of equations. In applications, you decide that your physical or abstract model is represented by your linear system of equations. You decide what the solution to the system of linear equations implies about the state of your physical or abstract model.
Remember that each row of the augmented matrix represents one of the equations in the linear system. Every operation performed on the equations in Section 1.1 can be performed on a row of the augmented matrix.
Augmented Matrix¶
$$ \left[\begin{array}{cccc|c} 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right] $$
Linear System of Equations¶
$$\begin{array}{rcl} -\ \ \ x_2 -\ \ x_3 +\ \ x_4 & = &\ \ 0 \\ x_1 +\ \ x_2 +\ \ x_3 +\ \ x_4 & = &\ \ 6 \\ 2x_1 + 4x_2 +\ \ x_3 - 2x_4 & = & -1 \\ 3x_1 +\ \ x_2 - 2x_3 + 2x_4 & = &\ \ 3 \end{array} $$
Recall there are three elementary operations we can perform on our system of equations that result in an equivalent system of equations.
Elementary Operations¶
- Interchange two equations
- Multiply an equation on both sides by a nonzero number
- Add a multiple of one equation to another equation
This gives us three elementary row operations we can perform on our augmented matrix, each of which results in the augmented matrix of an equivalent linear system. Remember that an equivalent system of linear equations is not precisely the same linear system, but it has exactly the same set of solutions. The three elementary row operations are
Definition¶
Elementary Row Operations
$$ \begin{array}{|l|l|} \hline \text{Name} & \text{Operation} \\ \hline \textbf{Type I} & \text{Interchange two rows of the augmented matrix} \\ \hline \textbf{Type II} & \text{Multiply a row by a nonzero number} \\ \hline \textbf{Type III} & \text{Add a multiple of one row of the augmented matrix to another row} \\ \hline \end{array} $$
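Each of the three operations corresponds to a one-line array update. A sketch on the augmented matrix of Example 1, assuming NumPy; the particular rows chosen here are arbitrary illustrations:

```python
import numpy as np

M = np.array([[0., -1., -1.,  1.,  0.],
              [1.,  1.,  1.,  1.,  6.],
              [2.,  4.,  1., -2., -1.],
              [3.,  1., -2.,  2.,  3.]])

# Type I: interchange two rows (here rows 1 and 2)
M[[0, 1]] = M[[1, 0]]

# Type II: multiply a row by a nonzero number (here -1 times row 2)
M[1] = -1 * M[1]

# Type III: add a multiple of one row to another (here R3 - 2 R1)
M[2] = M[2] - 2 * M[0]
```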
Type I Elementary Row Operation¶
In our example, the first equation, that is, the first row of the augmented matrix, has a zero coefficient for the first independent variable. Let us perform a Type I elementary row operation and interchange the first and second rows to obtain
$$ \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 2 &\ \ 4 &\ \ 1 & -2 & -1 \\ 3 &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right] $$
The resulting system of linear equations is not exactly the same as the previous system but has the same set of solutions. We call the old augmented matrix and the new one row equivalent matrices. Let us proceed to solve the linear system.
1.2.3 Upper Triangular Form¶
Definition¶
The Diagonal of a Matrix
The diagonal of a matrix consists of the elements whose row and column indices are the same. The diagonal elements of an $m\times n$ matrix $A$ are the elements
$$ a_{ii} $$
for $1\le i\le \min(m,n)$.
- The upper triangle of a matrix consists of the elements whose column index is greater than the row index (above the diagonal).
- The lower triangle of a matrix consists of the elements whose column index is less than the row index (below the diagonal).
Triangular Matrices¶
- A matrix whose lower triangular entries are all zeros is called an upper triangular matrix
- A matrix whose upper triangular entries are all zeros is called a lower triangular matrix
- A matrix that is both upper and lower triangular is called a diagonal matrix
You can read more about triangular matrices, and see some helpful images of them, on Wikipedia.
Elementary row operations are used to reduce a matrix into a simpler form that is readily solvable. Consider the following uses of elementary row operations.
Type III Elementary Row Operations¶
Recall that in the previous section a Type I row operation was employed to obtain a nonzero coefficient in the first element of the augmented matrix. Next we eliminate the first variable from all of the remaining equations using Type III row operations. The selected coefficient is rendered in royal blue, while the coefficients to be eliminated are rendered in indigo.
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ {\color{indigo}2} &\ \ 4 &\ \ 1 & -2 & -1 \\ {\color{indigo}3} &\ \ 1 & -2 &\ \ 2 &\ \ 3 \end{array}\right] &\begin{array}{r} \ \\ \ \\ R_3 - 2R_1 \\ R_4 - 3R_1 \end{array} \end{align*} $$
These two Type III row operations result in the new equivalent linear system
$$ \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 0 &\ \ 2 & -1 & -4 & -13 \\ 0 & -2 & -5 & -1 & -15 \end{array}\right] $$
Type II Elementary Row Operation¶
We can use two Type II elementary row operations and multiply both sides of rows two and four by $-1$. This results in the equivalent system
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 & -1 & -1 & \ \ 1 &\ \ 0 \\ 0 &\ \ 2 & -1 & -4 & -13 \\ 0 & -2 & -5 & -1 & -15 \end{array}\right]\begin{array}{l} \\ -R_2 \\ \\ -R_4 \end{array} &\rightarrow \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ 1 &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 2 & -1 & -4 & -13 \\ 0 &\ \ 2 &\ \ 5 &\ \ 1 &\ \ 15 \end{array}\right] \end{align*} $$
Now only the first equation contains the first variable from the linear system. Proceed down one row and right one column in order to select a nonzero coefficient for the second variable. Type I row operations may be used to obtain a nonzero coefficient in the selected position. Then use type III row operations to eliminate the second variable from the remaining rows.
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ {\color{indigo}2} & -1 & -4 & -13 \\ 0 &\ \ {\color{indigo}2} &\ \ 5 &\ \ 1 &\ \ 15 \end{array}\right] &\begin{array}{l} \ \\ \ \\ R_3-2R_2 \\ R_4-2R_2 \end{array} &\rightarrow \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & -3 & -2 & -13 \\ 0 &\ \ 0 &\ \ 3 &\ \ 3 &\ \ 15 \end{array}\right] \end{align*} $$
Again one proceeds down one row and to the right one column in order to select a nonzero coefficient for the third variable. Type III row operations are used to eliminate the third variable from the remaining rows.
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & {\color{royalblue}-3} & -2 & -13 \\ 0 &\ \ 0 &\ \ {\color{indigo}3} &\ \ 3 &\ \ 15 \end{array}\right] &\begin{array}{l} \ \\ \ \\ \\ R_4+R_3 \end{array} &\rightarrow \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & {\color{royalblue}-3} & -2 & -13 \\ 0 &\ \ 0 &\ \ 0 &\ \ 1 &\ \ 2 \end{array}\right] \end{align*} $$
Finally proceed down to the last row and to the right to the last column. If the remaining coefficient is nonzero then it is selected.
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & {\color{royalblue}-3} & -2 & -13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] \end{align*} $$
In order to make the selected coefficients positive, employ a type II row operation. Multiply both sides of equation three (row three) by $-1$.
$$ \begin{align*} \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 & {\color{royalblue}-3} & -2 & -13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right]\begin{array}{l} \\ \\ -R_3 \\ \end{array} &\rightarrow \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{royalblue}3} &\ \ 2 &\ \ 13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] \end{align*} $$
Notice that the coefficient matrix in the last augmented matrix is an upper triangular matrix.
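The whole sequence of row operations used above can be replayed in a few lines of code. A sketch assuming NumPy, with one line per operation:

```python
import numpy as np

# Augmented matrix of Example 1
M = np.array([[0., -1., -1.,  1.,  0.],
              [1.,  1.,  1.,  1.,  6.],
              [2.,  4.,  1., -2., -1.],
              [3.,  1., -2.,  2.,  3.]])

M[[0, 1]] = M[[1, 0]]      # Type I:   swap R1 and R2
M[2] -= 2 * M[0]           # Type III: R3 - 2 R1
M[3] -= 3 * M[0]           # Type III: R4 - 3 R1
M[1] *= -1                 # Type II:  -R2
M[3] *= -1                 # Type II:  -R4
M[2] -= 2 * M[1]           # Type III: R3 - 2 R2
M[3] -= 2 * M[1]           # Type III: R4 - 2 R2
M[3] += M[2]               # Type III: R4 + R3
M[2] *= -1                 # Type II:  -R3

# M now holds the upper triangular augmented matrix derived above
```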
Definition¶
Upper Triangular Form
The coefficient matrix of a linear system is in upper triangular form when the coefficient matrix is upper triangular.
The linear system represented by the last form of the augmented matrix is given by
$$ \begin{bmatrix} 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 0 &\ \ 1 &\ \ 1 & -1 \\ 0 &\ \ 0 &\ \ 3 &\ \ 2 \\ 0 &\ \ 0 &\ \ 0 &\ \ 1 \end{bmatrix}\cdot\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 13 \\ 2 \end{bmatrix} $$
The matrices
$$ \begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4 &\ \ 1 & -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\longleftrightarrow\begin{bmatrix} 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 0 &\ \ 1 &\ \ 1 & -1 \\ 0 &\ \ 0 &\ \ 3 &\ \ 2 \\ 0 &\ \ 0 &\ \ 0 &\ \ 1 \end{bmatrix} $$
are called row equivalent because one uses a finite number of elementary row operations to obtain either matrix from the other.
Definition¶
Row Equivalent Matrices
Two $m\times n$ matrices $A$ and $B$ are called row equivalent if and only if matrix $B$ may be obtained by performing a finite number of elementary row operations upon matrix $A$.
Since row operations are clearly reversible, this means that matrix $A$ may also be obtained by performing a finite number of elementary row operations upon matrix $B$.
The two row equivalent matrices are not equal. The graphs of the equations represented by their rows are different planes. However, all of the matrices constructed in this section using elementary row operations represent systems with the same set of solutions. Performing elementary row operations changes the description of the planes and their graphs, but it preserves the solutions. For this reason we use the term equivalent.
1.2.4 Gauss-Jordan Elimination¶
Now that we know the three elementary row operations, let us use them to solve some linear systems of equations. The method of
Gauss-Jordan Elimination
should be used by all students to solve any linear system. Even $2\times 2$ linear systems must be solved using Gauss-Jordan Elimination. This allows the student to demonstrate mastery of the method. When completing homework or exams, one must always use these techniques to obtain any credit for solving a problem.
Example 3¶
Use Gauss-Jordan Elimination to reduce the linear system into upper triangular form and use backward substitution to solve
$$ \begin{align*} x_1 \qquad\,\ \ - 3x_3 &= 5 \\ 3x_1 +\,\ x_2 - 2x_3 &= 5 \\ x_1 + 2x_2 - 2x_3 &= -2 \end{align*} $$
First one must create the Augmented Matrix.
Definition¶
Augmented Matrix
The augmented matrix is a matrix created by partitioning the coefficient matrix $A$ with the vector $\mathbf{b}$ from the right-hand side of our linear equation. The coefficient matrix is given by
$$ A = \begin{bmatrix}\ \ 1\ &\ \ 0\ & -3\ \\ \ \ 3\ &\ \ 1\ & -2\ \\ \ \ 1\ &\ \ 2\ & -2\ \end{bmatrix} $$
The target vector $\mathbf{b} = \begin{bmatrix}\ \ 5\ \\ \ \ 5\ \\ -2\ \end{bmatrix}$.
The partitioned matrix becomes
$$ \left[\,A\,|\,\mathbf{b}\,\right] = \begin{bmatrix}\ \ 1\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 3\ &\ \ 1\ & -2\ & | &\ \ 5\ \\ \ \ 1\ &\ \ 2\ & -2\ & | & -2\ \end{bmatrix}. $$
One now performs the three elementary row operations on the augmented matrix to reduce the augmented matrix into upper-triangular form.
1. Determine the first pivot¶
It is customary, but not necessary, to use row operations to get a nonzero value in the first position $a_{11}$ of the matrix. Since the first element of matrix $A$ is already nonzero, $a_{11}=1$, we identify
- This first nonzero coefficient is called a pivot.
- The first column $\mathbf{a}_1$ is a pivot column.
- The first row $\mathbf{a}^1$ is a pivot row.
- The first variable from our problem $x_1$ is a pivot variable.
- The pivot coefficient is the value of the coefficient matrix in the pivot position, $a_{11}=1$.
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 3\ &\ \ 1\ & -2\ & | &\ \ 5\ \\ \ \ 1\ &\ \ 2\ & -2\ & | & -2\ \end{bmatrix} \end{align*} $$
2. Eliminate the pivot variable from all following rows.¶
We proceed to use row operations to eliminate the pivot variable from the equations that follow the pivot row. In this case,
- Subtract three times row one (the pivot row) from row two (a following row)
- Subtract row one (the pivot row) from row three
Everyone should write down the row operation as well as compute it. Arithmetic mistakes happen. We are grading your problem-solving process, not your arithmetic skills. Communicate your steps so that you can receive credit for them in your score.
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ {\color{indigo}3}\ &\ \ 1\ & -2\ & | &\ \ 5\ \\ \ \ {\color{indigo}1}\ &\ \ 2\ & -2\ & | & -2\ \end{bmatrix}\begin{array}{l} \\ R_2-3R_1 \\ R_3-R_1 \\ \end{array} \rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ 1\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 2\ &\ \ 1\ & | & -7\ \end{bmatrix} \end{align*} $$
The notation specifies two type III row operations:
- Replace row 2 with row 2 minus 3 times row 1
- Replace row 3 with row 3 minus row 1
Notice that we use an arrow instead of an equal sign because the augmented matrices and the linear systems they represent are NOT EQUAL! They are equivalent linear systems. This means that they have the same solution. If we continue to use only Type I, Type II, and Type III row operations then each resulting linear system will be equivalent and have the same set of solutions.
3. Identify the next pivot¶
Continue using row operations until the augmented matrix is in upper triangular form. Assess the matrix. Is there a row below the current pivot row with a nonzero entry in the column just to the right of the current pivot column? It might be necessary to use Type I row operations to move a nonzero coefficient into the row immediately below the current pivot row, in the column adjacent to the current pivot column.
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 2\ &\ \ 1\ & | & -7\ \end{bmatrix} \end{align*} $$
The next pivot is $a_{22}=1$. Thus
- the second column $\mathbf{a}_2$ is a pivot column
- the second row $\mathbf{a}^2$ is a pivot row
- the second variable $x_2$ is a pivot variable
4. Repeat step 2 until there are no remaining pivots¶
Once the next pivot is identified, continue to use row operations to eliminate the new pivot variable from all rows below our new pivot row.
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ {\color{indigo}2}\ &\ \ 1\ & | & -7\ \end{bmatrix}\begin{array}{l} \\ \\ R_3 - 2R_2 \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 0\ & -13\ & | &\ 13\ \end{bmatrix} \end{align*} $$
3. We are in step 3 again¶
The next pivot is $a_{33}=-13$
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 0\ & {\color{royalblue}-13}\ & | &\ 13\ \end{bmatrix} \end{align*} $$
- the third column $\mathbf{a}_3$ is a pivot column
- the third row $\mathbf{a}^3$ is a pivot row
- the third variable $x_3$ is a pivot variable
4. We are in step 4 again¶
There are no rows below the current pivot row $\mathbf{a}^3$, so we are done. The augmented matrix is now in upper triangular form.
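Steps 1 through 4 can be collected into a single elimination routine. The following is a sketch assuming NumPy; `forward_eliminate` is a hypothetical helper name, and it works in floating point rather than exact fractions:

```python
import numpy as np

def forward_eliminate(Ab):
    """Reduce an augmented matrix [A | b] to upper triangular form
    using the three elementary row operations (a sketch; assumes
    floating-point arithmetic)."""
    M = Ab.astype(float).copy()
    m, n = M.shape
    row = 0
    for col in range(n - 1):                    # last column is b
        # Type I: find a row at or below `row` with a nonzero entry
        nonzero = np.nonzero(M[row:, col])[0]
        if nonzero.size == 0:
            continue                            # free column, move right
        i = row + nonzero[0]
        M[[row, i]] = M[[i, row]]
        # Type III: eliminate the pivot variable from the rows below
        for r in range(row + 1, m):
            M[r] -= (M[r, col] / M[row, col]) * M[row]
        row += 1
        if row == m:
            break
    return M

# Augmented matrix of Example 3
Ab = np.array([[1., 0., -3.,  5.],
               [3., 1., -2.,  5.],
               [1., 2., -2., -2.]])
U = forward_eliminate(Ab)
```

Running this on the augmented matrix of Example 3 reproduces the upper triangular form obtained by hand above.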
1.2.5 Row Echelon Form¶
Using Gaussian elimination, one reduces a matrix to upper triangular form or row echelon form.
In row echelon form, the pivots are all ones. This has some advantages over upper triangular form; however, it may not be possible to represent the linear system without fractions. Using Type II row operations to ensure that the pivots all have a value of one may force other coefficients to become fractions or real numbers. Unless one is specifically asked for row echelon form, upper triangular form will always suffice.
Definition¶
Row Echelon Form
An upper triangular matrix whose diagonal pivots are all ones is said to be in row echelon form.
Example 2¶
Use Gauss-Jordan Elimination to reduce the linear system into row echelon form.
$$ \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{royalblue}3} &\ \ 2 &\ \ 13 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] \begin{array}{l} \ \\ \ \\ \frac{1}{3}R_3 \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ \frac{2}{3} &\ \ \frac{13}{3} \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] $$
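The scaling step is a handful of Type II operations, one per row. A sketch assuming NumPy; `to_row_echelon` is a hypothetical helper name, and it assumes the input is already upper triangular:

```python
import numpy as np

def to_row_echelon(U):
    """Scale each nonzero row of an upper triangular augmented matrix
    so its leading (pivot) entry is 1, using only Type II operations."""
    M = U.astype(float).copy()
    for r in range(M.shape[0]):
        nz = np.nonzero(M[r, :-1])[0]   # pivot position in this row
        if nz.size:
            M[r] /= M[r, nz[0]]
    return M

# Upper triangular augmented matrix from Example 2
U = np.array([[1., 1., 1.,  1.,  6.],
              [0., 1., 1., -1.,  0.],
              [0., 0., 3.,  2., 13.],
              [0., 0., 0.,  1.,  2.]])
R = to_row_echelon(U)   # third row becomes [0, 0, 1, 2/3, 13/3]
```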
Example 3¶
$$ \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 0\ & {\color{royalblue}-13}\ & | &\ 13\ \end{bmatrix}\begin{array}{l} \\ \\ -\frac{1}{13}R_3 \end{array} \rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 0\ &\ \ {\color{royalblue}1}\ & | & -1\ \end{bmatrix} $$
Example 4¶
Use Gauss-Jordan Elimination to reduce the linear system into row echelon form.
$$ \begin{align*} 2x_1 - 3x_2 +\ \ 7x_3 &= 4 \\ x_1 \qquad\,\ \ +\ \ 3x_3 &= 3 \\ 4x_1 - 9x_2 + 15x_3 &= 6 \end{align*} $$
First one must create the Augmented Matrix. The augmented matrix is a matrix created by partitioning the coefficient matrix $A$ with the vector $\mathbf{b}$ on the right-hand side of our linear equation. The coefficient matrix is given by
$$ A = \begin{bmatrix}\ \ 2\ & -3\ &\ \ 7\ \\ \ \ 1\ &\ \ 0\ &\ \ 3\ \\ \ \ 4\ & -9\ & 15\ \end{bmatrix} $$
The target vector $\mathbf{b} = \begin{bmatrix}\ 4\ \\ \ 3\ \\ \ 6\ \end{bmatrix}$.
The partitioned matrix becomes
$$ \left[\,A\,|\,\mathbf{b}\,\right] = \begin{bmatrix}\ \ 2\ & -3\ &\ \ 7\ & | &\ 4\ \\ \ \ 1\ &\ \ 0\ &\ \ 3\ & | &\ 3\ \\ \ \ 4\ & -9\ & 15\ & | &\ 6\ \end{bmatrix}. $$
One now performs the three elementary row operations on the augmented matrix to reduce the augmented matrix into row echelon form.
$$ \begin{align*} \begin{bmatrix}\ \ 2\ & -3\ &\ \ 7\ & | &\ 4\ \\ \ \ 1\ &\ \ 0\ &\ \ 3\ & | &\ 3\ \\ \ \ 4\ & -9\ & 15\ & | &\ 6\ \end{bmatrix}\begin{array}{l} R_2 \\ R_1 \\ \\ \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ 3\ \\ \ \ 2\ & -3\ &\ \ 7\ & | &\ 4\ \\ \ \ 4\ & -9\ & 15\ & | &\ 6\ \end{bmatrix}\begin{array}{l} \end{array} \quad\text{Type I Row Operation} \\ \\ \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ 3\ \\ \ \ {\color{indigo}2}\ & -3\ &\ \ 7\ & | &\ 4\ \\ \ \ {\color{indigo}4}\ & -9\ & 15\ & | &\ 6\ \end{bmatrix}\begin{array}{l} \\ R_2 - 2R_1 \\ R_3 - 4R_1 \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ & -3\ &\ \ 1\ & | & -2\ \\ \ \ 0\ & -9\ &\ \ 3\ & | & -6\ \end{bmatrix} \quad\text{Type III Row Operations} \\ \\ \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ & {\color{royalblue}-3}\ &\ \ 1\ & | & -2\ \\ \ \ 0\ & {\color{indigo}-9}\ &\ \ 3\ & | & -6\ \end{bmatrix}\begin{array}{l} \\ \\ R_3 - 3R_2 \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ & {\color{royalblue}-3}\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} \quad\text{Type III Row Operation} \\ \\ \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ & {\color{royalblue}-3}\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix}\begin{array}{l} \\ \frac{R_2}{-3} \\ \\ \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue} 1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ &\ \ {\color{royalblue} 1}\ & -\frac{1}{3}\ & | &\ \ \frac{2}{3}\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} \quad\text{Type II Row Operation} \end{align*} $$
1.2.6 Backward Substitution¶
Definition¶
Backward Substitution
Backward substitution is performed by assigning any free variables and then proceeding from the last row to the first row. For each row one finds either
- an algebraically solvable equation that can be solved for one of the pivot variables, or
- an unsolvable algebraic equation, which indicates that the linear system has no solution.
Example 2¶
$$ \left[\begin{array}{cccc|c}\ \ {\color{royalblue}1}\ & \ \ 1 & \ \ 1 &\ \ 1 &\ \ 6\ \\ \ 0 &\ \ {\color{royalblue}1} &\ \ 1 & -1 &\ \ 0\ \\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ \frac{2}{3} &\ \ \frac{13}{3}\ \\ \ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2\ \end{array}\right] $$
Each column of the coefficient matrix in our augmented matrix is a pivot column. The pivots are all ones since the augmented matrix is in row echelon form. Since all of the columns are pivot columns, the linear system is independent. Since all of the equations (rows) are algebraically solvable, the linear system is consistent. Hence the linear system has a unique solution. Employing backward substitution the last equation in our example now reads:
$$ x_4 = 2 $$
Proceeding from bottom to top and substituting $2$ for $x_4$ in the next row yields
$$ \begin{align*} x_3 + \frac{2}{3}x_4 = x_3 + \frac{2}{3}\,2 &= \frac{13}{3} \\ \\ x_3 &= \frac{13}{3} - \frac{4}{3} = \frac{9}{3} = 3 \end{align*} $$
Continuing to the next row and substituting for $x_3$ and $x_4$ gives
$$ \begin{align*} x_2 + x_3 - x_4 = x_2 + 3 - 2 &= 0 \\ \\ x_2 &= -1 \end{align*} $$
Finally substituting the values of $x_2$, $x_3$ and $x_4$ into the first equation results in
$$ \begin{align*} x_1 + x_2 + x_3 + x_4 &= x_1 - 1 + 3 + 2 = 6 \\ \\ x_1 + 4 &= 6 \\ \\ x_1 &= 2 \end{align*} $$
This gives us the unique solution $\begin{bmatrix}\ \ 2\ \\ -1\ \\ \ \ 3\ \\ \ \ 2\ \end{bmatrix} = \langle\, 2,\ -1,\ 3,\ 2 \,\rangle$
The solution set is
$$ \left\{\, \begin{bmatrix}\ \ 2\ \\ -1\ \\ \ \ 3\ \\ \ \ 2\ \end{bmatrix} \,\right\} $$
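The bottom-to-top substitutions above all follow one pattern: subtract the already-known terms and divide by the pivot. A sketch of backward substitution for this example, assuming NumPy:

```python
import numpy as np

# Row echelon form of Example 2, split into U (coefficients) and c (constants)
U = np.array([[1., 1., 1.,  1.],
              [0., 1., 1., -1.],
              [0., 0., 1., 2/3],
              [0., 0., 0.,  1.]])
c = np.array([6., 0., 13/3, 2.])

# Backward substitution: solve for x4, then x3, x2, x1
x = np.zeros(4)
for i in range(3, -1, -1):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(x)  # the unique solution: x1=2, x2=-1, x3=3, x4=2
```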
Example 3¶
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & -3\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 7\ & | & -10\ \\ \ \ 0\ &\ \ 0\ &\ \ {\color{royalblue}1}\ & | & -1\ \end{bmatrix} \end{align*} $$
Each column of matrix $A$ in our augmented matrix is a pivot column. Since all of the columns are pivot columns, the linear system is independent. Since all of the equations (rows) are algebraically solvable, the linear system is consistent. Hence the linear system has a unique solution. Now that the linear system is in row echelon form, employ backward substitution to obtain the solution.
$$ \begin{align*} x_3 &= -1 \\ \\ x_2 + 7(-1) &= -10 \\ x_2 &= -3 \\ \\ x_1 - 3(-1) &= 5 \\ x_1 &= 2 \end{align*} $$
From this one reports that the solution to the linear system $A\mathbf{x} = \mathbf{b}$ is the vector
$$ \mathbf{x} = \begin{bmatrix}\ \ 2\ \\ -3\ \\ -1\ \end{bmatrix} = \langle 2,\ -3,\ -1 \rangle $$
The solution set is
$$ \left\{ \begin{bmatrix}\ \ 2\ \\ -3\ \\ -1\ \end{bmatrix} \right\} $$
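As a cross-check on the hand computation (an aside, not part of the method being taught), a numerical solver returns the same vector:

```python
import numpy as np

# Coefficient matrix and right-hand side of Example 3
A = np.array([[1., 0., -3.],
              [3., 1., -2.],
              [1., 2., -2.]])
b = np.array([5., 5., -2.])

x = np.linalg.solve(A, b)
print(x)  # solution: x1=2, x2=-3, x3=-1
```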
Example 4¶
$$ \begin{bmatrix}\ \ {\color{royalblue} 1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ &\ \ {\color{royalblue} 1}\ & -\frac{1}{3}\ & | &\ \ \frac{2}{3}\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} $$
In this problem the first two columns are pivot columns. Assessing the reduced matrix reveals important information about matrix $A$.
| Pivots | Pivot Columns | Pivot Rows | Pivot Variables |
|---|---|---|---|
| $1,\ 1$ | $\mathbf{a}_1,\ \mathbf{a}_2$ | $\mathbf{a}^1,\ \mathbf{a}^2$ | $x_1,\ x_2$ |
Notice that column three $\mathbf{a}_3$ has no pivot. We call $\mathbf{a}_3$ a free column. Column three is a linear combination of the pivot columns. It adds nothing to the span of the columns of our linear system. Hence
- the third column $\mathbf{a}_3$ is a free column
- the third row $\mathbf{a}^3$ is a free row
- the third variable $x_3$ is a free variable
We call $x_3$ a free variable because no matter what value $x_3$ is assigned, the pivot variables can be adjusted to remove it from the calculations. In this problem
$$ \begin{align*} \mathbf{a}_3 &= 3\mathbf{a}_1 - \frac{1}{3}\mathbf{a}_2 \\ \\ \mathbf{a}_3 &= 3\begin{bmatrix}\ 2\ \\ \ 1\ \\ \ 4\ \end{bmatrix} - \frac{1}{3}\begin{bmatrix} -3\ \\ \ \ 0\ \\ -9\ \end{bmatrix} = \begin{bmatrix}\ 6\ \\ \ 3\ \\ 12\ \end{bmatrix} + \begin{bmatrix}\ 1\ \\ \ 0\ \\ \ 3\ \end{bmatrix} = \begin{bmatrix}\ 7\ \\ \ 3\ \\ 15\ \end{bmatrix} \quad{\color{green}\Large{\checkmark}} \end{align*} $$
Consequently any scalar multiple of the free column can be eliminated from the linear combination of columns.
$$ {\color{indigo}x_3}\mathbf{a}_3 - 3{\color{indigo}x_3}\mathbf{a}_1 + \frac{{\color{indigo}x_3}}{3}\mathbf{a}_2 = {\color{indigo}x_3}\left( \begin{bmatrix}\ 7\ \\ \ 3\ \\ 15\ \end{bmatrix} - 3\begin{bmatrix}\ 2\ \\ \ 1\ \\ \ 4\ \end{bmatrix} + \frac{1}{3}\begin{bmatrix} -3\ \\ \ \ 0\ \\ -9\ \end{bmatrix} \right) = x_3\begin{bmatrix}\ 0\ \\ \ 0\ \\ \ 0\ \end{bmatrix} = \mathbf{0} $$
This means that the coefficient can be any real number and the pivot variables adjust to still obtain $A\mathbf{x}= \mathbf{b}$.
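The cancellation above is easy to verify numerically. A quick NumPy check (an illustration only; the columns are the ones given in this example):

```python
import numpy as np

# Columns of the original coefficient matrix from this example.
a1 = np.array([2.0, 1.0, 4.0])
a2 = np.array([-3.0, 0.0, -9.0])
a3 = np.array([7.0, 3.0, 15.0])

# The free column is the stated combination of the pivot columns,
# so a3 - 3*a1 + (1/3)*a2 collapses to the zero vector.
residual = a3 - 3 * a1 + (1.0 / 3.0) * a2
print(residual)  # the zero vector
```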
Since the third column is a free column, the linear system is dependent. Since none of the three rows (equations) reduces to an impossible statement, the linear system is consistent. Hence the linear system has infinitely many solutions, as the free variable can have any real value.
Notice that the last equation $0x_1 + 0x_2 + 0x_3 = 0$ tells us nothing about $x_3$. We say that $x_3$ can be any real number $s$,
$$ x_3 = s\in\mathbb{R} $$
Continuing using backward substitution one obtains
$$ \begin{align*} x_2 - \frac{1}{3}s &= \frac{2}{3} \\ x_2 &= \frac{1}{3}s + \frac{2}{3} \\ \\ x_1 + 3s &= 3 \\ x_1 &= -3s + 3 \end{align*} $$
One reports that the infinitely many solutions have the form
$$ \begin{align*} \mathbf{x} &= \begin{bmatrix} -3s + 3 \\ \frac{1}{3}s + \frac{2}{3} \\ s \end{bmatrix} = \left\langle -3s+3,\ \frac{1}{3}s+\frac{2}{3},\ s \right\rangle \\ \\ &= {\color{red} \begin{bmatrix} -3\ \\ \ \ \frac{1}{3}\ \\ \ \ 1\ \end{bmatrix}}s + {\color{blue}\begin{bmatrix}\ 3\ \\ \ \frac{2}{3}\ \\ \ 0\ \end{bmatrix}},\ \text{for any }s\in\mathbb{R} \end{align*} $$
The set of infinitely many vectors $\left\{\, {\color{red} \begin{bmatrix} -3\ \\ \ \ \frac{1}{3}\ \\ \ \ 1\ \end{bmatrix}}s\,:\,s\in\mathbb{R}\,\right\}$ is called the homogeneous solution. The vector ${\color{blue}\begin{bmatrix}\ 3\ \\ \ \frac{2}{3}\ \\ \ 0\ \end{bmatrix}}$ is called a particular solution.
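The particular-plus-homogeneous structure can be checked against the reduced system itself. A NumPy sketch (the two pivot rows $[\,R\,|\,\mathbf{c}\,]$ are taken from the augmented matrix above; the sampled values of $s$ are arbitrary):

```python
import numpy as np

# Pivot rows of the reduced augmented system [R | c] from this example.
R = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -1.0 / 3.0]])
c = np.array([3.0, 2.0 / 3.0])

part = np.array([3.0, 2.0 / 3.0, 0.0])   # particular solution
hom = np.array([-3.0, 1.0 / 3.0, 1.0])   # homogeneous direction

# Every x = part + s*hom satisfies R x = c, for any real s.
for s in (-2.0, 0.0, 5.0):
    assert np.allclose(R @ (part + s * hom), c)
print("R x = c for every sampled s")
```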
1.2.7 Reduced Row Echelon Form¶
Reduced Row Echelon Form
is the goal of Gauss-Jordan Elimination. Reduced Row Echelon Form is used to perform the Column-Rank Factorization (CR). Backward substitution is simplest when a linear system is written in reduced row echelon form. Gauss-Jordan Elimination is also used to reduce a matrix into Jordan Normal Form and Hermite Normal Form. Column-Rank Factorization, Jordan Normal Form, and Hermite Normal Form are introduced later in the course.
Definition¶
Reduced Row Echelon Form
We say that an $m\times n$ matrix $A$ is in reduced row echelon form if the matrix is in row echelon form, and the pivot variables are eliminated from all of the other equations. The matrix has zeros below and above each pivot. Notice that backward substitution is easiest in reduced row echelon form.
Example 2¶
For purposes of demonstration we continue to use elementary row operations to eliminate variables above the pivots as well as below the pivots. To continue we start with the last equation and use Type III elementary row operations to eliminate the pivot variable $x_4$ from equations one, two and three.
$$\left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ {\color{indigo}1} &\ \ 6 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & {\color{indigo}{-1}} &\ \ 0 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ {\color{indigo}{\frac{2}{3}}} &\ \ \frac{13}{3} \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_4 \\ R_2+R_4 \\ R_3 - \frac{2}{3}R_4 \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 1 &\ \ 0 &\ \ 4 \\ 0 &\ \ {\color{royalblue}1} &\ \ 1 & \ \ 0 &\ \ 2 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right]$$
Using the third equation we perform two Type III elementary row operations on rows one and two to eliminate the pivot variable $x_3$ from equations one and two.
$$ \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ {\color{indigo}{1}} &\ \ 0 &\ \ 4 \\ 0 &\ \ {\color{royalblue}1} &\ \ {\color{indigo}{1}} & \ \ 0 &\ \ 2 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_3 \\ R_2-R_3 \\ \ \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 1 & \ \ 0 &\ \ 0 &\ \ 1 \\ 0 &\ \ {\color{royalblue}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] $$
Finally we eliminate the pivot variable $x_2$ from equation one using a Type III elementary row operation.
$$ \left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ {\color{indigo}{1}} & \ \ 0 &\ \ 0 &\ \ 1 \\ 0 &\ \ {\color{royalblue}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right]\begin{array}{l} R_1-R_2 \\ \ \\ \ \\ \ \end{array}\longrightarrow\left[\begin{array}{cccc|c} {\color{royalblue}1} & \ \ 0 & \ \ 0 &\ \ 0 &\ \ 2 \\ 0 &\ \ {\color{royalblue}1} &\ \ 0 & \ \ 0 & -1 \\ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 0 &\ \ 3 \\ 0 &\ \ 0 &\ \ 0 &\ \ {\color{royalblue}1} &\ \ 2 \end{array}\right] $$
Now backward substitution is very easy to perform.
$$ x_1 = 2,\ x_2 = -1,\ x_3 = 3,\ x_4 = 2.$$
The solution of the matrix form of the linear system $A\mathbf{x} = \mathbf{b}$ or
$$\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\cdot\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix}$$
is the vector
$$\mathbf{x} = \begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix}$$
as
$$
\begin{bmatrix} 0 & -1 & -1 & \ \ 1 \\ 1 & \ \ 1 & \ \ 1 &\ \ 1 \\ 2 &\ \ 4&\ \ 1& -2 \\ 3 &\ \ 1 & -2 &\ \ 2 \end{bmatrix}\cdot\begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix} =\ 2\begin{bmatrix}\ \ 0 \\ \ \ 1 \\ \ \ 2 \\ \ \ 3 \end{bmatrix} + (-1)\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 4 \\ \ \ 1 \end{bmatrix} + 3\begin{bmatrix} -1 \\ \ \ 1 \\ \ \ 1 \\ -2 \end{bmatrix} + 2\begin{bmatrix}\ \ 1\ \\ \ \ 1\ \\ -2\ \\ \ \ 2 \end{bmatrix} = \begin{bmatrix}\ \ 0 \\ \ \ 6 \\ -1 \\ \ \ 3 \end{bmatrix} {\color{green}\Large{\checkmark}}
$$
The solution set is
$$ \left\{\,\begin{bmatrix}\ \ 2 \\ -1 \\ \ \ 3 \\ \ \ 2 \end{bmatrix}\,\right\} $$
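The whole reduction of Example 2 can be reproduced in one call. A SymPy sketch (SymPy is an assumption of this illustration; `Matrix.rref` performs Gauss-Jordan Elimination on the augmented matrix entered directly):

```python
from sympy import Matrix

# Augmented matrix [A | b] from Example 2.
M = Matrix([[0, -1, -1,  1,  0],
            [1,  1,  1,  1,  6],
            [2,  4,  1, -2, -1],
            [3,  1, -2,  2,  3]])

R, pivots = M.rref()
print(pivots)   # (0, 1, 2, 3): every column of A is a pivot column
print(R[:, 4])  # the last column holds the solution 2, -1, 3, 2
```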
Example 3¶
As in Example 2, one proceeds from the last pivot column leftward to the first, eliminating variables above the pivots.
$$ \begin{align*} \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ & {\color{indigo}-3}\ & | &\ \ 5\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ {\color{indigo}7}\ & | & -10\ \\ \ \ 0\ &\ \ 0\ &\ \ {\color{royalblue}1}\ & | & -1\ \end{bmatrix}\begin{array}{l} R_1+3R_3 \\ R_2 - 7R_3 \\ \\ \end{array} &\rightarrow \begin{bmatrix}\ \ {\color{royalblue}1}\ &\ \ 0\ &\ \ 0\ & | &\ \ 2\ \\ \ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 0\ & | & -3\ \\ \ \ 0\ &\ \ 0\ &\ \ {\color{royalblue}1}\ & | & -1\ \end{bmatrix} \end{align*} $$
The solution is very easy to determine. The solution set is
$$ \left\{ \begin{bmatrix}\ \ 2\ \\ -3\ \\ -1\ \end{bmatrix} \right\} $$
Example 4¶
$$ \begin{bmatrix}\ \ {\color{royalblue} 1}\ &\ \ 0\ &\ \ 3\ & | &\ \ 3\ \\ \ \ 0\ &\ \ {\color{royalblue} 1}\ & -\frac{1}{3}\ & | &\ \ \frac{2}{3}\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} $$
In this problem, the row echelon form of the matrix and the reduced row echelon form are the same.
Example 5¶
Use Gauss-Jordan Elimination to reduce the linear system into reduced row echelon form and use backward substitution to solve
$$ \begin{align*} x_1 +\,\ x_2 +\ \ 4x_3 &= 6 \\ 2x_1 + 5x_2 + 11x_3 &= 3 \\ x_1 + 4x_2 +\ \ 7x_3 &= 4 \end{align*} $$
First one must create the Augmented Matrix. The augmented matrix is a matrix created by partitioning the coefficient matrix $A$ with the vector $\mathbf{b}$ on the right-hand side of our linear equation. The coefficient matrix is given by
$$ A = \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ \\ \ 2\ &\ 5\ &\ 11\ \\ \ 1\ &\ 4\ &\ 7\ \end{bmatrix} $$
The target vector $\mathbf{b} = \begin{bmatrix}\ 6\ \\ \ 3\ \\ \ 4\ \end{bmatrix}$.
The partitioned matrix becomes
$$ \left[\,A\,|\,\mathbf{b}\,\right] = \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ & | &\ 6\ \\ \ 2\ &\ 5\ &\ 11\ & | &\ 3\ \\ \ 1\ &\ 4\ &\ 7\ & | &\ 4\ \end{bmatrix}. $$
One now performs the three elementary row operations on the augmented matrix to reduce the augmented matrix into reduced row echelon form.
$$ \begin{align*} \begin{bmatrix}\ {\color{royalblue}1}\ &\ 1\ &\ 4\ & | &\ 6\ \\ \ {\color{indigo}2}\ &\ 5\ &\ 11\ & | &\ 3\ \\ \ {\color{indigo}1}\ &\ 4\ &\ 7\ & | &\ 4\ \end{bmatrix}\begin{array}{l} \\ R_2-2R_1 \\ R_3-R_1 \end{array} &\rightarrow \begin{bmatrix}\ {\color{royalblue}1}\ &\ 1\ &\ 4\ & | &\ 6\ \\ \ 0\ &\ {\color{royalblue}3}\ &\ 3\ & | & -9\ \\ \ 0\ &\ {\color{indigo}3}\ &\ 3\ & | & -2\ \end{bmatrix}\begin{array}{l} \\ \\ R_3-R_2 \end{array} \\ \\ &\rightarrow \begin{bmatrix}\ {\color{royalblue}1}\ &\ 1\ &\ 4\ & | &\ 6\ \\ \ 0\ &\ {\color{royalblue}3}\ &\ 3\ & | & -9\ \\ \ 0\ &\ 0\ &\ 0\ & | &\ 7\ \end{bmatrix}\begin{array}{l} \\ R_2/3 \\ \\ \end{array} \qquad\text{Upper Triangular Form} \\ \\ &\rightarrow \begin{bmatrix}\ {\color{royalblue}1}\ &\ 1\ &\ 4\ & | &\ 6\ \\ \ 0\ &\ {\color{royalblue}1}\ &\ 1\ & | & -3\ \\ \ 0\ &\ 0\ &\ 0\ & | &\ 7\ \end{bmatrix}\begin{array}{l} R_1-R_2 \\ \\ \\ \end{array} \quad\text{Row Echelon Form} \\ \\ &\rightarrow \begin{bmatrix}\ {\color{royalblue} 1}\ &\ 0\ &\ 3\ & | &\ 9\ \\ \ 0\ &\ {\color{royalblue} 1}\ &\ 1\ & | & -3\ \\ \ 0\ &\ 0\ &\ 0\ & | &\ 7\ \end{bmatrix} \qquad\qquad\text{Reduced Row Echelon Form} \end{align*} $$
Now that all of the pivots have zeros above and below them, and the pivot coefficients are ones, the augmented matrix is in reduced row echelon form. Since matrix $A$ has at least one free column, the linear system is dependent. However, the last row (equation) reads
$$ 0x_1 + 0x_2 + 0x_3 = 7 $$
This is impossible for any values of $x_1$, $x_2$, or $x_3$. Hence the linear system is inconsistent. There are no solutions.
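Inconsistency shows up mechanically in the reduced form: the augmented column itself becomes a pivot column. A SymPy sketch for Example 5 (note that SymPy scales the impossible row $0 = 7$ to $0 = 1$):

```python
from sympy import Matrix

# Augmented matrix [A | b] from Example 5.
M = Matrix([[1, 1,  4, 6],
            [2, 5, 11, 3],
            [1, 4,  7, 4]])

R, pivots = M.rref()
print(R)
print(pivots)  # (0, 1, 3): a pivot in the augmented column means no solution
```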
1.2.8 Homogeneous Linear Systems¶
Definition¶
Homogeneous Linear System
A linear system $A\mathbf{x}=\mathbf{b}$ is called homogeneous when vector $\mathbf{b}$ is the zero vector, $\mathbf{b}=\mathbf{0}$.
A homogeneous linear system is ALWAYS consistent because zero is always a solution, $A\mathbf{0} = \mathbf{0}$.
Associated Homogeneous Linear System
Every linear system of equations $A\mathbf{x}=\mathbf{b}$ has an associated homogeneous linear system of the form $A\mathbf{x}=\mathbf{0}$.
Example 5¶
In this example, the associated homogeneous linear system is given by
$$ \begin{align*} x_1 +\,\ x_2 +\ \ 4x_3 &= 0 \\ 2x_1 + 5x_2 + 11x_3 &= 0 \\ x_1 + 4x_2 +\ \ 7x_3 &= 0 \end{align*} $$
or
$$ A\mathbf{x} = \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ \\ \ 2\ &\ 5\ &\ 11\ \\ \ 1\ &\ 4\ &\ 7\ \end{bmatrix}\mathbf{x} = \begin{bmatrix}\ 0\ \\ \ 0\ \\ \ 0\ \end{bmatrix} = \mathbf{0} $$
The augmented matrix has a column of zeros that is never changed by any row operation. As a result, the column is usually excluded.
$$ \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ & | &\ 0\ \\ \ 2\ &\ 5\ &\ 11\ & | &\ 0\ \\ \ 1\ &\ 4\ &\ 7\ & | &\ 0\ \end{bmatrix} = \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ \\ \ 2\ &\ 5\ &\ 11\ \\ \ 1\ &\ 4\ &\ 7\ \end{bmatrix} $$
The homogeneous version of the augmented matrix omits the right partition because its entries are always zero!
This means that when an augmented matrix has no vertical bar, the right partition of zeros is implied. One must include a vertical bar in the augmented matrix in order to properly communicate the correct vector $\mathbf{b}$ when submitting work for a homework or exam problem.
Solving the Associated Homogeneous Linear System¶
Precisely the same row operations are employed to reduce any version of $A\mathbf{x} = \mathbf{b}$. The form of the coefficient matrix is always the same. It is the effect the row operations have upon vector $\mathbf{b}$, the right-hand partition of the augmented matrix, that produces the solution set.
When solving the associated homogeneous linear system one obtains the row equivalent forms
$$ \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ \\ \ 2\ &\ 5\ &\ 11\ \\ \ 1\ &\ 4\ &\ 7\ \end{bmatrix} \longleftrightarrow \begin{bmatrix}\ {\color{royalblue} 1}\ &\ 0\ &\ {\color{indigo}3}\ \\ \ 0\ &\ {\color{royalblue} 1}\ &\ {\color{indigo}1}\ \\ \ 0\ &\ 0\ &\ 0\ \end{bmatrix} $$
Performing backward substitution yields
$$ \begin{align*} x_3 &= s\in\mathbb{R} \\ \\ x_2 + s &= 0 \\ x_2 &= -s \\ \\ x_1 + 3s &= 0 \\ x_1 &= -3s \end{align*} $$
The homogeneous solution is the set
$$ \left\{\, \begin{bmatrix} -3s\ \\ -s\ \\ \ \ s\ \end{bmatrix} \,:\, s\in\mathbb{R} \,\right\} $$
Every value of $s$ produces a solution of the homogeneous system; consider in particular $s=-1$. When $s=-1$, the linear system of equations becomes
$$ \begin{align*} \begin{bmatrix}\ 1\ &\ 1\ &\ 4\ \\ \ 2\ &\ 5\ &\ 11\ \\ \ 1\ &\ 4\ &\ 7\ \end{bmatrix}\begin{bmatrix}\ \ 3\ \\ \ \ 1\ \\ -1\ \end{bmatrix} &= \begin{bmatrix}\ 0\ \\ \ 0\ \\ \ 0\ \end{bmatrix} \\ \\ 3\begin{bmatrix}\ 1\ \\ \ 2\ \\ \ 1\ \end{bmatrix} + \begin{bmatrix}\ 1\ \\ \ 5\ \\ \ 4\ \end{bmatrix} - \begin{bmatrix}\ \ 4\ \\ 11\ \\ \ \ 7\ \end{bmatrix} &= \mathbf{0} \\ \\ 3\begin{bmatrix}\ 1\ \\ \ 2\ \\ \ 1\ \end{bmatrix} + \begin{bmatrix}\ 1\ \\ \ 5\ \\ \ 4\ \end{bmatrix} &= \begin{bmatrix}\ \ 4\ \\ 11\ \\ \ \ 7\ \end{bmatrix} \end{align*} $$
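A direct numerical check of this homogeneous solution (an illustration with NumPy; the matrix is the coefficient matrix of Example 5):

```python
import numpy as np

A = np.array([[1.0, 1.0,  4.0],
              [2.0, 5.0, 11.0],
              [1.0, 4.0,  7.0]])

# The homogeneous solution (-3s, -s, s) evaluated at s = -1.
x = np.array([3.0, 1.0, -1.0])
print(A @ x)  # the zero vector
```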
Linear Combinations¶
In Example 5 we also see one of the advantages of reduced row echelon form. The reduced row echelon form of a coefficient matrix conveys important information about the linearly dependent, or free, columns. Recall the reduced row echelon form
$$ \begin{bmatrix}\ {\color{royalblue} 1}\ &\ 0\ &\ {\color{indigo}3}\ \\ \ 0\ &\ {\color{royalblue} 1}\ &\ {\color{indigo}1}\ \\ \ 0\ &\ 0\ &\ 0\ \end{bmatrix} $$
The entries of the free column furnish the linear combination of the pivot columns of the original coefficient matrix that yields the free column. In Example 5,
$$ \begin{align*} {\color{indigo}3}{\color{royalblue}\mathbf{a}_1} + {\color{indigo}1}{\color{royalblue}\mathbf{a}_2} &= {\color{indigo}\mathbf{a}_3} \\ \\ {\color{indigo}3}{\color{royalblue}\begin{bmatrix}\ 1\ \\ \ 2\ \\ \ 1\ \end{bmatrix}} + {\color{indigo}1}{\color{royalblue}\begin{bmatrix}\ 1\ \\ \ 5\ \\ \ 4\ \end{bmatrix}} &= {\color{royalblue}\begin{bmatrix}\ {\color{indigo}3}(1) + {\color{indigo}1}(1)\ \\ \ {\color{indigo}3}(2) + {\color{indigo}1}(5)\ \\ \ {\color{indigo}3}(1) + {\color{indigo}1}(4)\ \end{bmatrix}} = {\color{indigo}\begin{bmatrix}\ 4\ \\ \ 11\ \\ \ 7\ \end{bmatrix}} \\ \end{align*} $$
Definition¶
Pivot Columns and Free Columns
Every free column of an $m\times n$ matrix $A$ can be written as a linear combination of the pivot columns. The coefficients of the pivot columns are revealed in the reduced row echelon form of the matrix.
Applying this important property of the columns of a matrix yields the column-rank factorization, null space, column space, and row space of a matrix, all subjects studied later in this course.
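This property is easy to see computationally. A SymPy sketch using the coefficient matrix of Example 5, where the free column's entries in the reduced row echelon form supply the coefficients:

```python
from sympy import Matrix

A = Matrix([[1, 1,  4],
            [2, 5, 11],
            [1, 4,  7]])

R, pivots = A.rref()
print(pivots)   # (0, 1): the first two columns are pivot columns
print(R[:, 2])  # entries 3 and 1: coefficients of the pivot columns

# The free column is exactly that combination of the pivot columns.
assert A[:, 2] == 3 * A[:, 0] + 1 * A[:, 1]
```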
1.2.9 Examples¶
This would be a good spot to take a break, and return refreshed to study Dr. Strang's video Elimination with Matrices before continuing with more examples. You need only watch the lecture up to the 19:04 mark. We will return to the lecture in
Section 2.4.
Example 6¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 1\ &\ \ 2\ &\ \ 0\ & -1\ \\
\ \ 5\ &\ \ 4\ & -6\ &\ \ 1\ \\
\ \ 0\ &\ \ 4\ &\ \ 4\ & -4\ \\
\ \ 2\ &\ \ 1\ & -3\ &\ \ 1\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix}
$$
Solution¶
$$
\begin{align*}
\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ {\color{indigo}5}\ &\ \ 4\ & -6\ &\ \ 1\ & | & -9\ \\
\ \ 0\ &\ \ 4\ &\ \ 4\ & -4\ & | & -4\ \\
\ \ {\color{indigo}2}\ &\ \ 1\ & -3\ &\ \ 1\ & | & -3\ \end{bmatrix}
\begin{array}{l} \\ R_2 - 5R_1 \\ \\ R_4 - 2R_1 \end{array} &\rightarrow
\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ & -6\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &\ \ {\color{royalblue}4}\ &\ \ 4\ & -4\ & | & -4\ \\
\ \ 0\ & -3\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \\ R_3 + R_4 \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ & -6\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ & -3\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \text{switch }R_3 \\ \text{and }R_2 \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ 2\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &{\color{indigo}-6}\ & -6\ &\ \ 6\ & | &\ \ 6\ \\
\ \ 0\ &{\color{indigo}-3}\ & -3\ &\ \ 3\ & | &\ \ 3\ \end{bmatrix}
\begin{array}{l} \\ \\ R_3 + 6R_2 \\ R_4 + 3R_2 \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ {\color{indigo}2}\ &\ \ 0\ & -1\ & | & -3\ \\
\ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix}
\begin{array}{l} R_1 - 2R_2 \\ \\ \\ \\ \end{array} \\
\\
&\rightarrow\begin{bmatrix}
\ \ {\color{royalblue}1}\ &\ \ 0\ & -2\ &\ \ 1\ & | & -1\ \\
\ \ 0\ &\ \ {\color{royalblue}1}\ &\ \ 1\ & -1\ & | & -1\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\
\ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix}
\end{align*}
$$
This reduced row echelon form of the linear system reveals much about both the coefficient matrix and the linear system it represents.
Property | Objects |
---|---|
Pivot Columns | $\mathbf{a}_1$ and $\mathbf{a}_2$ |
Pivot Variables | $x_1$ and $x_2$ |
Free Columns | $\mathbf{a}_3$ and $\mathbf{a}_4$ |
Free Variables | $x_3$ and $x_4$ |
Additionally the reduced row echelon form of the coefficient matrix conveys the linear combinations of the pivot columns that yield the free columns of the original matrix.
$$ \begin{align*} \text{column 3:}\quad &\mathbf{a}_3 = \begin{bmatrix} -2\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} &\mathbf{a}_3 &= -2\mathbf{a}_1 + \mathbf{a}_2 \\ \\ \text{column 4:}\quad &\mathbf{a}_4 = \begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} &\mathbf{a}_4 &= \mathbf{a}_1 - \mathbf{a}_2 \end{align*} $$
Free columns $\mathbf{a}_3$ and $\mathbf{a}_4$ are linear combinations of the pivot columns $\mathbf{a}_1$ and $\mathbf{a}_2$. Hence the span of the columns of matrix $A$ is just the span of the first two columns, a plane in four-dimensional space.
This also tells us that any coefficient of a free column is really a coefficient of the pivot columns.
$$ \begin{align*} x_3\mathbf{a}_3 &= -2x_3\mathbf{a}_1 + x_3\mathbf{a}_2 \\ \\ x_4\mathbf{a}_4 &= x_4\mathbf{a}_1 - x_4\mathbf{a}_2 \end{align*} $$
Moreover it tells us that
$$ A\begin{bmatrix}\ \ 2x_3\ \\ -x_3\ \\ \ \ x_3\ \\ \ \ 0\ \end{bmatrix} = 2x_3\mathbf{a}_1 - x_3\mathbf{a}_2 + x_3\mathbf{a}_3 + 0\mathbf{a}_4 = \mathbf{0} $$
and also
$$ A\begin{bmatrix} -x_4\ \\ \ \ x_4\ \\ \ \ 0\ \\ \ \ x_4\ \end{bmatrix} = -x_4\mathbf{a}_1 + x_4\mathbf{a}_2 + 0\mathbf{a}_3 + x_4\mathbf{a}_4 = \mathbf{0} $$
The coefficients of the free columns $\mathbf{a}_3$ and $\mathbf{a}_4$ can be any real number, say $s,\ t\in\mathbb{R}$ and we would still have
$$ A\begin{bmatrix} 2s - t \\ -s + t \\ s \\ t \end{bmatrix} = A\begin{bmatrix}\ 2s \\ -s \\ \ \ s \\ \ \ 0 \end{bmatrix} + A\begin{bmatrix} -t \\ \ \ t \\ \ \ 0 \\ \ \ t \end{bmatrix} = \mathbf{0} $$
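These null combinations can be sampled numerically. A NumPy check using the coefficient matrix of Example 6 (the pairs $(s, t)$ below are arbitrary samples):

```python
import numpy as np

A = np.array([[1.0, 2.0,  0.0, -1.0],
              [5.0, 4.0, -6.0,  1.0],
              [0.0, 4.0,  4.0, -4.0],
              [2.0, 1.0, -3.0,  1.0]])

# Any real s and t give a vector that A maps to the zero vector.
for s, t in [(1.0, 0.0), (0.0, 1.0), (2.5, -4.0)]:
    x = np.array([2 * s - t, -s + t, s, t])
    assert np.allclose(A @ x, 0.0)
print("A x = 0 for every sampled s, t")
```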
Now that we have identified columns 1 and 2 as pivot columns (linearly independent columns) and columns 3 and 4 as free columns (linearly dependent columns), we are ready to determine the solution using backward substitution.
$$
\begin{align*}
x_4 &= t \in\mathbb{R} \\
\\
x_3 &= s \in\mathbb{R} \\
\\
x_2 + s - t &= -1 \\
x_2 &= -1 - s + t \\
\\
x_1 -2s + t &= -1 \\
x_1 &= -1 + 2s - t \\
\\
\mathbf{x} &= \begin{bmatrix} -1 + 2s - t \\ -1 - s + t \\ s \\ t \end{bmatrix} = \begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}
\end{align*}
$$
There are infinitely many solutions because variables $s$ and $t$ may be any real number. There is in fact a plane of solutions in the four-dimensional space $\mathbb{R}^4$.
The solution set is $\left\{\,\begin{bmatrix} -1 + 2s - t \\ -1 - s + t \\ s \\ t \end{bmatrix} = \begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\,:\,s,\ t\in\mathbb{R}\,\right\}$.
The infinite set of solutions
$$
\left\{\, s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\,:\,s,\ t\in\mathbb{R}\, \right\}
$$
is called the homogeneous solution. The vector
$$
\begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix}
$$
is called a particular solution.
Check Our Solutions!¶
$$
\begin{align*}
A\mathbf{x} &= A\left(\begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + s\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + t\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\right) \\
\\
&= A\begin{bmatrix} -1\ \\ -1 \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + sA\begin{bmatrix} \ \ 2\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + tA\begin{bmatrix} -1\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix} \\
\\
&= \begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix} + s\mathbf{0} + t\mathbf{0} = \begin{bmatrix} -3\ \\ -9\ \\ -4\ \\ -3\ \end{bmatrix} {\color{green}\Large{\checkmark}}
\end{align*}
$$
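The same check is immediate numerically. A NumPy sketch verifying the particular solution and both homogeneous directions of Example 6 at once (the sampled $(s, t)$ pairs are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0,  0.0, -1.0],
              [5.0, 4.0, -6.0,  1.0],
              [0.0, 4.0,  4.0, -4.0],
              [2.0, 1.0, -3.0,  1.0]])
b = np.array([-3.0, -9.0, -4.0, -3.0])

xp = np.array([-1.0, -1.0, 0.0, 0.0])   # particular solution
v1 = np.array([ 2.0, -1.0, 1.0, 0.0])   # homogeneous direction for s
v2 = np.array([-1.0,  1.0, 0.0, 1.0])   # homogeneous direction for t

# Every member of the solution family satisfies A x = b.
for s, t in [(0.0, 0.0), (1.0, -1.0), (3.0, 7.0)]:
    assert np.allclose(A @ (xp + s * v1 + t * v2), b)
print("A x = b for every sampled s, t")
```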
1.2.10 Exercises¶
Try these exercises before looking at the solutions.
Exercise 1¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ \\
\ \ 9\ &\ \ 5\ &\ 12\ &\ \ 1\ \\
-9\ & -6\ & -7\ &\ \ 3\ \\
\ \ 3\ &\ \ 4\ &\ \ 1\ &\ \ 5\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix}\ \ 4\ \\ \ 15\ \\ -13\ \\ -5\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 9\ &\ \ 5\ &\ 12\ &\ \ 1\ & | &\ 15\ \\ -9\ & -6\ & -7\ &\ \ 3\ & | & -13\ \\ \ \ 3\ &\ \ 4\ &\ \ 1\ &\ \ 5\ & | & -5\ \end{bmatrix} \begin{array}{l} \\ R_2-3R_1 \\ R_3+3R_1 \\ R_4-R_1 \end{array}&\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 2\ & -2\ &\ \ 5\ & | & -9\ \end{bmatrix} \begin{array}{l} \\ \\ \\ R_4+2R_2 \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 4\ &\ \ 7\ & | & -3\ \end{bmatrix} \begin{array}{l} \\ \\ \\ R_4-2R_3 \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 1\ & | &\ \ 3\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 3\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \\ R_2-R_4 \\ R_3-3R_4 \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ & -1\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ &\ \ 0\ &\ \ 2\ &\ \ 0\ & | &\ \ 2\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \\ -R_2 \\ \frac{1}{2}R_3 \\ \\ \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 3\ &\ \ 0\ & | &\ \ 4\ \\ \ \ 0\ &\ \ 1\ & -3\ &\ \ 0\ & | & -4\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} R_1-3R_3 \\ R_2+3R_3 \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 3\ &\ \ 2\ &\ \ 0\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} R_1-2R_2 \\ \\ \\ \\ \end{array} &\rightarrow \begin{bmatrix} \ \ 3\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 3\ \\ 
\ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} \begin{array}{l} \frac{1}{3}R_1 \\ \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 0\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 1\ & | & -1\ \end{bmatrix} &\ \end{align*} $$
The solution is $\mathbf{x} = \begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ -1\ \end{bmatrix}$.
The solution set is $\left\{\,\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ -1\ \end{bmatrix}\,\right\}$.
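Since Exercise 1 has a unique solution, its coefficient matrix is invertible and a numerical solver reproduces the answer. A quick NumPy check (an illustration, not part of the required hand work):

```python
import numpy as np

A = np.array([[ 3.0,  2.0,  3.0, 0.0],
              [ 9.0,  5.0, 12.0, 1.0],
              [-9.0, -6.0, -7.0, 3.0],
              [ 3.0,  4.0,  1.0, 5.0]])
b = np.array([4.0, 15.0, -13.0, -5.0])

# Solve A x = b directly and compare with the hand computation.
x = np.linalg.solve(A, b)
print(x)  # approximately (1, -1, 1, -1)
```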
Exercise 2¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 3\ &\ \ 6\ &\ \ 3\ &\ \ 0\ \\
-6\ & -9\ & -3\ & -3\ \\
-3\ & -7\ & -4\ &\ \ 1\ \\
\ \ 3\ & -2\ & -5\ &\ \ 8\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -3\ \\ \ \ 9\ \\ \ \ 2\ \\ -11\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 3\ &\ \ 6\ &\ \ 3\ &\ \ 0\ & | & -3\ \\ -6\ & -9\ & -3\ & -3\ & | &\ \ 9\ \\ -3\ & -7\ & -4\ &\ \ 1\ & | &\ \ 2\ \\ \ \ 3\ & -2\ & -5\ &\ \ 8\ & | & -11\ \end{bmatrix} \begin{array}{l} \frac{1}{3}R_1 \\ -\frac{1}{3}R_2 \\ \\ \\ \end{array}&\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 2\ &\ \ 3\ &\ \ 1\ &\ \ 1\ & | & -3\ \\ -3\ & -7\ & -4\ &\ \ 1\ & | &\ \ 2\ \\ \ \ 3\ & -2\ & -5\ &\ \ 8\ & | & -11\ \end{bmatrix} \begin{array}{l} \\ R_2-2R_1 \\ R_3+3R_1 \\ R_4-3R_1 \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 0\ & -1\ & -1\ &\ \ 1\ & | & -1\ \\ \ \ 0\ & -1\ & -1\ &\ \ 1\ & | & -1\ \\ \ \ 0\ & -8\ & -8\ &\ \ 8\ & | & -8\ \end{bmatrix} \begin{array}{l} \\ -R_2 \\ R_3-R_2 \\ R_4-8R_2 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 2\ &\ \ 1\ &\ \ 0\ & | & -1\ \\ \ \ 0\ &\ \ 1\ &\ \ 1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} \begin{array}{l} R_1-2R_2 \\ \\ \\ \\ \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 2\ & | & -3\ \\ \ \ 0\ &\ \ 1\ &\ \ 1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 0\ \end{bmatrix} &\ \end{align*} $$ To find the solution, we need to assign our free variables first. 
$$ \begin{align*} x_3 &= \alpha\in\mathbb{R} \\ x_4 &= \beta\in\mathbb{R} \\ \\ x_1 - \alpha + 2\beta &= -3 \\ x_1 &= -3 + \alpha - 2\beta \\ \\ x_2 + \alpha - \beta &= 1 \\ x_2 &= 1 - \alpha + \beta \\ \\ \mathbf{x} &= \begin{bmatrix} -3 + \alpha - 2\beta \\ 1 - \alpha + \beta \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \begin{bmatrix}\ \ \alpha\ \\ -\alpha\ \\ \ \ \alpha\ \\ \ \ 0\ \end{bmatrix} + \begin{bmatrix} -2\beta\ \\ \ \ \beta\ \\ \ \ 0\ \\ \ \ \beta \end{bmatrix} \\ &= \begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \alpha\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + \beta\begin{bmatrix} -2\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix} \end{align*} $$
The solution is the set $\left\{\,\begin{bmatrix} -3\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 0\ \end{bmatrix} + \alpha\begin{bmatrix}\ \ 1\ \\ -1\ \\ \ \ 1\ \\ \ \ 0\ \end{bmatrix} + \beta\begin{bmatrix} -2\ \\ \ \ 1\ \\ \ \ 0\ \\ \ \ 1\ \end{bmatrix}\,:\,\alpha,\ \beta\in\mathbb{R} \right\}$.
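The solution family of Exercise 2 can be verified numerically as well. A NumPy sketch checking the particular solution and both free directions (the sampled $(\alpha, \beta)$ pairs are arbitrary):

```python
import numpy as np

A = np.array([[ 3.0,  6.0,  3.0,  0.0],
              [-6.0, -9.0, -3.0, -3.0],
              [-3.0, -7.0, -4.0,  1.0],
              [ 3.0, -2.0, -5.0,  8.0]])
b = np.array([-3.0, 9.0, 2.0, -11.0])

xp = np.array([-3.0,  1.0, 0.0, 0.0])   # particular solution
v1 = np.array([ 1.0, -1.0, 1.0, 0.0])   # direction for alpha
v2 = np.array([-2.0,  1.0, 0.0, 1.0])   # direction for beta

# Every member of the solution family satisfies A x = b.
for alpha, beta in [(0.0, 0.0), (2.0, -1.0), (-5.0, 3.0)]:
    assert np.allclose(A @ (xp + alpha * v1 + beta * v2), b)
print("A x = b for every sampled alpha, beta")
```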
Exercise 3¶
Use an augmented matrix, Gaussian elimination, and backward substitution to solve the linear system
$$
\begin{bmatrix}
\ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ \\
-2\ &\ \ 1\ &\ \ 2\ & -2\ \\
-1\ & -1\ &\ \ 0\ & -2\ \\
\ \ 1\ & -2\ &\ \ 0\ &\ \ 2\ \end{bmatrix}\mathbf{x} =
\begin{bmatrix} -2\ \\ \ \ 2\ \\ \ \ 3\ \\ \ \ 5\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ -2\ &\ \ 1\ &\ \ 2\ & -2\ & | &\ \ 2\ \\ -1\ & -1\ &\ \ 0\ & -2\ & | &\ \ 3\ \\ \ \ 1\ & -2\ &\ \ 0\ &\ \ 2\ & | &\ \ 5\ \end{bmatrix} \begin{array}{l} \\ R_2+2R_1 \\ R_3+R_1 \\ R_4-R_1 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ & -1\ & -1\ & -1\ & | &\ \ 1\ \\ \ \ 0\ & -2\ &\ \ 1\ &\ \ 1\ & | &\ \ 7\ \end{bmatrix} \begin{array}{l} \\ \\ R_3+R_2 \\ R_4+2R_2 \end{array} \\ \\ \rightarrow\begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ &\ \ 0\ & -1\ & -1\ & | & -1\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 1\ & | &\ \ 3\ \end{bmatrix} \begin{array}{l} \\ \\ -R_3 \\ R_4+R_3 \end{array} &\rightarrow \begin{bmatrix} \ \ 1\ &\ \ 0\ & -1\ &\ \ 1\ & | & -2\ \\ \ \ 0\ &\ \ 1\ &\ \ 0\ &\ \ 0\ & | & -2\ \\ \ \ 0\ &\ \ 0\ &\ \ 1\ &\ \ 1\ & | &\ \ 1\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ &\ \ 0\ & | &\ \ 2\ \end{bmatrix} \end{align*} $$
The last row now reads
$$ 0x_1 + 0x_2 + 0x_3 + 0x_4 = 2 $$
Since this is impossible, this problem has no solution. The solution set is $\left\{\ \right\}=\emptyset$.
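As in Example 5, the inconsistency is visible mechanically: the augmented column becomes a pivot column. A SymPy sketch for Exercise 3 (SymPy scales the impossible row $0 = 2$ to $0 = 1$):

```python
from sympy import Matrix

# Augmented matrix [A | b] from Exercise 3.
M = Matrix([[ 1,  0, -1,  1, -2],
            [-2,  1,  2, -2,  2],
            [-1, -1,  0, -2,  3],
            [ 1, -2,  0,  2,  5]])

R, pivots = M.rref()
print(pivots)  # a pivot in the augmented column (index 4): no solution
```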
Use Vectors and Matrices¶
In order to get credit for solving a linear system in your written work, you must use matrices, vectors, and the techniques taught in this course.