M511: Linear Algebra
3.1 Determinants
3.1.1 Determinants¶
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{0.0, 0.0, 1.0} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \def\ihat{\mathbf{\hat{\unicode{x0131}}}} \def\jhat{\mathbf{\hat{\unicode{x0237}}}} \def\khat{\mathrm{\hat{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} $$
The determinant of an $n\times n$ matrix is the factor by which the matrix scales volume (area in two dimensions) in $n$-dimensional space, taking orientation into account.
The determinant is best understood through this geometric notion and its properties.
We denote the determinant of an $n\times n$ matrix $A$ by
$$ \det(A) = |A| $$
Notice that we use absolute value bars even though the determinant may be negative; a negative value indicates that the orientation of space has been changed by the linear transformation encoded by the matrix.
For example if $A$ is the matrix
$$A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix},$$
then we denote the determinant of $A$ by either
$$ \det\left(A\right) = \left|\,A\,\right| = \det\left(\begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}\right) = \begin{vmatrix} 3 & 0 \\ 0 & 2 \end{vmatrix} = 6. $$
This indicates that the area of any region $D$ in the domain $\mathbb{R}^2$ will be stretched to a region six times the original area of $D$ and orientation will be preserved.
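As a quick numerical sanity check (a sketch using NumPy, which is not part of the course's by-hand methods), we can confirm that this matrix scales area by a factor of six:

```python
import numpy as np

# The example matrix: stretch x by 3 and y by 2.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# det(A) is the signed factor by which A scales area in R^2.
d = np.linalg.det(A)
print(d)
```

A positive value confirms that orientation is preserved as well.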
3.1.2 Domains and Codomains¶
Let us take a minute to discuss some vocabulary for matrices and the linear transformations they encode. Whenever one writes the definition of a function, one is expected to include
- A set of valid inputs for the function called its domain
- A set of valid outputs for the function called its codomain
- A rule or set of rules that defines exactly one output in the codomain for every input in the domain.
The third item declares that every element of the domain must be mapped to some element of the codomain, and that each element of the domain must be mapped to only one element of the codomain. When graphing functions with one real input and one real output on the Cartesian plane, this requirement is often referred to as the vertical line test.
We are not always interested in the entire domain of a function; we are often interested in what happens to some subset of the domain when the function is applied. Using the verb applied here connotes the idea of motion in space. However it can also imply that a process has occurred that somehow results in the appropriate output for each input.
By the time one studies differential or integral calculus much of these details are simply implied by an equation or expression. For example when one reads about the real function
$$
y = \sqrt{9 - x^2},
$$
the reader is expected to determine the definition of the function just from the equation.
The domain of the implied function is the interval $[-3,3]$. Square brackets indicate that the endpoints $\pm3$ are included in the interval; that is, the interval is a closed interval.
The codomain of the implied function is the real line $\mathbb{R}$, or the interval of the entire real line $(-\infty,\infty)$.
The rule for a function usually requires a variable to represent each element of the equation defining the rule:
- The independent variable $x$ represents an input from the domain $[-3,3]$, so $-3\le x\le 3$
- The dependent variable $y$ represents an output from the codomain, and the value of the output can be computed from the equation
$$ y = f(x) = \sqrt{9 - x^2} $$
When the author and reader agree, a function doesn't really require an explicit name like $f$; in many cases the output variable can be used as a substitute for the name of the function. For example, the derivative of the function above may be written
$$
y' = \dfrac{-x}{\sqrt{9-x^2}}
$$
Likewise, one typically writes a matrix expecting the reader to infer properties about the linear transformation represented by the matrix. For example
$$
A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}
$$
This $2\times 2$ matrix represents a linear transformation from $\mathbb{R}^2$ to $\mathbb{R}^2$.
- The domain of the linear transformation is all of the vector space $\mathbb{R}^2$.
- The codomain of the linear transformation is all of the vector space $\mathbb{R}^2$.
- The rule for computing an output vector $\mathbf{y}$ from an input vector $\mathbf{x}$ is given by
$$ \mathbf{y} = A\mathbf{x} = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3x_1 \\ 2x_2 \end{bmatrix} $$
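This rule is easy to check numerically. The following NumPy sketch (the input vector $(1,1)$ is just an illustrative choice) applies $A$ to a vector:

```python
import numpy as np

A = np.array([[3, 0],
              [0, 2]])
x = np.array([1, 1])   # an input vector from the domain R^2

y = A @ x              # the rule y = Ax
print(y)               # [3 2]
```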
3.1.3 Subsets, Regions, and Images¶
A subset of some set $U$ is just a collection of some of the elements of $U$. If all of the elements of set $A$ are also in set $U$, we denote that set $A$ is a subset of $U$ by
$$ A\subset U $$
Some subsets are trivial
$\varnothing$ denotes the empty set, which has no elements. The empty set $\ \varnothing = \left\{\,\right\}$, and $\ \varnothing\subset U$ no matter what set $\ U$ is (even if $\ U = \varnothing$!)
$U\subset U$; that is every set is a subset of itself because every element of the set $\ U$ is also an element of $\ U$.
In linear algebra every vector space contains at least one element, the zero vector $\mathbf{0}$. A vector space with only this one element $\left\{\mathbf{0}\right\}$ is called a trivial vector space.
Some subsets are intended to restrict our attention to a specific region of the domain.
$[0,3]\subset[-3,3]$, perhaps we are only interested in the nonnegative inputs.
A region is a set (or subset) with desirable properties. In linear algebra the term region is often used to refer to a subset of a vector space whose graph is an area or volume.
For example the unit square in $\mathbb{R}^2$ is the subset of the plane $\mathbb{R}^2$ that consists of a square with vertices $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$.
The unit square is often called a region of the plane or a domain. In this case we restrict the domain of our linear transformation to only those vectors that lie in the unit square; that is, vectors whose tips, in standard position, sit in the unit square. Such a vector $\mathbf{v}$ has standard coordinates satisfying
$$ \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix},\ 0\le v_1\le 1,\ 0\le v_2\le 1 $$
A set can be described by listing its elements; however, most sets of interest, including the unit square, are too large to list, so we describe them using set-builder notation. The unit square, for example, is
$$ \left\{ (x,y)\,:\,x,y\in [0,1] \right\} $$
Set-builder notation consists of
- a domain of discourse that specifies the types of numbers we are considering
- a vertical bar or colon to separate the domain from the rule
- a rule that the elements of the domain must satisfy to be members of the set.
The following are equivalent:
$$
\begin{align*}
U &= \left\{\,\mathbf{v}\in\mathbb{R}^2\,|\,0\le v_1\le 1,\ 0\le v_2\le 1\,\right\} \\
\\
U &= \left\{\,\mathbf{v}\in\mathbb{R}^2\,:\,0\le v_1\le 1,\ 0\le v_2\le 1\,\right\} \\
\end{align*}
$$
When we apply the linear transformation $A$ to the unit square we obtain a region in the codomain $\mathbb{R}^2$.
The image of an element of the domain is the output value mapped to the input element. So if $y=\sqrt{9 - x^2}$, then the image of $3$ is $y = \sqrt{9 - 3^2} = 0$; the image of 3 is 0.
Image can also be used to refer to subsets of the codomain. If $\mathbf{y} = A\mathbf{x}$, then
- the image of the zero vector is $A\mathbf{0}=\mathbf{0}$, the zero vector
- the image of the empty set $\ \varnothing$ is the empty set $\ \varnothing$
- the image of the line segment from $(0,0)$ to $(1,0)$ in the domain is the line segment from $(0,0)$ to $(3,0)$ in the codomain.
Exercise 1¶
How do we know that the image of a straight line segment is a straight line segment?
View Solution
Linear transformations keep parallel lines parallel and evenly spaced.
Exercise 2¶
Show algebraically that for $m\times n$ matrix $A$ the image of a straight line in $\mathbb{R}^n$ by the linear transformation $\mathbf{y} = A\mathbf{x}$ is a straight line in $\mathbb{R}^m$.
View Solution
The equation of a straight line in $\mathbb{R}^n$ is given by
$$ \mathbf{x} = \mathbf{m}t + \mathbf{b}, $$
where the vector $\mathbf{m}$ is called the slope vector or direction vector, and vector $\mathbf{b}$ is any vector on the line.
The set of vectors on the line is $D = \left\{ \mathbf{m}t + \mathbf{b}\,:\,t\in\mathbb{R}\,\right\}\subset\mathbb{R}^n$, the domain. The image of this line is given by
$$ \mathbf{y} = A\mathbf{x} = A\left(\mathbf{m}t + \mathbf{b}\right) = A\mathbf{m}t + A\mathbf{b} = \left(A\mathbf{m}\right)t + A\mathbf{b} $$
The equation $\mathbf{y} = \left(A\mathbf{m}\right)t + A\mathbf{b}$ is the equation of a line with slope vector $A\mathbf{m}$ passing through the vector $A\mathbf{b}$. This establishes that the image of a line in $\mathbb{R}^n$ is a line in $\mathbb{R}^m$.
- Notice that the images of two parallel lines
$$ \begin{align*} \mathbf{y} &= \mathbf{m}t + {\color{brightblue} \mathbf{b}_1} \\ \mathbf{y} &= \mathbf{m}t + {\color{softmagenta} \mathbf{b}_2} \\ \\ &\text{is given by} \\ \\ \mathbf{y} &= A\mathbf{m}t + {\color{brightblue} A\mathbf{b}_1} \\ \mathbf{y} &= A\mathbf{m}t + {\color{softmagenta} A\mathbf{b}_2} \\ \\ &\text{The image lines both have the same slope so they are still parallel lines.} \end{align*} $$
- Notice also that the lines pass through $\ {\color{brightblue} \mathbf{b}_1}$ and $\ {\color{softmagenta} \mathbf{b}_2}$ while the image lines pass through ${\color{brightblue} A\mathbf{b}_1}$ and ${\color{softmagenta} A\mathbf{b}_2}$. Any two parallel lines separated by $\ \mathbf{b_1} - \mathbf{b_2}$ will have images separated by $\ A\left(\mathbf{b_1} - \mathbf{b_2}\right)$, so they will remain evenly spaced.
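The argument above can be illustrated numerically. This NumPy sketch (with an arbitrarily chosen slope vector and intercepts) maps two parallel lines through $A$ and shows that both images share the direction vector $A\mathbf{m}$:

```python
import numpy as np

A = np.array([[3, 0],
              [0, 2]])
m = np.array([1, 2])          # shared slope (direction) vector
b1 = np.array([0, 0])
b2 = np.array([1, -1])

# Sample each line at t = 0 and t = 1, then map the points through A.
p0, p1 = A @ (0 * m + b1), A @ (1 * m + b1)
q0, q1 = A @ (0 * m + b2), A @ (1 * m + b2)

# Both image lines have the same direction vector A m, so they stay parallel.
print((p1 - p0).tolist(), (q1 - q0).tolist())
```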
3.1.4 Determinant of the Unit Square¶
The image of a set of elements of the domain is written just like the image of a single element of the domain. The image of the region $S$ in the domain of function $f$ is denoted
$$ f(S) = \left\{\,f(x)\,:\,x\in S\,\right\} $$
In the same way, if $D$ is the unit square in the domain, the image $R$ of the unit square under the linear transformation $A$ is the region given by
$$ R = A(D) = \left\{\,A\mathbf{v}\,|\,\mathbf{v}\in D\,\right\} $$
Consider our example matrix
$$ A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} $$
Since the area of the unit square is one and the determinant of matrix $A$ is the scaling factor $6$, matrix $A$ stretches the area of any region of the domain by a factor of six. Hence the area of the region $R = A(D)$ is
$$ \text{Area}(R) = 6\cdot\text{Area}(D) = 6\cdot 1 = 6 $$
This answer can be readily obtained from the graph of the image above. This means that we can compute the determinant of a $2\times 2$ matrix by looking at the image of the unit square when the matrix is applied to it. The area of the resulting parallelogram is the determinant of the matrix.
Likewise we can compute the determinant of a $3\times 3$ matrix by graphing the image of the unit cube in $\mathbb{R}^3$ and calculating the volume of the parallelepiped image. This technique becomes exceedingly difficult for higher-dimensional parallelepipeds. We have so far only computed the determinant of a diagonal matrix; if the matrix is not diagonal, computing the volume of the image parallelepiped is more difficult still. We need a better technique for computing determinants.
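Since the image of the unit square is the parallelogram spanned by the columns of $A$, its signed area can be computed directly. This NumPy sketch compares that area with the determinant:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# The unit square maps to the parallelogram spanned by the columns of A.
u, v = A[:, 0], A[:, 1]

# Signed area of that parallelogram: u1*v2 - u2*v1.
signed_area = u[0] * v[1] - u[1] * v[0]
print(signed_area, np.linalg.det(A))
```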
3.1.5 Properties of Determinants¶
People and computer programs calculate determinants very differently. There are two main techniques for people to compute the determinant of an $n\times n$ matrix.
- Computing a determinant by simplifying the matrix using the following properties of determinants.
- The Laplace Expansion
In your homework, quizzes, or exams, always use the first method of computing the determinant of an $n\times n$ matrix. There are only a few instances where you may use the Laplace Expansion and receive credit for solving the problem. The Laplace Expansion, and the limited number of cases in which you may use it, will be explained in the next section.
Students often confuse the Laplace Expansion with the definition of the determinant of a matrix. The formal definition of determinant is given by
Definition of Determinant¶
The determinant of an $n\times n$ matrix is a function $\ \text{det}:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}$ that satisfies the following three properties:
- det$\ (I_n) = 1$, where $\ I_n$ is the $n\times n$ identity matrix
- The determinant is alternating
- The determinant is multilinear
Since these are the defining properties of the determinant, we will study them first.
3.1.6 Property 1 - The Determinant of the Identity¶
The identity matrix represents the identity linear transformation, which does not change volumes at all, so the scaling factor for the identity matrix is $1$. Matrix $I_2$ is the $2\times 2$ identity matrix
$$ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. $$
The identity map sends $\ihat$ to $\ihat$ and $\jhat$ to $\jhat$. No change in area occurs. Matrix $I_3$ is a $3\times 3$ matrix
$$ I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
The identity map sends $\ihat$ to $\ihat$, $\jhat$ to $\jhat$ and $\khat$ to $\khat$. No change in the volume occurs. Matrix $I_n$ is an $n\times n$ matrix
$$ I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}. $$
The identity map sends $\mathbf{e}_1$ to $\mathbf{e}_1$, $\mathbf{e}_2$ to $\mathbf{e}_2$, $\cdots$, and $\mathbf{e}_n$ to $\mathbf{e}_n$. No change in the volume occurs. Notice the use of the word volume for dimensions greater than 2; we use volume as a generic term for $n$-dimensional content, which includes area when $n = 2$.
Property 1¶
The $n\times n$ identity matrix $I_n$ has determinant equal to one.
$$ \det\left(I_n\right) = \left|I_n\right| = \left|\,\delta_{ij}\,\right| = 1 $$
Similarly, the zero linear transformation on $\mathbb{R}^n$ is the linear transformation that maps every vector in $\mathbb{R}^n$ to the zero vector. It maps $\mathbf{e}_1, \mathbf{e}_2,\cdots,\mathbf{e}_n$ all to $\mathbf{0}$. Thus all of the columns of the zero matrix are $\mathbf{0}$. We denote the zero matrix using the pattern notation, $[0]$, since all of the elements of the zero matrix are zero scalars. The image of any region in $\mathbb{R}^n$ is a single point at the origin. As the volume of a point is zero we have
$$ \text{det}\left[0\right] = \left|[0]\right| = 0 $$
3.1.7 Property 2 - Alternating¶
Let us consider matrix $A$ defined as before
$$A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}.$$
This is the matrix used in our video and the determinant is $6$ because $\ihat$ is stretched by $3$ to $3\ihat$, and $\jhat$ is stretched by $2$ to $2\jhat$. This leaves us with a rectangle with base $3$ and height $2$. This new area represents the image of the unit square under the transformation $A$. The new area is $3\times 2 = 6$ so the determinant is $6$.
Now let us flip the rows so that we have a new matrix
$$B = \begin{bmatrix} 0 & 2 \\ 3 & 0 \end{bmatrix}$$
This matrix sends $\ihat$ to $3\jhat$ and $\jhat$ to $2\ihat$. The linear transformation represented by matrix $B$ changes the orientation of the basis vectors $\ihat$ and $\jhat$ so that $B\,\jhat$ is to the right of $B\,\ihat$ instead of to the left.
The determinant of matrix $B$ is
$$ \text{det}(B) = |B| = \begin{vmatrix} 0 & 2 \\ 3 & 0 \end{vmatrix} = -6 $$
We can now compute the determinant of any elementary permutation matrix, that is, any type I elementary matrix. Since an elementary permutation matrix $E$ exchanges exactly one pair of rows of the identity matrix, it changes the orientation of the domain; thus
$$ \text{det}(E)=|E|= -1 $$
We can also compute the determinant of any permutation matrix $P$ by counting the number of row exchanges.
Property 2¶
If permutation matrix $P$ exchanges $n$ pairs of rows of the identity matrix, then
$$ \text{det}(P) = |P| = \left\{\begin{array}{rcl}\ \ 1 & \ & \text{if $n$ is even} \\ -1 & \ & \text{if $n$ is odd} \end{array} \right. = (-1)^n $$
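We can verify the sign pattern numerically. In this NumPy sketch, `permutation_matrix` is a hypothetical helper (our own, not a standard library function) that applies a list of row exchanges to the identity matrix:

```python
import numpy as np

def permutation_matrix(n, swaps):
    """Apply the given list of row exchanges (i, j) to the n x n identity."""
    P = np.eye(n)
    for i, j in swaps:
        P[[i, j]] = P[[j, i]]   # exchange rows i and j
    return P

# One exchange gives determinant -1; two exchanges give +1.
P1 = permutation_matrix(3, [(0, 1)])
P2 = permutation_matrix(4, [(0, 3), (1, 2)])
print(round(np.linalg.det(P1)), round(np.linalg.det(P2)))
```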
Exercise 3¶
Compute the determinant of the $4\times 4$ permutation matrix that exchanges rows 1 and 4, and rows 2 and 3:
$$
P = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}
$$
View Solution
This constitutes two row exchanges, so the determinant of $P$ is
$$ \text{det}(P) = |P| = (-1)^2 = 1 $$
3.1.8 Property 3(a) - Multilinear¶
The Multilinearity Property contains two parts
3(a) scalar multiplication of a single row
3(b) addition of a single row
Let us return to our previous example $A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$ and compare it to $B = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}$
Notice that multiplying only one row of the matrix (one side of the image rectangle) by two multiplies the area by two as well.
$$ \text{det}(B) = \begin{vmatrix} 3 & 0 \\ 0 & 4 \end{vmatrix} = \begin{vmatrix} 3 & 0 \\ 2\cdot 0 & 2\cdot 2 \end{vmatrix} = 2\begin{vmatrix} 3 & 0 \\ 0 & 2 \end{vmatrix} = 2\cdot 6 = 12 $$
Property 3(a)¶
If the elements of a row of a matrix have a common factor $t$, say $$ A = \begin{bmatrix} ta & tb \\ c & d \end{bmatrix}, $$
then $$ \text{det}(A) = \begin{vmatrix} ta & tb \\ c & d \end{vmatrix} = t\begin{vmatrix} a & b \\ c & d \end{vmatrix} $$
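A numerical illustration of Property 3(a) on a randomly generated matrix (a NumPy sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
t = 5.0

B = A.copy()
B[1] *= t   # scale a single row by t

# det(B) = t * det(A): scaling one row scales the determinant by the same factor.
print(bool(np.isclose(np.linalg.det(B), t * np.linalg.det(A))))
```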
We can now compute the determinant of a type II elementary matrix.
For example
$$
\begin{align*}
\begin{vmatrix} 3 & 0 \\ 0 & 1 \end{vmatrix} &= 3\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 3\cdot 1 = 3 \\
\\
\begin{vmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{vmatrix} &= 4\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 4\cdot 1 = 4
\end{align*}
$$
Given an $n\times n$ type II elementary matrix and non-zero scalar $\alpha$
$$
\begin{vmatrix} 1 & 0 & \cdots & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \vdots \\
0 & 0 & \cdots & \alpha & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & \cdots & 0 & 1 \end{vmatrix} =
\alpha|I| = \alpha\cdot 1 = \alpha
$$
Exercise 4¶
Use Property 3(a) and Property 2 to compute the determinant of matrix
$$
A = \begin{bmatrix} 0 & 3 \\ -4 & 0 \end{bmatrix}
$$
View Solution
$$ \begin{align*} \left|A\right| &= \begin{vmatrix} 0 & 3 \\ -4 & 0 \end{vmatrix} \\ \\ &= \begin{vmatrix} 3\cdot 0 & 3\cdot 1 \\ -4 & 0 \end{vmatrix} \\ \\ &= 3\begin{vmatrix} 0 & 1 \\ -4\cdot 1 & -4\cdot 0 \end{vmatrix} \\ \\ &= 3(-4)\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} \\ \\ &= 3(-4)(-1) = 12 \end{align*} $$
3.1.9 Property 3(b) - Multilinear¶
Property 3(b)¶
If a single row of a matrix consists of a sum of two rows,
$$ A = \begin{bmatrix} a + {\color{darkblue}\hat{a}} & b + {\color{darkblue}\hat{b}} \\ c & d \end{bmatrix} $$
then the determinant of matrix $A$ can be written as a
sum of two determinants
$$ \text{det}(A) = |A| = \begin{vmatrix} a + {\color{darkblue}\hat{a}} & b + {\color{darkblue}\hat{b}} \\ c & d \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} {\color{darkblue}\hat{a}} & {\color{darkblue}\hat{b}} \\ c & d \end{vmatrix} $$
Notice that all of the other rows must be identical for both terms of the sum.
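Property 3(b) can likewise be checked numerically. In this NumPy sketch, the row $\begin{bmatrix}\hat{a} & \hat{b}\end{bmatrix}$ of the statement is a randomly generated row `r`:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
r = rng.standard_normal(3)

B = A.copy()
B[0] = r            # same as A except the first row is r

C = A.copy()
C[0] = A[0] + r     # first row of C is the sum of the first rows of A and B

# det(C) = det(A) + det(B): the determinant is additive in a single row.
print(bool(np.isclose(np.linalg.det(C), np.linalg.det(A) + np.linalg.det(B))))
```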
Exercise 5¶
Use properties 1-3 of the determinant to compute $|A|$ where
$$
A = \begin{bmatrix} 5 & 2 \\ 0 & 2 \end{bmatrix}
$$
View Solution
$$ \begin{align*} \left|A\right| = \begin{vmatrix} 5 & 2 \\ 0 & 2 \end{vmatrix} &= \begin{vmatrix} 3+{\color{darkblue}2} & 0+{\color{darkblue}2} \\ 0 & 2 \end{vmatrix} \\ \\ &= \begin{vmatrix} 3 & 0 \\ 0 & 2 \end{vmatrix} + \begin{vmatrix} {\color{darkblue}2} & {\color{darkblue}2} \\ 0 & 2 \end{vmatrix} \\ \\ &= \begin{vmatrix} 3 & 0 \\ 0 & 2 \end{vmatrix} + \begin{vmatrix} 2+{\color{darkblue}0} & 0 + {\color{darkblue}2} \\ 0 & 2 \end{vmatrix} \\ \\ &= \begin{vmatrix} 3 & 0 \\ 0 & 2 \end{vmatrix} + \begin{vmatrix} 2 & 0 \\ 0 & 2 \end{vmatrix} + \begin{vmatrix} {\color{darkblue}0} &{\color{darkblue}2} \\ 0 & 2 \end{vmatrix} \\ \\ &= 3\begin{vmatrix} 1 & 0 \\ 0 & 2 \end{vmatrix} + 2\begin{vmatrix} 1 & 0 \\ 0 & 2 \end{vmatrix} + 0 \\ \\ &= 3(2)\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + 2(2)\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + 0 \\ \\ &= 3(2)(1) + 2(2)(1) + 0 = 6 + 4 + 0 = 10 \end{align*} $$
Matrix $\begin{bmatrix} 0 & 2 \\ 0 & 2 \end{bmatrix}$ maps $\ihat$ to the zero vector $\mathbf{0}$ and $\jhat$ to the vector $\begin{bmatrix} 2 \\ 2 \end{bmatrix}$. This matrix squishes the unit square onto a line from $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ to $\begin{bmatrix} 2 \\ 2 \end{bmatrix}$. Since a line has no area in $\mathbb{R}^2$ the determinant is zero.
3.1.10 Computing a Determinant¶
From the three properties above we can conclude all of the other properties and formulas for determinants. For example we can now derive the equation for the determinant of any $2\times 2$ matrix. Let matrix $A$ be defined by
$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$
then the determinant of matrix $A$ becomes
$$ \begin{align*} \text{det}(A) = |A| &= \begin{vmatrix} a & b \\ c & d \end{vmatrix} \\ \\ &= \begin{vmatrix} a + {\color{darkblue}0} & 0 + {\color{darkblue}b} \\ c & d \end{vmatrix} \\ \\ &= \begin{vmatrix} a & 0 \\ c & d \end{vmatrix} + \begin{vmatrix} 0 & b \\ c & d \end{vmatrix} \qquad &\text{Property 3(b)} \\ \\ &= \begin{vmatrix} a & 0 \\ c & d \end{vmatrix} + (-1) \begin{vmatrix} {\color{darkblue} c} & {\color{darkblue} d} \\ {\color{darkblue} 0} & {\color{darkblue} b} \end{vmatrix} &\text{Property 2} \\ \\ &= \begin{vmatrix} a & 0 \\ 0 + {\color{darkblue}c} & d + {\color{darkblue}0} \end{vmatrix} - \begin{vmatrix} c + {\color{darkblue}0} & 0 + {\color{darkblue}d} \\ 0 & b \end{vmatrix} \\ \\ &= \begin{vmatrix} a & 0 \\ 0 & d \end{vmatrix} + \begin{vmatrix} a & 0 \\ {\color{darkblue}c} & {\color{darkblue}0} \end{vmatrix} - \begin{vmatrix} c & 0 \\ 0 & b \end{vmatrix} - \begin{vmatrix} {\color{darkblue}0} & {\color{darkblue}d} \\ 0 & b \end{vmatrix} \qquad &\text{Property 3(b)} \\ \\ &= {\color{darkblue}a}\begin{vmatrix} 1 & 0 \\ 0 & d \end{vmatrix} + 0 - {\color{darkblue}c}\begin{vmatrix} 1 & 0 \\ 0 & b \end{vmatrix} - 0 \qquad &\text{Property 3(a)} \\ \\ &= a{\color{darkblue}d}\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} - c{\color{darkblue}b}\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} \qquad &\text{Property 3(a)} \\ \\ &= ad - bc \qquad\qquad\qquad &\text{Property 1} \end{align*} $$
The Laplace Expansion of a $2\times 2$ matrix¶
The determinant of a $2\times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is given by the formula
$$ \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc $$
This is an example of the Laplace expansion of the determinant of a matrix. We will discuss the Laplace expansion of the determinant in the next section. Computer algorithms never use the Laplace expansion to compute a determinant.
On an exam, quiz or homework assignment there are a small number of instances where the Laplace expansion can simplify the computation of a determinant. Computing the determinant of a $2\times 2$ matrix is one of these instances.
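The $2\times 2$ formula can be packaged as a small function and compared against a library determinant; `det2` here is our own illustrative helper, not a standard routine:

```python
import numpy as np

def det2(a, b, c, d):
    # Laplace expansion for a 2x2 determinant: ad - bc
    return a * d - b * c

A = np.array([[5.0, 2.0],
              [0.0, 2.0]])
print(det2(*A.ravel()))   # the entries are a, b, c, d in row-major order
print(bool(np.isclose(np.linalg.det(A), det2(*A.ravel()))))
```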
3.1.11 Property 4¶
Consider matrix $B$ given by
$$ B = \begin{bmatrix} -1 & \ \ 4\ &\ \ 8\ \\ \ \ 1\ &\ \ 8\ & -1\ \\ -1\ &\ \ 4\ &\ \ 8\ \end{bmatrix} $$
We can use property 2 to exchange rows 1 and 3 and obtain
$$ |B| = \begin{vmatrix} -1 & \ \ 4\ &\ \ 8\ \\ \ \ 1\ &\ \ 8\ & -1\ \\ -1\ &\ \ 4\ &\ \ 8\ \end{vmatrix} = -\begin{vmatrix} -1 & \ \ 4\ &\ \ 8\ \\ \ \ 1\ &\ \ 8\ & -1\ \\ -1\ &\ \ 4\ &\ \ 8\ \end{vmatrix} = -|B| $$
In other words, exchanging the first and third rows using property 2 gives us the negative of the original determinant; however, the first and third rows are identical, so the matrix is unchanged. This algebra results in the equation
$$ |B| = -|B| $$
There is only one real (or complex) scalar equal to its own negative: zero. Hence $|B| = 0$. Notice that we used property 2 to derive property 4.
Property 4¶
If an $n\times n$ matrix $A$ has two equal rows, then the determinant $|A| = 0$.
3.1.12 Property 5¶
Property 2 tells us how a type I row operation affects the value of a determinant and property 3(a) informs us of the effect of a type II row operation. Let us consider a type III row operation. If we add a nonzero multiple $\alpha$ of row 1 to row 2 of a $2\times 2$ matrix
$$ B = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$
we obtain a new matrix
$$ C = \begin{bmatrix} a & b \\ c + \alpha a & d + \alpha b \end{bmatrix} $$
Computing the determinant of matrix $C$ one obtains
$$ \begin{align*} |C| &= \begin{vmatrix} a & b \\ c + \alpha a & d + \alpha b \end{vmatrix} \\ \\ &= \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b \\ \alpha a & \alpha b \end{vmatrix} \qquad &\text{Property 3(b)} \\ \\ &= \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \alpha\begin{vmatrix} a & b \\ a & b \end{vmatrix} \qquad &\text{Property 3(a)} \\ \\ &= \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \alpha\cdot 0 \qquad &\text{Property 4} \\ \\ &= |B| \end{align*} $$
Property 5¶
Performing a type III row operation on a matrix does not change the value of the determinant!
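A numerical illustration of Property 5 on a randomly generated matrix (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

B = A.copy()
B[2] += 7.0 * A[0]   # type III row operation: R3 <- R3 + 7 R1

# A type III row operation leaves the determinant unchanged.
print(bool(np.isclose(np.linalg.det(A), np.linalg.det(B))))
```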
Exercise 6¶
Use properties 1-5 of the determinant to compute $|A|$ where
$$
A = \begin{bmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{bmatrix}
$$
View Solution
$$ \begin{align*} \begin{vmatrix}\ \ 2 & 6 & 2 \\ -2 & 6 & 3 \\ \ \ 8 & 36 & 0\ \end{vmatrix} &= \begin{vmatrix}\ \ 2 & 6 & 2 \\ \ \ 0 & 12 & 5 \\ \ \ 8 & 36 & 0\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_2 + R_1 \end{array} \\ \\ &= \begin{vmatrix}\ \ 2 & 6 & 2 \\ \ \ 0 & 12 & 5 \\ \ \ 0 & 12 & -8\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_3 - 4R_1 \end{array} \\ \\ &= \begin{vmatrix}\ \ 2 & 6 & 2 \\ \ \ 0 & 12 & 5 \\ \ \ 0 & 0 & -13\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_3 - R_2 \end{array} \\ \\ &= \begin{vmatrix}\ \ 2 & 6 & 2 \\ \ \ 0 & 12 & 0 \\ \ \ 0 & 0 & -13\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_2 + \frac{5}{13}R_3 \end{array} \\ \\ &= \begin{vmatrix}\ \ 2 & 6 & 0 \\ \ \ 0 & 12 & 0 \\ \ \ 0 & 0 & -13\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_1 + \frac{2}{13}R_3 \end{array} \\ \\ &= \begin{vmatrix}\ \ 2 & 0 & 0 \\ \ \ 0 & 12 & 0 \\ \ \ 0 & 0 & -13\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 5} \\ R_1 - \frac{1}{2}R_2 \end{array} \\ \\ &= 2(12)(-13)\begin{vmatrix}\ \ 1 & 0 & 0 \\ \ \ 0 & 1 & 0 \\ \ \ 0 & 0 & 1\ \end{vmatrix} \qquad &\begin{array}{l} \text{Property 3(a)} \\ \end{array} \\ \\ &= 2(12)(-13)(1) = 24(-13) = -312 \end{align*} $$
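As a sanity check of the hand computation above (NumPy's determinant routine uses row reduction internally, in the same spirit as Property 5):

```python
import numpy as np

A = np.array([[ 2,  6,  2],
              [-2,  6,  3],
              [ 8, 36,  0]])
print(round(np.linalg.det(A)))   # -312
```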
3.1.13 Property 6¶
Let us consider the matrix
$$ B = \begin{bmatrix} 0 & 0 \\ 3 & 2 \end{bmatrix} $$
If we use property 3(a)
$$ \begin{align*} |B| &= \begin{vmatrix} 0 & 0 \\ 3 & 2 \end{vmatrix} \\ \\ &= \begin{vmatrix} 5280\cdot 0 & 5280\cdot 0 \\ 3 & 2 \end{vmatrix} \\ \\ &= 5280\begin{vmatrix} 0 & 0 \\ 3 & 2 \end{vmatrix} \\ \\ &= 5280\cdot|B| \end{align*} $$
Subtracting $|B|$ from both sides yields
$$ 0 = 5279\cdot|B| $$
Hence $|B| = 0$.
If matrix $A$ is any $n\times n$ matrix with a row of zeros, then we can use property 3(a) with any scalar $\alpha$ not equal to 0 or 1 to obtain
$$ (\alpha -1)|A| = 0 $$
Property 6¶
If an $n\times n$ matrix $A$ has a row of zeros, then $|A| = 0$.
Notice that this means that if one row is a linear combination of the other rows, then we can use type III row operations to reduce the linearly dependent row to a row of zeros without changing the determinant.
Corollary 6¶
If one row in an $n\times n$ matrix $A$ is a linear combination of the other rows, then $|A|=0$.
3.1.14 Property 7¶
Consider a diagonal matrix
$$ A = \begin{bmatrix} a_{11} & 0 & \cdots & 0 & 0 \\ 0 & a_{22} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_{n-1,n-1} & 0 \\ 0 & 0 & \cdots & 0 & a_{nn} \end{bmatrix} $$
Using property 3(a) we obtain
$$ \begin{align*} |A| &= \begin{vmatrix} a_{11} & 0 & \cdots & 0 & 0 \\ 0 & a_{22} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_{n-1,n-1} & 0 \\ 0 & 0 & \cdots & 0 & a_{nn} \end{vmatrix} \\ \\ &= a_{11}\begin{vmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & a_{22} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_{n-1,n-1} & 0 \\ 0 & 0 & \cdots & 0 & a_{nn} \end{vmatrix} \\ \\ &= a_{11}a_{22}\begin{vmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_{n-1,n-1} & 0 \\ 0 & 0 & \cdots & 0 & a_{nn} \end{vmatrix} \\ \\ &\ddots \\ \\ &= a_{11}a_{22}\cdots a_{n-1,n-1} \begin{vmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & a_{nn} \end{vmatrix} \\ \\ &= a_{11}a_{22}\cdots a_{n-1,n-1}a_{nn} \begin{vmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{vmatrix} \\ \\ &= a_{11}a_{22}\cdots a_{n-1,n-1}a_{nn}\left|I_n\right| = \displaystyle\prod_{k=1}^n a_{kk} \end{align*} $$
Let us consider an upper triangular matrix
$$ B = \begin{bmatrix} b_{11} & * & \cdots & * & * \\ 0 & b_{22} & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & b_{n-1,n-1} & * \\ 0 & 0 & \cdots & 0 & b_{nn} \end{bmatrix} $$
Here the splat $*$ indicates the placement of a real number. We are running out of letters, so in linear algebra we use the splat to represent an element of a matrix whose value is unimportant to the property in question. The elements of the lower triangle are all zero; this defines matrix $B$ as an upper triangular matrix. The elements in the upper triangle can be any real numbers without affecting our result.
The important consideration here is that we may use type III row operations to reduce the matrix so that there are zeros above each nonzero diagonal element without affecting the value of the determinant.
Two results may occur:
1. If all of the diagonal elements are nonzero then we will reduce matrix $B$ to a diagonal matrix without changing the value of the determinant so
$$ \begin{align*} |B| &= \begin{vmatrix} b_{11} & * & \cdots & * & * \\ 0 & b_{22} & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & b_{n-1,n-1} & * \\ 0 & 0 & \cdots & 0 & b_{nn} \end{vmatrix} \\ \\ &= \begin{vmatrix} b_{11} & 0 & \cdots & 0 & 0 \\ 0 & b_{22} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & b_{n-1,n-1} & 0 \\ 0 & 0 & \cdots & 0 & b_{nn} \end{vmatrix} \\ \\ &= \displaystyle\prod_{k=1}^n b_{kk} \end{align*} $$
2. If there are diagonal elements that are equal to zero, then there is a zero diagonal element with largest index $k$, $b_{kk}$. Our determinant becomes
$$ |B| = \begin{vmatrix} b_{11} & * & \cdots & * & * \\ 0 & b_{22} & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & * \\ 0 & 0 & \cdots & 0 & b_{nn} \end{vmatrix} = \begin{vmatrix} b_{11} & 0 & \cdots & * & 0 \\ 0 & b_{22} & \cdots & * & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & b_{nn} \end{vmatrix} = 0 $$
The last zero diagonal element will find itself in a row of zeros; hence by property 6, the value of the determinant is zero. The result is that the determinant of this upper triangular matrix is still the product of the diagonal elements.
If matrix $C$ is a lower triangular matrix then the same reasoning will yield the same result.
Property 7¶
The determinant of an $n\times n$ upper triangular, lower triangular, or diagonal matrix is equal to the product of its diagonal elements.
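Property 7 can be confirmed numerically on a randomly generated upper triangular matrix (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
U = np.triu(rng.standard_normal((5, 5)))   # zero out the lower triangle

# det of a triangular matrix equals the product of its diagonal entries.
print(bool(np.isclose(np.linalg.det(U), np.prod(np.diag(U)))))
```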
Exercise 7¶
Compute the determinant of the following matrix.
$$
A = \begin{bmatrix}\ \ 2\ &\ \ 3\ &\ 12\ & -2\ \\ 0 &\ \ 3\ & -1\ &\ \ 7\ \\ \ \ 0\ &\ \ 0\ & \ \ 8 &\ 36\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & -3\end{bmatrix}
$$
View Solution
$$ |A| = \begin{vmatrix}\ \ 2\ &\ \ 3\ &\ 12\ & -2\ \\ 0 &\ \ 3\ & -1\ &\ \ 7\ \\ \ \ 0\ &\ \ 0\ & \ \ 8 &\ 36\ \\ \ \ 0\ &\ \ 0\ &\ \ 0\ & -3\end{vmatrix} = 2(3)(8)(-3) = 6(-24) = -144 $$
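Assuming NumPy is available (it is not part of the course materials), we can verify Property 7 numerically on the matrix from Exercise 7 by comparing the product of the diagonal elements with the determinant:

```python
import numpy as np

# The upper triangular matrix from Exercise 7.
A = np.array([[2.0, 3.0, 12.0, -2.0],
              [0.0, 3.0, -1.0,  7.0],
              [0.0, 0.0,  8.0, 36.0],
              [0.0, 0.0,  0.0, -3.0]])

# Property 7: the determinant of a triangular matrix is
# the product of its diagonal elements.
diag_product = np.prod(np.diag(A))   # 2 * 3 * 8 * (-3) = -144
det = np.linalg.det(A)

print(diag_product, det)
```

Both values agree with the hand computation above, up to floating point roundoff in `np.linalg.det`.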
3.1.15 Property 8¶
Property 8¶
The determinant of an $n\times n$ matrix is equal to zero if and only if the matrix is singular.
So far we have observed that an $n\times n$ matrix has linearly independent columns when all of the columns are pivot columns. If matrix $A$ is an $n\times n$ matrix with $n$ pivot columns, then the matrix has an inverse, and we can compute the inverse by partitioning the matrix $A$ with the $n\times n$ identity matrix $I_n$
$$ \left[\,A\,|\,I_n\,\right] $$
Since all of the columns of matrix $A$ are pivot columns, matrix $A$ is row equivalent to $I_n$, so we can perform row operations on the partitioned matrix $\left[\,A\,|\,I_n\,\right]$ until the left partition equals $I_n$. This yields
$$ \left[\,I_n\,|\,A^{-1}\,\right] $$
Thus an $n\times n$ matrix with full rank (all of its columns are pivot columns) is row equivalent to $I_n$ and invertible. We can use properties 1 through 7 to reduce the matrix to upper triangular form, and all of the diagonal elements will be nonzero. Thus the determinant of the matrix will also be nonzero by property 7.
We call such a matrix nonsingular.
Theorem 1¶
The following are equivalent statements about $n\times n$ matrix $A$:
- All of the columns of matrix $A$ are pivot columns
- Matrix $A$ is row equivalent to $I_n$
- Matrix $A$ is invertible; that is, there is an $n\times n$ matrix $A^{-1}$ so that $AA^{-1} = A^{-1}A = I_n$.
- The determinant of matrix $A$ is nonzero
- Matrix $A$ is nonsingular
We can negate each statement in Theorem 1 to obtain
Corollary 1¶
The following are equivalent statements about $n\times n$ matrix $A$:
- Matrix $A$ has at least one free column
- Matrix $A$ is not row equivalent to $I_n$
- Matrix $A$ has no inverse
- The determinant of matrix $A$ is equal to zero
- Matrix $A$ is singular
Exercise 8¶
Determine if the following matrix is invertible.
$$
A = \begin{bmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{bmatrix}
$$
View Solution
$$ |A| = \begin{vmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{vmatrix} = -312 \neq 0 $$
Since the determinant of matrix $A$ is nonzero, matrix $A$ is nonsingular and thus invertible.
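Assuming NumPy is available, a quick numerical check of Exercise 8 confirms both the determinant and the invertibility it implies (Theorem 1):

```python
import numpy as np

A = np.array([[ 2.0,  6.0, 2.0],
              [-2.0,  6.0, 3.0],
              [ 8.0, 36.0, 0.0]])

det_A = np.linalg.det(A)     # expected -312, nonzero
A_inv = np.linalg.inv(A)     # exists because det_A is nonzero

# A A^{-1} should be the 3x3 identity, confirming A is nonsingular.
print(det_A)
print(np.round(A @ A_inv, 10))
```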
3.1.16 Property 9¶
Property 9¶
The determinant of a product of $n\times n$ matrices is the product of the determinants of the matrices.
If you watched the video by Grant Sanderson, you will recognize that the product of two $n\times n$ matrices encodes the composition of two linear transformations
$$ AB\mathbf{x} = A\left(B\mathbf{x}\right) $$
When we calculate the determinant of the composition $AB$ we compute the scaling factor of the product. However, we can also think of this as the scaling factor of matrix $A$ applied to the scaling factor of matrix $B$. That is, we apply matrix $B$ to the unit square $D$ to obtain a parallelogram $B(D)$ with area $|B|$. Then we apply matrix $A$ to the parallelogram $B(D)$ to obtain a new parallelogram $A(B(D))$ with area $|A|\,|B|$.
$$ \text{det}(AB) = \text{det}(A)\,\text{det}(B) $$
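Assuming NumPy is available, Property 9 can be checked numerically on a pair of randomly generated matrices (the random seed here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Property 9: det(AB) = det(A) det(B)
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(lhs, rhs)
```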
If matrix $A$ is a nonsingular matrix, then $|A|\neq 0$ and we can derive the determinant of $A^{-1}$ using this property.
$$ 1 = \text{det}(I_n) = \text{det}\left(A^{-1}A\right) = \text{det}\left(A^{-1}\right)\,\text{det}(A) $$
We can divide both sides of this equation by the nonzero number $\text{det}(A)$ to obtain
$$ \dfrac{1}{\text{det}(A)} = \text{det}\left(A^{-1}\right) $$
Exercise 9¶
What is the determinant of the inverse of the following matrix?
$$
A = \begin{bmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{bmatrix}
$$
View Solution
$$ |A| = \begin{vmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{vmatrix} = -312 $$
Thus
$$ \left|A^{-1}\right| = \frac{1}{-312} = -\frac{1}{312} $$
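Assuming NumPy is available, we can confirm the answer to Exercise 9 by computing the determinant of the inverse directly:

```python
import numpy as np

A = np.array([[ 2.0,  6.0, 2.0],
              [-2.0,  6.0, 3.0],
              [ 8.0, 36.0, 0.0]])

# det(A^{-1}) should equal 1 / det(A) = -1/312.
det_inv = np.linalg.det(np.linalg.inv(A))
print(det_inv, 1.0 / np.linalg.det(A))
```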
3.1.17 Property 10¶
Property 10¶
The determinant of an $n\times n$ matrix is equal to the determinant of its transpose.
Recall that we can decompose a matrix into the product of a lower triangular matrix and an upper triangular matrix. This is the $LU$-decomposition. That is, an $n\times n$ matrix $A$ that can be reduced without row exchanges may be factored into
$$ A = LU $$
where $L$ is a lower triangular matrix with 1 in each of the diagonal entries, and $U$ is an upper triangular matrix. Thus
$$ |A| = |LU| = |L|\cdot|U| = 1\cdot\displaystyle\prod_{k=1}^n u_{kk} = \displaystyle\prod_{k=1}^n u_{kk} $$
Now let's use the properties of determinants to compute the determinant of $A^T$.
$$ \left|A^T\right| = \left|\left(LU\right)^T\right| = \left|U^TL^T\right| = \left|U^T\right|\cdot\left|L^T\right| $$
Now, $L^T$ is an upper triangular matrix with 1 in each of the diagonal entries, and $U^T$ is a lower triangular matrix. So
$$ \left|A^T\right| = \left|U^T\right|\cdot\left|L^T\right| = \left(\displaystyle\prod_{k=1}^n u_{kk}\right)\cdot 1 = |A| $$
Exercise 10¶
Compute the determinant of $A^T$ where
$$
A = \begin{bmatrix}\ \ 2\ &\ \ 6\ &\ \ 2\ \\ -2\ &\ \ 6\ &\ \ 3\ \\ \ \ 8 &\ 36\ &\ \ 0\ \end{bmatrix}
$$
View Solution
$$ \left|A^T\right| = |A| = -312 $$
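Assuming NumPy is available, Property 10 is easy to check on the same matrix:

```python
import numpy as np

A = np.array([[ 2.0,  6.0, 2.0],
              [-2.0,  6.0, 3.0],
              [ 8.0, 36.0, 0.0]])

# Property 10: |A^T| = |A|
det_AT = np.linalg.det(A.T)
det_A = np.linalg.det(A)
print(det_AT, det_A)
```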
3.1.18 Column Properties¶
Property 10 tells us that every row property is also a column property.
Column Properties¶
If $A$ is an $n\times n$ matrix then
Property 2¶
If permutation matrix $P$ exchanges $k$ pairs of columns of the identity matrix, then
$$ \text{det}(P) = |P| = \left\{\begin{array}{rcl}\ \ 1 & \ & \text{if $k$ is even} \\ -1 & \ & \text{if $k$ is odd} \end{array} \right. = (-1)^k $$
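Assuming NumPy is available, we can watch the sign flip with each column exchange of the identity matrix:

```python
import numpy as np

# Start from I_4 and exchange one pair of columns:
# an odd number of swaps gives determinant -1.
P = np.eye(4)
P[:, [0, 1]] = P[:, [1, 0]]    # swap columns 0 and 1
print(np.linalg.det(P))        # -1.0

# A second exchange (even number of swaps) returns the
# determinant to +1.
P[:, [2, 3]] = P[:, [3, 2]]    # swap columns 2 and 3
print(np.linalg.det(P))
```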
Property 3(a)¶
If the elements of one column of matrix $A$ have a common factor $t$, say
$$ A = \begin{bmatrix} a & tb \\ c & td \end{bmatrix} $$
then
$$ \text{det}(A) = \begin{vmatrix} a & tb \\ c & td \end{vmatrix} = t\begin{vmatrix} a & b \\ c & d \end{vmatrix} $$
Property 3(b)¶
If a single column of a matrix $A$ consists of a sum of two columns,
$$ A = \begin{bmatrix} a & b + {\color{darkblue}\hat{b}} \\ c & d + {\color{darkblue}\hat{d}} \end{bmatrix} $$
then the determinant of matrix $A$ can be written as a
sum of two determinants
$$ \text{det}(A) = |A| = \begin{vmatrix} a & b + {\color{darkblue}\hat{b}} \\ c & d + {\color{darkblue}\hat{d}} \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & {\color{darkblue}\hat{b}} \\ c & {\color{darkblue}\hat{d}} \end{vmatrix} $$
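Assuming NumPy is available, the column version of Property 3(b) can be checked on a $2\times 2$ example (the specific values here are arbitrary):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
b_hat, d_hat = 5.0, 6.0

# Splitting the second column into a sum splits the
# determinant into a sum of two determinants.
lhs = np.linalg.det(np.array([[a, b + b_hat],
                              [c, d + d_hat]]))
rhs = (np.linalg.det(np.array([[a, b], [c, d]]))
       + np.linalg.det(np.array([[a, b_hat], [c, d_hat]])))
print(lhs, rhs)
```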
Property 4¶
If matrix $A$ has two equal columns, then the determinant $|A| = 0$.
Property 5¶
Performing a type III column operation on matrix $A$ does not change the value of the determinant!
Property 6¶
If matrix $A$ has a column of zeros, then $|A| = 0$.
Corollary 6¶
- If one column in matrix $A$ is a linear combination of the other columns, then $|A|=0$.
- If matrix $A$ has a free column, then $|A|=0$.
Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0