
Math 511: Linear Algebra

3.3 Applications of the Determinant


3.3.1 The Adjugate Matrix¶

$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{0.0, 0.0, 1.0} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \def\ihat{\mathbf{\hat{\unicode{x0131}}}} \def\jhat{\mathbf{\hat{\unicode{x0237}}}} \def\khat{\mathrm{\hat{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} $$

The Adjugate Matrix is the classical adjoint of an $n\times n$ matrix: the transpose of its cofactor matrix. For an $n\times n$ matrix $A$,

$$ \begin{align*} \text{adj}(A) &= \text{Cof}(A)^T \\ \\ &= \begin{bmatrix} \ A_{11}\ &\ A_{12}\ &\ \cdots\ &\ A_{1n}\ \\ \ A_{21}\ &\ A_{22}\ &\ \cdots\ &\ A_{2n}\ \\ \ \vdots\ &\ \vdots\ &\ \ddots\ &\ \vdots\ \\ \ A_{n1}\ &\ A_{n2}\ &\ \cdots\ &\ A_{nn}\ \end{bmatrix}^T \\ \\ &= \begin{bmatrix} \ A_{11}\ &\ A_{21}\ &\ \cdots\ &\ A_{n1}\ \\ \ A_{12}\ &\ A_{22}\ &\ \cdots\ &\ A_{n2}\ \\ \ \vdots\ &\ \vdots\ &\ \ddots\ &\ \vdots\ \\ \ A_{1n}\ &\ A_{2n}\ &\ \cdots\ &\ A_{nn}\ \end{bmatrix} \\ \\ \end{align*} $$

Recall that the cofactor of element $a_{ij}$ is

$$ A_{ij} = (-1)^{i+j}\left|M_{ij}\right| $$
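This definition translates directly into code. Below is a small NumPy sketch (the function name `adjugate` is our own choice, not from the text) that builds the cofactor matrix entry by entry and transposes it:

```python
import numpy as np

def adjugate(A):
    """adj(A) = Cof(A)^T, built entry by entry from cofactors."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete the i-th row and the j-th column.
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return cof.T

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
adjA = adjugate(A)   # [[4, -2], [-3, 1]]
```

For a $2\times 2$ matrix this reproduces the familiar pattern of swapping the diagonal entries and negating the off-diagonal ones.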


3.3.2 Computing the Determinant using the Adjugate¶

If $A$ is an $n\times n$ matrix, recall the Laplace expansion of the determinant of $A$ along the $i^{\text{th}}$ row of matrix $A$.

$$ \begin{align*} |A| &= \displaystyle\sum_{j=1}^n a_{ij}A_{ij} \\ \\ &= a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in} \\ \\ &= \begin{bmatrix} a_{i1}\ & a_{i2}\ & \cdots\ & a_{in}\ \end{bmatrix} \begin{bmatrix} A_{i1}\ \\ A_{i2}\ \\ \vdots\ \\ A_{in}\ \end{bmatrix} \\ \\ &= \mathbf{a}^i\left(\text{adj}\mathbf{(A)}\right)_i \end{align*} $$

The last expression is the $i^{\text{th}}$ row of matrix $A$ times the $i^{\text{th}}$ column of matrix adj$(A)$.

Similarly we can compute the determinant using the $j^{\text{th}}$ column of matrix $A$.

$$ \begin{align*} |A| &= \displaystyle\sum_{i=1}^n a_{ij}A_{ij} \\ \\ &= a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj} \\ \\ &= \begin{bmatrix} A_{1j} & A_{2j} & \cdots & A_{nj} \end{bmatrix} \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{bmatrix} \\ &= \left(\text{adj}\mathbf{(A)}\right)^j\,\mathbf{a}_j \end{align*} $$

What happens if we multiply the $i^{\text{th}}$ row of matrix $A$ times another column of the adjugate matrix adj$(A)$?

Remember that when we compute the Laplace expansion of the determinant of a matrix,

  • we use the factors $a_{ij}$ from a row or column,
  • however, none of the minors $M_{ij}$ used to compute the cofactors involve the elements of our selected row or column, because we obtain the minor matrix $M_{ij}$ by removing the $i^{\text{th}}$ row and $j^{\text{th}}$ column.

The key here is that the cofactors $A_{i1}, A_{i2}, \dots, A_{in}$ never use the values in the $i^{\text{th}}$ row we expand along, so those values can be anything. So let us create a new matrix $B$ that has all the same rows as matrix $A$ except the $i^{\text{th}}$ one.

We will pick one of the other rows, row $k$, where $k\neq i$, and copy row $\mathbf{a}^k$ into the $i^{\text{th}}$ row of matrix $B$. Then each row of $B$ is

$$ \mathbf{b}^m = \left\{\begin{array}{rcl} \mathbf{a}^m, & & m\neq i \\ \mathbf{a}^k, & & m=i \end{array}\right. $$

Recall we picked the $i^{\text{th}}$ row for our Laplace expansion, so

$$ B = \begin{bmatrix} \mathbf{a}^1 \\ \mathbf{a}^2 \\ \vdots \\ \mathbf{b}^i = \mathbf{a}^k \\ \vdots \\ \mathbf{a}^n \end{bmatrix} $$

This means that matrix $B$ has two rows equal to $\mathbf{a}^k$, the $k^{\text{th}}$ and the $i^{\text{th}}$ one, so the determinant of matrix $B$ is zero as it has two equal rows. This also means that computing the product of the $k^{\text{th}}$ row of matrix $A$ times the $i^{\text{th}}$ column of $\text{adj}(A)$ when $i\neq k$ yields

$$ \begin{align*} \mathbf{a}^k\left(\text{adj}\mathbf{(A)}\right)_i &= \begin{bmatrix} a_{k1}\ & a_{k2}\ & \cdots\ & a_{kn}\ \end{bmatrix} \begin{bmatrix} A_{i1}\ \\ A_{i2}\ \\ \vdots\ \\ A_{in}\ \end{bmatrix} \\ \\ &= \begin{bmatrix} b_{k1}\ & b_{k2}\ & \cdots\ & b_{kn}\ \end{bmatrix} \begin{bmatrix} B_{i1}\ \\ B_{i2}\ \\ \vdots\ \\ B_{in}\ \end{bmatrix} \\ \\ &= \begin{bmatrix} b_{i1}\ & b_{i2}\ & \cdots\ & b_{in}\ \end{bmatrix} \begin{bmatrix} B_{i1}\ \\ B_{i2}\ \\ \vdots\ \\ B_{in}\ \end{bmatrix} \\ \\ &= \mathbf{b}^i\left(\text{adj}\mathbf{(B)}\right)_i = |B| = 0 \end{align*} $$

Since this is true for any $k\neq i$, we have

$$ \mathbf{a}^i\left(\mathbf{\text{adj}(A)}\right)_k = \left\{\begin{array}{rcl} |A| &\ &\ i=k \\ 0 &\ &\ i\neq k \end{array}\right. $$

Theorem 3.3.1¶

The product of matrices $A$ and $\text{adj}(A)$ results in

$$ A\,\text{adj}(A) = \left[\,\mathbf{a}^i\left(\mathbf{\text{adj}(A)}\right)_k\,\right] = \left[\, |A|\delta_{ik}\,\right] = |A|I_n. $$
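Theorem 3.3.1 is easy to verify numerically. The sketch below (assuming NumPy; the helper simply applies the cofactor definition) checks that $A\,\text{adj}(A) = |A|\,I_n$ for a sample integer matrix:

```python
import numpy as np

def adjugate(A):
    """adj(A) = Cof(A)^T via cofactors."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)]).T

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)

# Theorem 3.3.1: A adj(A) = |A| I_n
product = A @ adjugate(A)
target = np.linalg.det(A) * np.eye(4)
```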

Notice that since a matrix is singular if and only if $\text{det}(A) = 0$, we have a nice corollary of this theorem.

Corollary 3.3.2¶

A matrix $A$ is singular if and only if the matrix product $ A\,\text{adj}(A) $ is the zero matrix.


3.3.3 Properties of the Adjugate Matrix¶

If matrix $A$ is nonsingular, then $|A|\neq 0$ so we have

$$ \begin{align*} A\,\text{adj}(A) = |A|\,I_n \\ \\ A\,\frac{1}{|A|}\,\text{adj}(A) = I_n \end{align*} $$

Theorem 3.3.3¶

If matrix $A$ is nonsingular, then

$$ A^{-1} = \dfrac{1}{|A|}\text{adj}(A). $$
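As a quick numerical sanity check of this formula (not part of the derivation), one can compare the adjugate-based inverse against NumPy's built-in inverse, here using the matrix that appears in Example 1 below:

```python
import numpy as np

def adjugate(A):
    """adj(A) = Cof(A)^T via cofactors."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)]).T

A = np.array([[-1.0, 3.0],
              [ 3.0, 2.0]])

# Theorem: A^{-1} = adj(A) / |A| when |A| != 0
A_inv = adjugate(A) / np.linalg.det(A)
```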

We can derive several more identities involving the adjugate matrix,

  1. The adjugate of the zero matrix is the zero matrix.

    $$ \text{adj}(\mathbf{0}) = \text{Cof}(\mathbf{0})^T = \mathbf{0}^T = \mathbf{0} $$

  2. The adjugate of the $n\times n$ identity matrix is the $n\times n$ identity matrix.

    $$ \text{adj}(I_n) = \text{Cof}(I_n)^T = \left[ \delta_{ij} \right]^T = I_n $$

  3. For every element $a_{ij}$ of $n\times n$ matrix $A$ and nonzero scalar $c$, the cofactor of element $\ ca_{ij}\ $ of matrix $cA$ is given by

    $$ \left(cA\right)_{ij} = (-1)^{i+j}\left|\,cM_{ij}\,\right| = c^{n-1}(-1)^{i+j}\left|\,M_{ij}\,\right| = c^{n-1}A_{ij} $$
    because $M_{ij}$ is an $(n-1)\times (n-1)$ matrix. Thus the cofactor matrix $\ \text{Cof}(cA) = \left[ c^{n-1}A_{ij} \right] = c^{n-1}\left[A_{ij}\right]$. Hence

    $$ \text{adj}(cA) = \text{Cof}(cA)^T = c^{n-1}\text{Cof}(A)^T = c^{n-1}\text{adj}(A) $$


  4. Notice that if $B = A^T$, the cofactor of $a_{ij}$ equals the cofactor of $b_{ji}$ because $(-1)^{i+j} = (-1)^{j+i}$ and the determinant of the $(n-1)\times(n-1)$ submatrix equals the determinant of its transpose. So

    $$ \begin{align*} \text{adj}\left(A^T\right) &= \text{Cof}\left(A^T\right)^T \\ \\ &= \left[ \left(A^T\right)_{ij} \right]^T \\ \\ &= \left[ A_{ji} \right]^T \\ \\ &= \left[ A_{ij} \right] = \text{adj}(A)^T \end{align*} $$

  5. If matrix $A$ is nonsingular, then $|A|\neq 0$ and

    $$ A^{-1}\,\text{adj}\left(A^{-1}\right) = \left|A^{-1}\right|\,I_n $$
    If one multiplies both sides of this equation on the left by matrix $A$,

    $$ \text{adj}\left(A^{-1}\right) = \dfrac{1}{\left|A\right|}\,A $$
    Moreover,

    $$ \text{adj}\left(A^{-1}\right)\text{adj}(A) = \dfrac{1}{\left|A\right|}\,A\,|A|\,A^{-1} = I_n $$
    So if matrix $A$ is invertible, then $\text{adj}(A)$ is invertible and

    $$ \left(\text{adj}(A)\right)^{-1} = \text{adj}\left(A^{-1}\right). $$




  6. Notice also that, since the cofactor $A_{ij} = (-1)^{i+j}\left|M_{ij}\right|$, taking complex conjugates gives

    $$ \begin{align*} \left(A_{ij}\right)^* &= \left((-1)^{i+j}\left| M_{ij} \right| \right)^* \\ \\ &= (-1)^{i+j}\left|\,M_{ij}^*\,\right| \\ \\ &= \left(A^*\right)_{ij} \end{align*} $$
    Thus the complex conjugate of the cofactor of element $\ a_{ij}\ $ in matrix $A$ is the cofactor of element $(i,j)$ of the conjugate of matrix $A$. Hence the adjugate of the conjugate of matrix $A$ is the conjugate of the adjugate matrix of $A$:

    $$ \text{adj}\left(A^*\right) = \left(\text{adj}(A)\right)^* $$


  7. Combining properties 4 and 6 gives us that the adjugate matrix of the Hermitian of matrix $A$ is the Hermitian of the adjugate of matrix $A$.

    $$ \text{adj}\left(A^{\dagger}\right) = \text{adj}\left(A^H\right) = \text{adj}(A)^H = \text{adj}(A)^{\dagger} $$

  8. Like the transpose, Hermitian, and inverse, the adjugate of a product of two matrices reverses the order of the factors. If $A$ and $B$ are nonsingular $n\times n$ matrices we have


    $$ \begin{align*} \text{adj}(AB) &= \text{det}(AB)\,(AB)^{-1} = |A||B|B^{-1}A^{-1} \\ \\ &= |B|B^{-1}\,|A|A^{-1} = \text{adj}(B)\,\text{adj}(A) \end{align*} $$
    We would need two results from analysis (the entries of $\text{adj}(AB)$ and $\text{adj}(B)\,\text{adj}(A)$ are continuous functions of the entries of $A$ and $B$, and the nonsingular matrices are dense in the set of all $n\times n$ matrices) to conclude that for any $n\times n$ matrices $A$ and $B$ we have

    $$ \text{adj}(AB) = \text{adj}(B)\,\text{adj}(A). $$



  9. From Property 8, for any positive integer $k$ we have

    $$ \text{adj}\left(A^k\right) = \left(\text{adj}(A)\right)^k $$

  10. In general, for two $n\times n$ matrices, matrix multiplication is not commutative. But if two $n\times n$ matrices commute

    $$ AB = BA $$
    then if we multiply both sides of this equation by the matrix $\text{adj}(A)$ on the left and right one obtains

    $$ \begin{align*} \text{adj}(A)(AB)\text{adj}(A) &= \text{adj}(A)(BA)\text{adj}(A) \\ \\ \left(\text{adj}(A)\,A\right)\left(B\,\text{adj}(A)\right) &= \left(\text{adj}(A)\,B\right)\left(A\,\text{adj}(A)\right) \\ \\ \left(|A|\,I_n\right)\left(B\,\text{adj}(A)\right) &= \left(\text{adj}(A)\,B\right)\left(|A|\,I_n\right) \\ \\ |A|\,B\,\text{adj}(A) &= |A|\,\text{adj}(A)\,B \end{align*} $$
    If matrix $A$ is nonsingular then
    $$ B\,\text{adj}(A) = \text{adj}(A)\,B $$
    If one multiplies both sides of the resulting equation on the left and right by $\text{adj}(B)$, one obtains

    $$ \begin{align*} \text{adj}(B)\,\left(B\,\text{adj}(A)\right)\,\text{adj}(B) &= \text{adj}(B)\,\left(\text{adj}(A)\,B\right)\,\text{adj}(B) \\ \\ \left(\text{adj}(B)\,B\right)\,\left(\text{adj}(A)\,\text{adj}(B)\right) &= \left(\text{adj}(B)\,\text{adj}(A)\right)\,\left(B\,\text{adj}(B)\right) \\ \\ \left(|B|\,I_n\right)\,\left(\text{adj}(A)\,\text{adj}(B)\right) &= \left(\text{adj}(B)\,\text{adj}(A)\right)\,\left(|B|\,I_n\right) \\ \\ |B|\,\text{adj}(A)\,\text{adj}(B) &= |B|\,\text{adj}(B)\,\text{adj}(A) \end{align*} $$
    If matrix $B$ is also nonsingular
    $$ \text{adj}(A)\,\text{adj}(B) = \text{adj}(B)\,\text{adj}(A) $$



  11. From all of these properties we have that if $n\times n$ matrix $A$ has any of the following attributes, then so does its adjugate:

    • Upper Triangular
    • Lower Triangular
    • Diagonal
    • Symmetric
    • Hermitian
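Several of these identities can be spot-checked numerically. The following sketch (our own check, reusing a cofactor-based helper) verifies properties 3, 4, 8, and 9 for sample matrices:

```python
import numpy as np

def adjugate(A):
    """adj(A) = Cof(A)^T via cofactors."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)]).T

rng = np.random.default_rng(1)
n = 3
A = rng.integers(-2, 3, size=(n, n)).astype(float)
B = rng.integers(-2, 3, size=(n, n)).astype(float)
c = 2.0

ok_scalar    = np.allclose(adjugate(c * A), c ** (n - 1) * adjugate(A))  # property 3
ok_transpose = np.allclose(adjugate(A.T), adjugate(A).T)                 # property 4
ok_product   = np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))   # property 8
ok_power     = np.allclose(adjugate(np.linalg.matrix_power(A, 2)),
                           np.linalg.matrix_power(adjugate(A), 2))       # property 9
```

Note that properties 3, 4, 8, and 9 hold for all square matrices, singular or not, so no invertibility check is needed here.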

3.3.4 Computing Inverses using the Adjugate¶

The adjugate of a $2\times 2$ matrix $A$ is simple to compute

$$ \begin{align*} \text{adj}(A) &= \text{adj}\left(\begin{bmatrix}\ a\ &\ b\ \\ \ c\ &\ d\ \end{bmatrix}\right) \\ \\ &= \text{Cof}\left(\begin{bmatrix}\ a\ &\ b\ \\ \ c\ &\ d\ \end{bmatrix}\right)^T \\ \\ &= \begin{bmatrix}\ d\ & -c\ \\ -b\ &\ a\ \end{bmatrix}^T \\ \\ &= \begin{bmatrix}\ d\ & -b\ \\ -c\ &\ a\ \end{bmatrix} \end{align*} $$

If $2\times 2$ matrix $A$ is nonsingular

$$ A^{-1} = \dfrac{1}{|A|}\text{adj}(A) = \dfrac{1}{|A|}\,\begin{bmatrix}\ d\ & -b\ \\ -c\ &\ a\ \end{bmatrix} $$

In order to compute the inverse of a $3\times 3$ matrix,

$$ A = \begin{bmatrix}\ a_{11}\ &\ a_{12}\ &\ a_{13}\ \\ \ a_{21}\ &\ a_{22}\ &\ a_{23}\ \\ \ a_{31}\ &\ a_{32}\ &\ a_{33}\ \end{bmatrix} $$

one has nine $2\times 2$ determinants to compute for the cofactors and a $3\times 3$ determinant for

$$ A^{-1} = \dfrac{1}{|A|}\begin{bmatrix}\ A_{11}\ &\ A_{21}\ &\ A_{31}\ \\ \ A_{12}\ &\ A_{22}\ &\ A_{32}\ \\ \ A_{13}\ &\ A_{23}\ &\ A_{33}\ \end{bmatrix} $$

The adjugate is not recommended for computing inverses of matrices larger than $3\times 3$.

Example 1¶

Compute the inverse of matrix $A = \begin{bmatrix} -1\ &\ 3\ \\ \ 3\ &\ 2\ \end{bmatrix}$

$$ \begin{align*} A^{-1} &= \dfrac{1}{|A|}\,\text{adj}(A) \\ \\ &= \dfrac{1}{(-1)(2) - (3)(3)}\begin{bmatrix}\ 2\ & -3\ \\ -3\ & -1\ \end{bmatrix} \\ \\ &= -\dfrac{1}{11}\begin{bmatrix}\ 2\ & -3\ \\ -3\ & -1\ \end{bmatrix} \\ \\ &= \begin{bmatrix} -\frac{2}{11}\ &\ \frac{3}{11}\ \\ \ \frac{3}{11}\ &\ \frac{1}{11} \end{bmatrix} \end{align*} $$

Example 2¶

Compute the inverse of matrix $A = \begin{bmatrix}\ 2\ &\ 1\ &\ 3\ \\ \ 2\ &\ 2\ &\ 1\ \\ \ 1\ &\ 3\ &\ 2\ \end{bmatrix}$

$$ \begin{align*} A^{-1} &= \dfrac{1}{|A|}\,\text{adj}(A) \\ \\ &= \dfrac{1}{11}\begin{bmatrix}\ \ \ \begin{vmatrix}\ 2\ &\ 1\ \\ \ 3\ &\ 2\ \end{vmatrix}\ & -\begin{vmatrix}\ 2\ &\ 1\ \\ \ 1\ &\ 2\ \end{vmatrix}\ &\ \ \ \begin{vmatrix}\ 2\ &\ 2\ \\ \ 1\ &\ 3\ \end{vmatrix}\ \\ -\begin{vmatrix}\ 1\ &\ 3\ \\ \ 3\ &\ 2\ \end{vmatrix}\ &\ \ \ \begin{vmatrix}\ 2\ &\ 3\ \\ \ 1\ &\ 2\ \end{vmatrix}\ & -\begin{vmatrix}\ 2\ &\ 1\ \\ \ 1\ &\ 3\ \end{vmatrix}\ \\ \ \ \ \begin{vmatrix}\ 1\ &\ 3\ \\ \ 2\ &\ 1\ \end{vmatrix}\ &\ -\begin{vmatrix}\ 2\ &\ 3\ \\ \ 2\ &\ 1\ \end{vmatrix}\ &\ \ \ \begin{vmatrix}\ 2\ &\ 1\ \\ \ 2\ &\ 2\ \end{vmatrix}\ \end{bmatrix}^T \\ \\ &= \dfrac{1}{11}\begin{bmatrix}\ 1\ & -3\ &\ 4\ \\ \ 7\ &\ 1\ & -5\ \\ -5\ &\ 4\ &\ 2\ \end{bmatrix}^T \\ \\ &= \begin{bmatrix}\ \frac{1}{11}\ &\ \frac{7}{11}\ & -\frac{5}{11}\ \\ -\frac{3}{11}\ &\ \frac{1}{11}\ &\ \frac{4}{11}\ \\ \ \frac{4}{11}\ & -\frac{5}{11}\ &\ \frac{2}{11}\ \end{bmatrix} \end{align*} $$
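A quick NumPy check of this computation (using the same cofactor-based adjugate helper as in the earlier sketches) confirms both the stated inverse and that $AA^{-1} = I_3$:

```python
import numpy as np

def adjugate(A):
    """adj(A) = Cof(A)^T via cofactors."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)]).T

A = np.array([[2.0, 1.0, 3.0],
              [2.0, 2.0, 1.0],
              [1.0, 3.0, 2.0]])
A_inv = adjugate(A) / np.linalg.det(A)

# The inverse computed by hand in Example 2.
expected = np.array([[ 1,  7, -5],
                     [-3,  1,  4],
                     [ 4, -5,  2]]) / 11
```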


3.3.5 Cramer's Rule¶

Video lesson: Cramer's Rule

For Cramer's Rule we need some more notation. Recall that for $n\times n$ matrix

$$ A = \begin{bmatrix}\ a_{11}\ &\ a_{12}\ &\ \cdots\ &\ a_{1n}\ \\ \ a_{21}\ &\ a_{22}\ &\ \cdots\ &\ a_{2n}\ \\ \ \vdots\ &\ \vdots\ &\ \ddots\ &\ \vdots\ \\ \ a_{n1}\ &\ a_{n2}\ &\ \cdots\ &\ a_{nn}\ \end{bmatrix} $$

we have

  • $a_{ij}$ is the element of matrix $A$ at position $(i,j)$
  • $\mathbf{a}^i$ is the $i^{\text{th}}$ row of matrix $A$
  • $\mathbf{a}_j$ is the $j^{\text{th}}$ column of matrix $A$
  • $A_{ij}$ is the cofactor of element $a_{ij}$

Let us consider $n\times n$ nonsingular matrix $A$, $n\times 1$ vector $\mathbf{b}$ and the linear system

$$ A\mathbf{x} = \mathbf{b} $$

For Cramer's Rule we need to replace the $j^{\text{th}}$ column of matrix $A$ with vector $\mathbf{b}$. For each column $1\le j\le n$, denote by $A_j$ the matrix obtained by replacing the $j^{\text{th}}$ column of matrix $A$ with vector $\mathbf{b}$. Confusing, isn't it?

Cramer's Rule gives us a method of computing the solution $\mathbf{x}$ by calculating each element of vector $\mathbf{x}$ as follows:

$$ x_j = \dfrac{\text{det}(A_j)}{\text{det}(A)} = \dfrac{|A_j|}{|A|} $$
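Cramer's Rule is straightforward to implement. The sketch below (the function name `cramer_solve` and the sample system are our own, for illustration) replaces one column at a time and divides determinants:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's Rule; A must be square and nonsingular."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                       # A_j: column j replaced by b
        x[j] = np.linalg.det(Aj) / detA    # x_j = |A_j| / |A|
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = cramer_solve(A, b)   # [2, 3]
```

Each entry of the solution costs one $n\times n$ determinant, which is why Cramer's Rule is used for small systems and theoretical arguments rather than large-scale computation.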

Example 3¶

Solve the linear system $\begin{bmatrix}\ 1\ &\ 1\ \\ \ 2\ &\ 4\ \end{bmatrix}\mathbf{x} = \begin{bmatrix}\ 4\ \\ \ 2\ \end{bmatrix}$

$$ \begin{align*} x_1 &= \dfrac{|A_1|}{|A|} = \dfrac{\begin{vmatrix}\ 4\ &\ 1\ \\ \ 2\ &\ 4\ \end{vmatrix}}{\begin{vmatrix}\ 1\ &\ 1\ \\ \ 2\ &\ 4\ \end{vmatrix}} = \dfrac{14}{2} = 7 \\ \\ x_2 &= \dfrac{|A_2|}{|A|} = \dfrac{\begin{vmatrix}\ 1\ &\ 4\ \\ \ 2\ &\ 2\ \end{vmatrix}}{\begin{vmatrix}\ 1\ &\ 1\ \\ \ 2\ &\ 4\ \end{vmatrix}} = \dfrac{-6}{2} = -3 \end{align*} $$

The solution is $\mathbf{x} = \begin{bmatrix}\ \ 7\ \\ -3\ \end{bmatrix}$.


Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0.