Math 511: Linear Algebra
5.7 Review for Test 2
5.7.1 Rubrics
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{0.0, 0.0, 1.0} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \def\ihat{\mathbf{\hat{\unicode{x0131}}}} \def\jhat{\mathbf{\hat{\unicode{x0237}}}} \def\khat{\mathrm{\hat{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} $$
All homework, quiz, and exam submissions must be in PDF format or they will receive a grade of zero. If you take pictures with your phone, open a Word document, drop each image onto its own page, and resize the image to fill the page. Word will allow you to export your document as a PDF so that I don't have to do that for you. An Office 365 subscription is purchased for you using your student fees each semester. You can use your WSU credentials to log onto Office 365 or download the Office suite to your device.
Your homework must be complete and easy to read. Submissions that are too faint to read easily or are disorganized will receive a zero score.
- Type or write clearly and make the progression of your work obvious.
- Your job is to convince me that you understand the problem and know the solution.
- Present the exercises in numerical order, in dictionary order within each exercise, and in the order the questions are asked in the text of each exercise. Problems listed out of order will not be graded.
Many problems are short answer or True/False. There are examples of these questions at the end of each chapter.
Every numerical problem requires one to construct a linear system, solve a linear system, or compute a determinant.
To get any credit for solving a linear system one must
- set up the linear system, if the problem requires it
- write out the augmented matrix
- reduce the matrix, using row operations, to either
  - upper triangular form,
  - row echelon form, or
  - reduced row echelon form
- use backward substitution
- display the solution as a column vector
  - angle brackets are used to indicate that the list of numbers is really a column
  - square or round brackets enclosing a vertical (column) list of numbers and/or variables
  - square or round brackets enclosing a horizontal list of numbers with $^T$, the transpose, indicating that the row should be a column vector
$(2,1)$ is a point in a two-dimensional space, not a vector in a vector space.
$x_1=2$, $x_2=1$ is not a vector.
The correct answer is
$$ \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \left(\begin{matrix} 2 \\ 1 \end{matrix}\right) = \begin{bmatrix} 2 & 1 \end{bmatrix}^T = \left(\begin{matrix} 2 & 1 \end{matrix}\right)^T = \langle 2, 1 \rangle $$
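If you want to check a hand computation with software (allowed only on the take-home test), here is a minimal NumPy sketch of the rubric's four steps. The $3\times 3$ system is made up purely for illustration, and the sketch assumes the pivots are nonzero so no row swaps are needed.

```python
import numpy as np

# A made-up 3x3 system, for illustration only:
#   2x1 +  x2 +  x3 =  4
#   4x1 + 3x2 + 3x3 = 10
#   8x1 + 7x2 + 9x3 = 24
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
b = np.array([4., 10., 24.])

# Step 1: write out the augmented matrix [A | b]
aug = np.hstack([A, b.reshape(-1, 1)])

# Step 2: reduce to upper triangular form using row operations
n = len(b)
for j in range(n - 1):
    for i in range(j + 1, n):
        m = aug[i, j] / aug[j, j]        # multiplier for R_i <- R_i - m*R_j
        aug[i, :] -= m * aug[j, :]

# Step 3: backward substitution
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (aug[i, -1] - aug[i, i + 1:n] @ x[i + 1:n]) / aug[i, i]

# Step 4: display the solution as a column vector
print(x.reshape(-1, 1))                  # [[1.] [1.] [1.]]
```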
Many of the linear systems that appear on a test are small: 2-8 equations in 2-8 unknowns. This will not be true after you leave this course. The techniques taught in this class are meant to be applied to large systems of thousands or millions of equations in thousands or millions of unknowns. Such problems cannot be solved by hand in an entire lifetime.
I realize that, because the problems in this course are small, there are other methods of solving them. They are only small to allow one to complete the exercise with a reasonable amount of effort.
You must display mastery of the techniques of this course using the methods taught in this course.
I am not merely looking for a numerically correct answer. I am grading mastery of the techniques taught in this course.
To get any credit for computing a determinant
The determinant of a $2\times 2$ matrix may be computed using the formula $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$.
To compute the determinant of a larger $n\times n$ matrix ($n>2$), one must
- Use the ten properties of determinants taught in the course
- Use the Laplace Expansion when a row or column has only one nonzero entry
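The same properties give a mechanical way to check a determinant computed by hand. Below is a minimal NumPy sketch, offered only for checking your work, not as a substitute for showing it: row replacement leaves the determinant unchanged, each row swap flips its sign, and the determinant of a triangular matrix is the product of its diagonal entries.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via the properties of determinants."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # swap a nonzero pivot into place if needed; each swap flips the sign
        p = j + np.argmax(np.abs(U[j:, j]))
        if U[p, j] == 0.0:
            return 0.0                    # no pivot in this column => det = 0
        if p != j:
            U[[j, p], :] = U[[p, j], :]
            sign = -sign
        # row replacement R_i <- R_i - m*R_j does not change the determinant
        for i in range(j + 1, n):
            U[i, :] -= (U[i, j] / U[j, j]) * U[j, :]
    return sign * np.prod(np.diag(U))     # product of the pivots

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det_by_elimination(A))              # -3.0
print(np.linalg.det(A))                   # agrees, up to rounding
```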
5.7.2 True/False Questions
To receive all three points for a True/False question one must correctly respond whether the statement is

- True, which means always true. If the statement is true, then one must additionally explain or prove why the statement is true.
- False, otherwise. If the statement is false, then one must additionally give a numerical counterexample (no variables) for which the statement is false.
For example,
1. If $S$ is a subspace of a vector space $V$, then $S$ is a vector space.
Answer
TRUE
The definition of a subspace $S$ of a vector space $V$ is a subset of $V$ that satisfies all of the axioms of a vector space. To verify that a subset is a subspace, it suffices to check that $S$ is closed under vector addition and scalar multiplication: any collection of vectors automatically satisfies the other eight axioms of a vector space because its elements are elements of the vector space $V$.
2. $\mathbb{R}^2$ is a subspace of $\mathbb{R}^4$.
Answer
FALSE
$\mathbb{R}^2 = \text{Span}\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$. The elements of $\mathbb{R}^2$ are described using a list of two numbers that represent the scalar multiples of these basis vectors. For example $\mathbf{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$
$\mathbb{R}^4 = \text{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \right\}$. The elements of $\mathbb{R}^4$ are described using a list of four numbers that represent the scalar multiples of these four basis vectors. $\mathbf{x}\notin\mathbb{R}^4$, so $\mathbb{R}^2$ is not even a subset of $\mathbb{R}^4$ and therefore cannot be a subspace of $\mathbb{R}^4$.
5.7.3 What Material is Covered by Test 2
Chapter One
- Any question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question similar to a question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
Chapter Two
- Any question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question similar to a question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
Chapter Three
- Any question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question similar to a question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question from the Cumulative Test at the end of chapter 3 in the textbook.
Chapter Four
- Any question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question similar to a question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
Chapter Five
- Any question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question similar to a question from the WebAssign homework, Projects, Problem Sets, or Review Exercises at the end of the chapter in the textbook
- Any question from the Cumulative Test at the end of chapter 5 in the textbook.
5.7.4 Test Information
There are two tests: a proctored test and a take-home test.
Proctored Test
- You will have 50 minutes to complete the proctored test.
- There are no notes, calculators, electronic devices, note cards, books, or any aids allowed during the proctored test.
Take-home Test
- The take-home test must be turned in on time to receive credit.
- You can use technology to check your work. You must still show all of your work. I am grading the work you perform to compute the answer, not just the final answer. You must show all of the steps you performed using the techniques we learned in these chapters. "I computed it in my software" is a zero.
- There is no time limit other than the due date and time.
- You must do your own work. A 100% on the take-home test and a 20% on the proctored test averages to a 60% which is still not passing.
Well-prepared means:
- You have your own notes
- You have memorized the basic formulas, definitions, and properties
- You practiced the methods of computing
- Gaussian Elimination
- Elementary Row Operations and Elementary Matrices
- The LU decomposition
- The CR decomposition
- The solution to a linear system
- Determinants
- Cramer's Rule
- The inverse of a nonsingular matrix
- The inverse of a $2\times 2$ nonsingular matrix using the adjugate
- You recognize sets of vectors that form
- A Vector Space
- A Subspace of a vector space
- A Spanning Set
- A Linearly Independent or Dependent Set
- A Basis
- An Orthogonal Set
- An Orthonormal Set
- An Orthonormal Basis
- You practiced the methods of computing
- The Dimension of a subspace of a vector space
- The Rank of a matrix
- The Transition Matrix from One Basis to Another Basis
- The Coordinates of a vector with respect to a given basis
- The Wronskian of functions in a vector space
- The Vector Sum of subspaces of a vector space
- The Direct Sum of subspaces of a vector space
- The Dot Product and/or Inner Product of vectors
- The Norm of a vector
- The Projection of one vector onto another vector
- The Distance between two vectors
- The Angle between two vectors
- The Least Squares Solutions of a system of linear equations
- The Four Main Subspaces of a matrix
- The QR factorization of a matrix
- The Projection Matrix from a vector space onto a subspace
- The Orthogonal Complement of a subspace of a vector space
- You recognize and know the definitions of
- An Inner Product
- An Inner Product Space
- A Norm
- The Projection of one vector onto another
- The Inner Product of vectors
- The Cross Product of vectors
- Orthogonal Matrices
- Unit vectors
- Orthogonal Complements
- You can perform the processes of
- Conversion from one basis to another
- Orthogonal Projection of one vector onto another
- Orthogonalization
- Gram-Schmidt Orthogonalization (Orthonormalization)
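For example, here is a minimal NumPy sketch of Gram-Schmidt orthonormalization that you could use to check a hand computation on the take-home test; the input vectors are made up for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Modified Gram-Schmidt: subtract from each vector its projections
    onto the previously accepted orthonormal vectors, then normalize."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w -= (q @ w) * q              # remove the component along q
        norm = np.linalg.norm(w)
        if norm > 1e-12:                  # skip linearly dependent vectors
            basis.append(w / norm)
    return np.array(basis)

Q = gram_schmidt([[1., 1., 0.], [1., 0., 1.]])
print(np.round(Q @ Q.T, 12))              # 2x2 identity: the rows are orthonormal
```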
5.7.5 Comments
We will be practicing the skills used in the first five chapters throughout the rest of the course. I expect your test scores to get much better.
Unlike your previous Mathematics courses, there are a lot of definitions and concepts to memorize. Give yourself some time. Be patient and keep up the good work.
5.7.6 Some True/False Questions
- It is not possible to find a pair of two-dimensional subspaces $S$ and $T$ of $\mathbb{R}^3$ such that $S\cap T=\left\{\mathbf{0}\right\}$.
Check Your Work
TRUE
If there were a pair of two-dimensional subspaces $S$ and $T$ of $\mathbb{R}^3$ such that $S\cap T=\left\{\mathbf{0}\right\}$, then their vector sum would be a direct sum $S\oplus T$ with $\dim(S\oplus T)=\dim(S)+\dim(T)=2+2=4>3=\dim\left(\mathbb{R}^3\right)$. No subspace of $\mathbb{R}^3$ can have dimension greater than $3$, so no such pair exists.
- If $S$ and $T$ are subspaces of a vector space $V$, then $S\cup T$ is a subspace of $V$.
Check Your Work
FALSE
In $\mathbb{R}^2$, the $x$-axis is $S = \text{Span}\left\{\ihat\right\}$ and the $y$-axis is $T = \text{Span}\left\{\jhat\right\}$. However $S\cup T$ contains only the two axes, and $\ihat + \jhat = \langle 1,1\rangle\notin S\cup T$, so $S\cup T$ is not closed under vector addition.
- If $S$ and $T$ are subspaces of a vector space $V$, then $S\cap T$ is a subspace of $V$.
Check Your Work
TRUE
If $S$ and $T$ are subspaces of a vector space $V$, then $S\cap T$ consists of all of the vectors in both subspaces. If $\mathbf{u},\mathbf{v}\in S\cap T$, then $\mathbf{u},\mathbf{v}\in S$, and $\mathbf{u},\mathbf{v}\in T$. If $\alpha,\beta\in\mathbb{R}$, then $\alpha\mathbf{u}+\beta\mathbf{v}\in S$ since $S$ is a subspace of $V$. Also $\alpha\mathbf{u}+\beta\mathbf{v}\in T$ since $T$ is a subspace of $V$. Since $\alpha\mathbf{u}+\beta\mathbf{v}$ is in both subspaces, $\alpha\mathbf{u}+\beta\mathbf{v}\in S\cap T$. We showed that $S\cap T$ is closed under linear combinations so $S\cap T$ is a subspace of $V$.
- If $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_n$ are linearly independent, then they span $\mathbb{R}^n$.
Check Your Work
TRUE
$\mathbb{R}^n$ is an $n$ dimensional vector space. It can have at most $n$ linearly independent vectors, and any set of $n$ linearly independent vectors is a basis for $\mathbb{R}^n$. Since the vectors form a basis, the set of vectors spans $\mathbb{R}^n$.
- If $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_n$ span a vector space $V$, then they are linearly independent.
Check Your Work
FALSE
The set of vectors $\left\{ \ihat, \jhat, \ihat+\jhat \right\}$ spans $\mathbb{R}^2$, however they are not a linearly independent set of vectors because the third is the sum of the first two vectors.
- If $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_k$ are vectors in a vector space $V$ and $\text{Span}\left\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_k\right\} = \text{Span}\left\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_{k-1}\right\}$ then $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_k$ are linearly dependent.
Check Your Work
TRUE
Since $\text{Span}\left\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_k\right\} = \text{Span}\left\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_{k-1}\right\}$, the last vector $\mathbf{x}_k$ must be a linear combination of the first $k-1$ vectors. Hence the set $\left\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_k\right\}$ is linearly dependent.
- If $A$ is an $m\times n$ matrix, then $A$ and $A^T$ have the same rank.
Check Your Work
TRUE
The rank of a matrix is the dimension of the row space $C(A^T)$, and this equals the dimension of the column space $C(A)$: every pivot column yields a pivot row, and every pivot row yields a pivot column.
- If $A$ is an $m\times n$ matrix, then $A$ and $A^T$ have the same nullity.
Check Your Work
FALSE
Let $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then $A^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$. Hence $N(A)$ is trivial because both columns of $A$ are pivot columns, and the nullity of $A$ is zero. However $A^T$ has a free column, so $N(A^T)$ is nontrivial and the nullity of $A^T$ is $1$.
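A quick NumPy check of this counterexample using the Rank-Nullity Theorem, $\text{nullity}(A) = n - \text{rank}(A)$ for an $m\times n$ matrix:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
r = np.linalg.matrix_rank(A)   # rank(A) = rank(A^T) = 2
print(A.shape[1] - r)          # nullity(A)   = 2 - 2 = 0
print(A.shape[0] - r)          # nullity(A^T) = 3 - 2 = 1
```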
- If $A$ is row equivalent to $B$, then $A$ and $B$ have the same row space.
Check Your Work
TRUE
Matrix $A$ and matrix $B$ are row equivalent if and only if one can obtain matrix $B$ by performing a finite number of elementary row operations on matrix $A$. That is
$$ B = E_1E_2E_3\dots E_kA $$
Elementary row operations replace rows of matrix $A$ with linear combinations of the rows of matrix $A$, and each operation is invertible. This does not change the span of the rows of matrix $A$, the row space $C(A^T)$.
- If $A$ is row equivalent to $B$, then $A$ and $B$ have the same column space.
Check Your Work
FALSE
Consider $A = \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}$, so that $C(A) = \text{Span}\left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}$. Now add row one to row two. The result is $B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}\notin C(B)=\text{Span}\left\{ \ihat \right\}$, so $C(A)\neq C(B)$.
- Let $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_k$ be linearly independent vectors in $\mathbb{R}^n$. If $k<n$ and $\mathbf{x}_{k+1}\notin\text{Span}\left\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_k \right\}$, then the set of vectors $\left\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{k+1} \right\}$ is a linearly independent set of vectors.
Check Your Work
TRUE
Suppose that
$$ c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \dots + c_k\mathbf{x}_k + c_{k+1}\mathbf{x}_{k+1} = \mathbf{0}. $$
If $c_{k+1}\neq 0$, then we could solve this equation for $\mathbf{x}_{k+1}$ as a linear combination of $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_k$, which contradicts $\mathbf{x}_{k+1}\notin\text{Span}\left\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_k \right\}$. Hence $c_{k+1}=0$. Since $\mathbf{x}_1$, $\mathbf{x}_2$, $\dots$, $\mathbf{x}_k$ are linearly independent, the remaining coefficients $c_1$, $c_2$, $\dots$, $c_k$ must all be zero as well. The only linear combination equal to $\mathbf{0}$ is the trivial one, so
$$ \left\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{k+1} \right\} $$
is a linearly independent set of vectors.
- If $A$ and $B$ are $n\times n$ matrices with the same rank, then $A^2$ and $B^2$ have the same rank.
Check Your Work
FALSE
Consider $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$. Both $A$ and $B$ have rank 1. However
$$ A^2 = A\text{, and }B^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} $$
Hence $\text{rank}(A^2)=1$ is not equal to $\text{rank}(B^2) = 0$.
- If $A$ and $B$ are $n\times n$ matrices with the same rank, then $AB$ and $BA$ have the same rank.
Check Your Work
FALSE
Consider $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$. Both $A$ and $B$ have rank 1. However
$$ BA = B\text{, and }AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} $$
Hence $\text{rank}(BA)=1$ is not equal to $\text{rank}(AB) = 0$.
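The last two counterexamples use the same pair of matrices, so one short NumPy check covers both:

```python
import numpy as np

A = np.array([[1., 0.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])
rank = np.linalg.matrix_rank
print(rank(A), rank(B))           # 1 1 : A and B have the same rank
print(rank(A @ A), rank(B @ B))   # 1 0 : A^2 and B^2 do not
print(rank(B @ A), rank(A @ B))   # 1 0 : BA and AB do not
```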
5.7.7 More True/False Questions
- If $\mathbf{x}$ and $\mathbf{y}$ are nonzero vectors in $\mathbb{R}^n$, then the vector projection of $\mathbf{x}$ onto $\mathbf{y}$ is equal to the vector projection of $\mathbf{y}$ onto $\mathbf{x}$.
Check Your Work
FALSE
The vector projection of $\mathbf{x}$ onto $\mathbf{y}$ is a scalar multiple of $\mathbf{y}$, and the vector projection of $\mathbf{y}$ onto $\mathbf{x}$ is a scalar multiple of $\mathbf{x}$. Let $\mathbf{x} = \langle 1, 0\rangle$ and $\mathbf{y}=\langle 1,1\rangle$.
$$ \begin{align*} \text{Proj}_{\mathbf{y}}\mathbf{x} &= \frac{\langle 1,0 \rangle\cdot\langle 1,1 \rangle}{\left\|\langle 1,1 \rangle\right\|^2}\langle 1,1 \rangle = \left\langle \frac{1}{2},\,\frac{1}{2} \right\rangle \\ \\ \text{Proj}_{\mathbf{x}}\mathbf{y} &= \frac{\langle 1,0 \rangle\cdot\langle 1,1 \rangle}{\left\|\langle 1,0 \rangle\right\|^2} \langle 1,0 \rangle = \langle 1,0 \rangle = \mathbf{x} \neq \left\langle \frac{1}{2},\,\frac{1}{2} \right\rangle \end{align*} $$
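A small NumPy check of these two projections; the helper name `proj` is made up for this sketch:

```python
import numpy as np

def proj(u, onto):
    """Vector projection of u onto the vector 'onto'."""
    return (u @ onto) / (onto @ onto) * onto

x = np.array([1., 0.])
y = np.array([1., 1.])
print(proj(x, onto=y))   # [0.5 0.5] : a scalar multiple of y
print(proj(y, onto=x))   # [1. 0.]   : a scalar multiple of x
```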
- If $\mathbf{x}$ and $\mathbf{y}$ are unit vectors in $\mathbb{R}^n$ and $\left|\mathbf{x}^T\mathbf{y}\right|=1$, then $\mathbf{x}$ and $\mathbf{y}$ are linearly independent.
Check Your Work
FALSE
If $\mathbf{x}$ and $\mathbf{y}$ are unit vectors in $\mathbb{R}^n$ and $\left|\mathbf{x}^T\mathbf{y}\right|=1$, then $\left\|\mathbf{x}\right\|=\left\|\mathbf{y}\right\|=1$. Hence
$$ 1 = \left|\mathbf{x}^T\mathbf{y}\right| = \frac{\left|\mathbf{x}^T\mathbf{y}\right|}{\left\|\mathbf{x}\right\|\,\left\|\mathbf{y}\right\|} = \frac{\left| \left\langle \mathbf{x},\mathbf{y} \right\rangle \right|}{\left\|\mathbf{x}\right\|\,\left\|\mathbf{y}\right\|} = \left| \cos(\theta) \right| $$
Since the cosine of the angle between vectors $\mathbf{x}$ and $\mathbf{y}$ is $\pm 1$, the angle $\theta$ is $0$ or $\pi$. This tells us that $\mathbf{x}$ and $\mathbf{y}$ are collinear, so they are linearly dependent. For example, let $\mathbf{x}=\langle 1,0 \rangle$ and $\mathbf{y}=\langle -1,0\rangle$. Then they are linearly dependent even though $\left|\mathbf{x}^T\mathbf{y}\right|= \left| \langle 1, 0 \rangle\cdot\langle -1, 0 \rangle \right| = \left|-1+0\right| = 1$.
- If $U$, $V$, and $W$ are subspaces of $\mathbb{R}^3$ and if $U\perp V$ and $V\perp W$, then $U\perp W$.
Check Your Work
FALSE
Consider $U = \text{Span}\left\{\ihat,\jhat\right\}$, $V = \text{Span}\left\{\khat\right\}$, and $W = \text{Span}\left\{\ihat\right\}$. Then $U\perp V$ and $V\perp W$, however $W$ is a subspace of $U$, so $W$ cannot be orthogonal to $U$.
- It is possible to find a nonzero vector $\mathbf{y}$ in the column space of $A^T$ such that $A\mathbf{y}=\mathbf{0}$.
Check Your Work
FALSE
Let $\mathbf{y}\in C(A^T)$ with $A\mathbf{y}=\mathbf{0}$. Then $\mathbf{y}\in N(A)$. Since the row space $C(A^T)$ and the null space $N(A)$ are orthogonal complements, the only vector in both is the zero vector, so $\mathbf{y}=\mathbf{0}$. For example, consider $A=I_2$. Then $C(A^T)=\mathbb{R}^2$ and $N(A)$ is trivial; there is no nonzero vector $\mathbf{y}\in\mathbb{R}^2$ such that $A\mathbf{y}=\mathbf{0}$.
- If $A$ is an $m\times n$ matrix, then $AA^T$ and $A^TA$ have the same rank.
Check Your Work
TRUE
We proved in class that $N(A)=N(A^TA)$. We can switch the positions of $A$ and $A^T$ and get $N(A^T) = N\left((A^T)^TA^T\right) = N(AA^T)$. Now the dimension of the row space and the dimension of the column space of a matrix are always equal. That value is the rank. So
$$ \begin{align*} \text{rank}(A^TA) &= n - \text{nullity}(A^TA) = n - \text{nullity}(A) \\ &= \text{rank}(A) = m - \text{nullity}(A^T) = m - \text{nullity}(AA^T) \\ &= \text{rank}(AA^T) \end{align*} $$
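A quick NumPy sanity check; the matrix below is random (the seed and sizes are arbitrary), and by the argument above all three ranks agree for every choice of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 3)).astype(float)
rank = np.linalg.matrix_rank
print(rank(A), rank(A.T @ A), rank(A @ A.T))   # all three values agree
```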
- If an $m\times n$ matrix $A$ has linearly dependent columns and $\mathbf{b}$ is a vector in $\mathbb{R}^m$, then $\mathbf{b}$ does not have a unique projection onto the column space of $A$.
Check Your Work
FALSE
We proved in class that $N(A)=N(A^TA)$. If matrix $A$ has linearly dependent columns, then $N(A^TA) = N(A)$ is nontrivial, and the square matrix $A^TA$ is singular. So the normal equations $A^TA\mathbf{\hat{x}}=A^T\mathbf{b}$ have infinitely many solutions. However, the projection $A\mathbf{\hat{x}}$ of $\mathbf{b}$ onto $C(A)$ is still unique: any two solutions differ by a vector in $N(A)$, which $A$ sends to $\mathbf{0}$.
Consider $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Now $\mathbf{b}\notin C(A)$. We have $A^TA = A^2 = A$ and $A^T\mathbf{b} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. The vector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is the unique projection of $\mathbf{b}$ onto $C(A)$, even though the normal equations $A^TA\mathbf{\hat{x}} = A^T\mathbf{b}$ have infinitely many solutions $\mathbf{\hat{x}} = \begin{bmatrix} 1 \\ t \end{bmatrix}$.
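A short NumPy sketch of this example. `np.linalg.lstsq` returns one least squares solution even though $A^TA$ is singular, and the projection $A\mathbf{\hat{x}}$ is the unique one computed above:

```python
import numpy as np

A = np.array([[1., 0.], [0., 0.]])
b = np.array([1., 1.])

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # one of infinitely many solutions
print(A @ x_hat)   # [1. 0.] : the projection of b onto C(A) is unique
```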
- If $A$ is an $m\times n$ matrix such that $C(A^T)=\mathbb{R}^n$, then the system $A\mathbf{x}=\mathbf{b}$ will have a unique least squares solution.
Check Your Work
TRUE
We proved in class that $N(A)=N(A^TA)$. If $C(A^T)=\mathbb{R}^n$, then $\text{rank}(A)=n$, so $\text{nullity}(A)=n-n=0$ and $N(A)$ is trivial. Therefore $N(A^TA)$ is also trivial. Hence $A^TA$ is a nonsingular matrix, and the system $A^TA\mathbf{\hat{x}} = A^T\mathbf{b}$ will have a unique solution.
- If $Q$ is an orthogonal matrix, then $Q^T$ is also an orthogonal matrix.
Check Your Work
FALSE
Let $Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then $Q^TQ = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} = I_2$, and $Q$ is an orthogonal matrix. However $Q^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$ has a free column, so its three columns are linearly dependent and cannot form an orthonormal set; therefore $Q^T$ is not an orthogonal matrix.
- If $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ is an orthonormal set of vectors in $\mathbb{R}^n$ and
$$ U = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 & \dots & \mathbf{u}_k \end{bmatrix} $$
then $U^TU = I_k$, the $k\times k$ identity matrix.
Check Your Work
TRUE
If $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ is an orthonormal set of vectors in $\mathbb{R}^n$, then for $1\le i,j\le k$ we have $\langle \mathbf{u}_i,\mathbf{u}_j \rangle = \delta_{ij}$. The $(i,j)$ entry of $U^TU$ is $\mathbf{u}_i^T\mathbf{u}_j$, thus
$$ U^TU = \begin{bmatrix} \mathbf{u}_i^T\mathbf{u}_j \end{bmatrix} = \begin{bmatrix} \langle \mathbf{u}_i,\mathbf{u}_j \rangle \end{bmatrix} = \begin{bmatrix} \delta_{ij} \end{bmatrix} = I_k $$
- If $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ is an orthonormal set of vectors in $\mathbb{R}^n$ and
$$ U = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 & \dots & \mathbf{u}_k \end{bmatrix} $$
then $UU^T = I_n$, the $n\times n$ identity matrix.
Check Your Work
FALSE
Let $\left\{\mathbf{u}_1,\mathbf{u}_2 \right\} = \left\{ \ihat, \jhat \right\}$ in $\mathbb{R}^3$. Then $U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$ and
$$ UU^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \neq I_3 $$
Instead, $UU^T$ is the projection matrix of $\mathbb{R}^3$ onto $\text{Span}\left\{\mathbf{u}_1,\mathbf{u}_2 \right\}$.
If you have an orthonormal basis $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ for $C(A)$, the column space of an $m\times n$ matrix $A$, then the matrix $P = UU^T$ is the projection matrix in the codomain $\mathbb{R}^m$ onto $C(A)$. Recall that when the columns of $A$ are linearly independent we also have $P = A(A^TA)^{-1}A^T$.
It will also be true that if you have an orthonormal basis $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ for a subspace $S$ of $\mathbb{R}^m$, then the matrix $P = UU^T$ is the projection matrix from $\mathbb{R}^m$ onto $S$.
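A closing NumPy check that ties the last two questions together, using the same $U$ as in the counterexample above:

```python
import numpy as np

U = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])     # orthonormal columns in R^3
print(U.T @ U)               # I_2: U^T U is always the k x k identity
P = U @ U.T
print(P)                     # not I_3: the projection matrix onto Span{u1, u2}
print(P @ P - P)             # zero matrix: projection matrices satisfy P^2 = P
```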
Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0