Math 511: Linear Algebra
6.2 Properties of a Linear Transformation
6.2.1 Image and Kernel¶
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{0.0, 0.0, 1.0} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \def\ihat{\mathbf{\hat{\unicode{x0131}}}} \def\jhat{\mathbf{\hat{\unicode{x0237}}}} \def\khat{\mathrm{\hat{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} $$
For a linear transformation $L: V\rightarrow W$, there are a few specific effects of $L$ that we want to identify. Just as a matrix has the four fundamental subspaces of chapter 3, there are important subspaces associated with a linear transformation.
In Grant Sanderson's videos we saw that any linear transformation $L:\mathbb{R}^n\rightarrow\mathbb{R}^m$ from finite $n$-dimensional vector space $V$ to finite $m$-dimensional vector space $W$ can be represented by an $m\times n$ matrix. We call the $m\times n$ matrix ${\color{shockingpink} \large{\text{a}}}$ matrix representation of operator $L$. We usually denote such a matrix $A_L$. This allows one to write the definition of $L$ as
$$ L(\mathbf{x}) = A_L\mathbf{x} $$
A linear operator whose domain or codomain is an abstract vector space or an infinite-dimensional vector space cannot be represented by a matrix. This means we will need separate vocabulary for linear operators and matrices. However, when a linear transformation can be represented by a matrix, there is an obvious connection between the subspaces defined for each.
Definition of Kernel¶
The kernel of a linear transformation $L: V\rightarrow W$ is the set of all $\mathbf{v}\in V$ such that $L(\mathbf{v}) = \mathbf{0}_W$
$$ \ker (L) = \left\{\mathbf{v}\in V : L(\mathbf{v}) = \mathbf{0}_W\right\} $$
The kernel of a linear transformation that can be represented by a matrix is the null space of its matrix representation. For a matrix $A_L\in\mathbb{R}^{m\times n}$, $N(A_L) = \ker(L)$. The idea of the kernel is more general, as it applies to any linear transformation, not merely those represented by matrices.
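To make the connection concrete, the following sketch (assuming the SymPy library is available) computes a kernel through a matrix representation; the matrix $A_L$ below is a hypothetical example chosen for illustration.

```python
import sympy as sp

# A hypothetical matrix representation A_L of some L: R^3 -> R^2
A_L = sp.Matrix([[1, 2, 3],
                 [2, 4, 6]])

# N(A_L) = ker(L): nullspace() returns a basis for the null space
for v in A_L.nullspace():
    print(v.T)   # each basis vector v satisfies A_L * v = 0
```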
In addition, we want to study the image of a linear transformation on a subspace $S$ of $V$.
Definition of Image¶
The image of a subset $S$ of $V$ under the linear transformation $L: V\rightarrow W$ is the set of all $\mathbf{w}\in W$ such that $\mathbf{w} = L(\mathbf{v})$ for some $\mathbf{v}\in S$.
$$ L(S) = \left\{\mathbf{w}\in W : \mathbf{w} = L(\mathbf{v})\text{ for some } \mathbf{v}\in S\right\} $$
The image of a linear transformation that can be represented by a matrix is the column space of its matrix representation. For a matrix $A_L\in\mathbb{R}^{m\times n}$, $C(A_L) = L(V)$, the image of the entire domain.
Notice that the definition of image does not assume that one wants the images of the entire domain. If $S\subset V$ is any subset of the domain, then $L(S)$ is only the set of images of the vectors in $S$, not necessarily of the entire domain $V$. The column space is the image of the entire domain.
The image of a subspace is important because it tells us where a particular collection of vectors get mapped to by the linear transformation. For instance, suppose $L: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ and we want to know where the subspace represented by all vectors along the $x_1$-axis
$$ S = \left\{\mathbf{x}\in\mathbb{R}^2 : \mathbf{x} = x_1\mathbf{e}_1\right\} $$
is mapped to by $L$. $L(S)$ tells us where these vectors end up after the linear transformation is applied.
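As a concrete sketch (assuming NumPy, and choosing a hypothetical rotation for $L$), we can compute the image of this subspace from the image of a single basis vector:

```python
import numpy as np

# A hypothetical L: R^2 -> R^2, rotation of the plane by 90 degrees
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# S = Span{e1} is the x1-axis.  By linearity, L(S) = Span{L(e1)},
# so the image of e1 alone describes the image of the whole subspace.
e1 = np.array([1.0, 0.0])
print(A @ e1)   # [0. 1.] = e2, so L(S) is the x2-axis
```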
Theorem 6.2.1¶
If $L: V\rightarrow W$ is a linear transformation and $S$ is a subspace of $V$, then
(i) $\ker(L)$ is a subspace of $V$
(ii) $L(S)$ is a subspace of $W$
Proof:¶
(i) To prove that $\ker(L)$ is a subspace, we must show that it is nonempty and closed under scalar multiplication and vector addition. $\ker(L)$ is nonempty since $\mathbf{0}_V\in\ker(L)$. For the closure properties, let $\mathbf{v}_1,\mathbf{v}_2\in\ker(L)$ and let $\alpha$ be a scalar. Then
$$ L(\alpha\mathbf{v}_1) = \alpha L(\mathbf{v}_1) = \alpha \mathbf{0}_W = \mathbf{0}_W $$
so $\alpha\mathbf{v}_1\in\ker(L)$, and
$$ L(\mathbf{v}_1 + \mathbf{v}_2) = L(\mathbf{v}_1) + L(\mathbf{v}_2) = \mathbf{0}_W + \mathbf{0}_W = \mathbf{0}_W $$
so $\mathbf{v}_1 + \mathbf{v}_2\in\ker(L)$. Hence $\ker(L)$ is a subspace of $V$.
(ii) $L(S)$ is nonempty since the subspace $S$ must contain $\mathbf{0}_V$, and so $L(\mathbf{0}_V) = \mathbf{0}_W \in L(S)$. For the closure properties, let $\alpha$ be a scalar and let $\mathbf{w}_1,\mathbf{w}_2\in L(S)$, so that there are $\mathbf{v}_1,\mathbf{v}_2\in S$ with $L(\mathbf{v}_1) = \mathbf{w}_1$ and $L(\mathbf{v}_2) = \mathbf{w}_2$. Then $\alpha\mathbf{v}_1\in S$ implies
$$ L(\alpha\mathbf{v}_1) = \alpha L(\mathbf{v}_1) = \alpha\mathbf{w}_1 \in L(S) $$
and $\mathbf{v}_1 + \mathbf{v}_2\in S$ gives
$$ L(\mathbf{v}_1 + \mathbf{v}_2) = L(\mathbf{v}_1) + L(\mathbf{v}_2) = \mathbf{w}_1 + \mathbf{w}_2 \in L(S) $$
So $L(S)$ is a subspace of $W$.$\tombstone$
Note:¶
The image of all of $V$ under $L$, $L(V)$, is sometimes called the range of $L$. In these notes we avoid this terminology to prevent confusion with the usage of the term range that students may have encountered prior to this course. For $L: V\rightarrow W$, we stick to the terms domain for $V$, codomain for $W$, and image of $V$ for $L(V)$.
6.2.2 Examples for Image and Kernel¶
Consider the following exercises involving linear transformations, and determine the image and kernel of $L: V \rightarrow W$.
Exercise 1¶
Let $L$ be the linear operator on $\mathbb{R}^3$ defined by
$$ L(\mathbf{x}) = \begin{bmatrix} 0 \\ x_2 \\ 0 \end{bmatrix} $$
Find $\ker(L)$ and $L\left(\mathbb{R}^3\right)$.
Follow Along
To find the kernel one must solve the equation
$$ L(\mathbf{x}) = \mathbf{0}\qquad\qquad\text{Look familiar?} $$
This gives us
$$ \begin{align*} L(\mathbf{x}) &= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \\ \\ L\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) &= \begin{bmatrix} 0 \\ x_2 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \\ \\ x_2 &= 0 \end{align*} $$
However the coordinates $x_1$ and $x_3$ can be anything. Thus our kernel has the form
$$ \begin{align*} \ker(L) &= \left\{\begin{bmatrix} x_1 \\ 0 \\ x_3 \end{bmatrix}\,:\,x_1,x_3\in\mathbb{R}\,\right\} \\ \\ &= \left\{\begin{bmatrix} x_1 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ x_3 \end{bmatrix}\,:\,x_1,x_3\in\mathbb{R}\,\right\} \\ \\ &= \left\{\,x_1\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\,:\,x_1,x_3\in\mathbb{R}\,\right\} \\ \\ &= \text{Span}\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\} \\ \\ &= \text{Span}\left\{\mathbf{e}_1,\ \mathbf{e}_3\right\} = \text{Span}\left\{\ihat, \khat\right\} \end{align*} $$
We will see in the next section how to create a matrix representation of a linear transformation between finite dimensional vector spaces. When one has such a matrix representation, one can use the methods of chapter 3 and chapter 1 to determine the null space of the matrix to obtain the kernel of the linear transformation.
To obtain the image of the linear transformation $L\left(\mathbb{R}^3\right)$ one must consider the images of all vectors in the domain. Notice that we can ask for
(i) the image of a set $S$, denoted $L(S)$.
(ii) the image of the linear transformation, denoted $L\left(\mathbb{R}^3\right)$
$$ L(\mathbf{x}) = L\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 \\ x_2 \\ 0 \end{bmatrix} $$
This linear operator squishes all of 3-dimensional space onto a line, the $x_2$-axis.
$$ L\left(\mathbb{R}^3\right) = \text{Span}\left\{\mathbf{e}_2\right\} = \text{Span}\left\{\jhat\right\} $$
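We can verify both answers with a short computation (a sketch assuming SymPy; the matrix representation of $L$ below anticipates the construction of the next section):

```python
import sympy as sp

# Matrix representation of L(x) = (0, x2, 0) in the standard basis
A_L = sp.Matrix([[0, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])

print(A_L.nullspace())    # basis {e1, e3}: the kernel found above
print(A_L.columnspace())  # basis {e2}: all of R^3 is squished onto the x2-axis
```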
Exercise 2¶
Let $L$ be the linear operator on $\mathbb{R}^{2\times 2}$ defined by
$$ L(A) = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix} $$
Find $\ker(L)$ and $L(\mathbb{R}^{2\times 2})$.
Follow Along
$$ \begin{align*} L(A) &= L\left(\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\right) = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \\ \\ a_{11} &= a_{22} = 0 \end{align*} $$
A vector $A\in\ker(L)$ if and only if $a_{11} = a_{22} = 0$. However $a_{12}$ and $a_{21}$ can be anything, so
$$ \begin{align*} \ker(L) &= \left\{\begin{bmatrix} 0 & a_{12} \\ a_{21} & 0 \end{bmatrix}\,:\,a_{12},a_{21}\in\mathbb{R}\,\right\} \\ \\ &= \left\{\begin{bmatrix} 0 & a_{12} \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ a_{21} & 0 \end{bmatrix}\,:\,a_{12},a_{21}\in\mathbb{R}\,\right\} \\ \\ &= \left\{\,a_{12}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + a_{21}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\,:\,a_{12},a_{21}\in\mathbb{R}\,\right\} \\ \\ &= \text{Span}\left\{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\ \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\right\} \end{align*} $$
A vector $B\in L(\mathbb{R}^{2\times 2})$ if and only if $B = L(A)$ for some $A\in\mathbb{R}^{2\times 2}$. Hence
$$ \begin{align*} L\left(\mathbb{R}^{2\times 2}\right) &= \left\{\begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix}\,:\,a_{11},a_{22}\in\mathbb{R}\,\right\} \\ \\ &= \left\{\begin{bmatrix} a_{11} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & a_{22} \end{bmatrix}\,:\,a_{11},a_{22}\in\mathbb{R}\,\right\} \\ \\ &= \left\{a_{11}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + a_{22}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\,:\,a_{11},a_{22}\in\mathbb{R}\,\right\} \\ \\ &= \text{Span}\left\{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},\ \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\right\} \end{align*} $$
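To check this with software we can identify $\mathbb{R}^{2\times 2}$ with $\mathbb{R}^4$ by stacking entries, an identification chosen here for illustration. A sketch assuming SymPy:

```python
import sympy as sp

# Stack the entries of A as (a11, a12, a21, a22) in R^4; in these
# coordinates L becomes the diagonal matrix below
A_L = sp.Matrix([[1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1]])

print(A_L.nullspace())   # coordinates of the two off-diagonal basis matrices
print(A_L.rank())        # 2, and rank + nullity = 2 + 2 = 4 = dim(R^{2x2})
```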
Exercise 3¶
Let $\mathcal{I}: P_3 \rightarrow P_4$ be the antiderivative with the constant of integration set to zero. It is defined by
$$ \mathcal{I}\left(ax^2 + bx + c\right) = \frac{a}{3}x^3 + \frac{b}{2}x^2 + cx $$
Find $\ker\left(\mathcal{I}\right)$ and describe what the functions in $\mathcal{I}(P_3)$ look like.
Follow Along
To find $\ker\left(\mathcal{I}\right)$ we set
$$ \begin{align*} \mathcal{I}\left(ax^2 + bx + c\right) &= \dfrac{a}{3}x^3 + \dfrac{b}{2}x^2 + cx + 0 = 0x^3 + 0x^2 + 0x + 0 \\ \\ a &= 0 \\ b &= 0 \\ c &= 0 \end{align*} $$
A function $p\in \ker(\mathcal{I})$ if and only if $p = 0$; that is, $p$ is the zero function, and $\ker(\mathcal{I}) = \left\{ 0 \right\}$ is trivial.
To obtain the image of the linear operator $\mathcal{I}$, we take a different approach. We know that if $\mathcal{I}(p) = q\in P_4$, then the constant term of $q$ is zero. So consider any polynomial of the form
$$ q = ax^3 + bx^2 + cx + 0 $$
The fundamental theorem of calculus tells us that
$$ \dfrac{dq}{dx} = 3ax^2 + 2bx + c $$
is the function in $P_3$ whose antiderivative, with constant of integration zero, is the polynomial $q$:
$$ \mathcal{I}\left(3ax^2 + 2bx + c\right) = ax^3 + bx^2 + cx $$
The polynomials in $\mathcal{I}\left(P_3\right)$ are precisely the polynomials in $P_4$ with constant term zero.
The functions in $\mathcal{I}(P_3)$ are polynomials of degree less than 4 that pass through the origin.
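The same computation can be carried out symbolically; here is a sketch assuming SymPy:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# The antiderivative operator I with constant of integration zero
p = a*x**2 + b*x + c
q = sp.integrate(p, x)
print(q)    # a*x**3/3 + b*x**2/2 + c*x

# q is the zero polynomial only when every coefficient vanishes,
# which forces a = b = c = 0, so ker(I) = {0}
print(sp.solve(sp.Poly(q, x).coeffs(), [a, b, c]))   # {a: 0, b: 0, c: 0}
```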
6.2.3 Rank and Nullity of a Linear Transformation¶
Definition¶
The dimension of the kernel of a linear transformation is called the nullity of the linear transformation.
Definition¶
The dimension of the image of a linear transformation is called the rank of the linear transformation.
Lemma 6.2.2¶
If $T:V\rightarrow W$ is a linear transformation with finite dimensional domain $V$, then the rank of $T$ is less than or equal to the dimension of the domain.
Proof:¶
Suppose that $T:V\rightarrow W$ is a linear transformation from vector space $V$ into vector space $W$, and the domain is finite dimensional with dim$(V) = n$. Since dim$(V) = n < \infty$, let $\left\{ \mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n\right\}$ be a basis for $V$. The image of $V$ is a subspace of $W$ and
$$ \begin{align*} \text{Im}(T) &= \left\{ T(\mathbf{v})\in W\,:\,\mathbf{v}\in V\right\} \\ &= \left\{ T\left(\mathbf{v}\right)\in W\,:\, \mathbf{v} = \displaystyle\sum_{k=1}^n v_k\mathbf{u}_k\,\text{ with }v_1,v_2,\dots,v_n\in\mathbb{R}\,\right\} \\ &= \left\{ \displaystyle\sum_{k=1}^n v_kT(\mathbf{u}_k)\,:\,v_1, v_2, \dots, v_n\in\mathbb{R}\,\right\} \end{align*} $$
Thus $\text{Im}(T) = \text{Span}\left\{ T(\mathbf{u}_1), T(\mathbf{u}_2), \dots, T(\mathbf{u}_n) \right\}$. A spanning set of $n$ vectors contains at most $n$ linearly independent vectors, so
$$ 0 \le \text{rank}(T) = \text{dim}\left(\text{Im}(T)\right)\le n $$ $\tombstone$
Theorem 6.2.3¶
The Rank-Nullity Theorem
If $T:V\rightarrow W$ is a linear transformation from an $n$-dimensional vector space $V$ into a vector space $W$, then the sum of the rank and nullity is $n$. That is
$$ \text{rank}(T) + \text{nullity}(T) = n $$
Proof:¶
Let $T:V\rightarrow W$ be a linear transformation from an $n$-dimensional vector space $V$ into vector space $W$. We conclude from lemma 6.2.2 that the image of $T$ is a finite dimensional subspace of $W$ with dimension less than or equal to $n$. Moreover the kernel of $T$ is a subspace of $V$, so it is finite dimensional with dimension less than or equal to $n$. Notice that the image of the kernel of $T$ is the trivial subspace $\left\{\mathbf{0}\right\}$.
$$ \text{Im}\left(\text{ker}(T)\right) = \left\{ T(\mathbf{v})\,:\,\mathbf{v}\in\text{ker}(T) \,\right\} = \left\{\mathbf{0}\right\} $$
Since ker$(T)$ is a subspace of $V$, let $k = $ nullity$(T) = $ dim(ker$(T)$) and choose a basis $\left\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k\right\}$ for ker$(T)$. Extend this set to a basis $\left\{\mathbf{u}_1,\dots,\mathbf{u}_k,\mathbf{v}_1,\dots,\mathbf{v}_r\right\}$ for $V$, where $r = n - k = n - $ nullity$(T)$. The vectors $\left\{ T(\mathbf{v}_1), T(\mathbf{v}_2), \dots, T(\mathbf{v}_r) \right\}$ are linearly independent, for if
$$ c_1T(\mathbf{v}_1) + c_2T(\mathbf{v}_2) + \dots + c_rT(\mathbf{v}_r) = \mathbf{0}, $$
then
$$ T\left( c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_r\mathbf{v}_r \right) = \mathbf{0} $$
This implies that
$$ c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_r\mathbf{v}_r \in\text{ker}(T) $$
so this vector is a linear combination of the basis vectors $\mathbf{u}_1,\dots,\mathbf{u}_k$ of ker$(T)$. Since the full set $\left\{\mathbf{u}_1,\dots,\mathbf{u}_k,\mathbf{v}_1,\dots,\mathbf{v}_r\right\}$ is linearly independent, this forces $c_1 = c_2 = \dots = c_r = 0$. Hence $\left\{ T(\mathbf{v}_1),\dots,T(\mathbf{v}_r)\right\}$ is a linearly independent subset of Im$(T)$ and
$$ r \le\text{rank}(T) $$
Moreover, every $\mathbf{w}\in\text{Im}(T)$ has the form $\mathbf{w} = T(\mathbf{v})$ for some $\mathbf{v} = a_1\mathbf{u}_1 + \dots + a_k\mathbf{u}_k + c_1\mathbf{v}_1 + \dots + c_r\mathbf{v}_r$, and since each $T(\mathbf{u}_i) = \mathbf{0}$,
$$ \mathbf{w} = c_1T(\mathbf{v}_1) + c_2T(\mathbf{v}_2) + \dots + c_rT(\mathbf{v}_r) $$
Thus $\left\{ T(\mathbf{v}_1),\dots,T(\mathbf{v}_r)\right\}$ also spans Im$(T)$, so rank$(T)\le r$.
Thus $r = $rank$(T)$, and rank$(T) + $nullity$(T) = n$. $\tombstone$
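A quick numerical illustration of the theorem (a sketch assuming SymPy, with a hypothetical matrix standing in for $T$):

```python
import sympy as sp

# A hypothetical T: R^4 -> R^3 given by a matrix with 4 columns
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity)   # 2 2 4 = dim of the domain R^4
```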
6.2.4 One-To-One Transformations¶
Definition¶
If $T:V\rightarrow W$ is a transformation from $V$ into $W$, the preimage of a vector or of a set of vectors $S$ in the codomain $W$ is the set of vectors in the domain that $T$ maps into $S$.
$$ T^{-1}(S) := \left\{ \mathbf{v}\in V\,:\, T(\mathbf{v})\in S\right\} $$
In more formal mathematics, the pre-image of a single element of the codomain is the set of elements of the domain whose image is that element. Clearly if $\mathbf{w}$ is a vector in codomain $W$ that is not in Im$(T)$, then $T^{-1}(\mathbf{w}) = \emptyset$.
So the pre-image of an element of the codomain, or the pre-image of a subset of the codomain is always a well-defined subset of the domain.
Definition¶
If $T:V\rightarrow W$ is a transformation from $V$ into $W$ and the pre-image of every element of $T(V)$ consists of exactly one element of the domain, then we say that the transformation is one-to-one.
If $T:V\rightarrow W$ is a one-to-one transformation, then we can define a new transformation $T^{-1}:\text{Im}(T)\rightarrow V$ so that for every $\mathbf{w}\in\text{Im}(T)$,
$$ T^{-1}(\mathbf{w}) := \mathbf{v} $$
where $T(\mathbf{v}) = \mathbf{w}$.
Theorem 6.2.4¶
Let $T:V\rightarrow W$ be a linear transformation. Then $T$ is one-to-one if and only if ker$(T) = \left\{\mathbf{0}\right\}$.
Proof:¶
Suppose that $T:V\rightarrow W$ is a linear transformation.
If $T$ is one-to-one, then $T^{-1}\left(\left\{\mathbf{0}\right\}\right)$ has only one element in it and $T\left(\mathbf{0}\right)=\mathbf{0}$, so ker$(T) = T^{-1}\left(\left\{\mathbf{0}\right\}\right) = \left\{\mathbf{0}\right\}$.
Suppose ker$(T) = \left\{\mathbf{0}\right\}$, and let $\mathbf{u}$ and $\mathbf{v}$ be vectors in $V$ so that $T(\mathbf{u}) = T(\mathbf{v})$. Then $\mathbf{0} = T(\mathbf{u}) - T(\mathbf{v}) = T(\mathbf{u} - \mathbf{v})$. Thus $\mathbf{u}-\mathbf{v}\in\text{ker}(T)$. This implies that $\mathbf{u}-\mathbf{v}=\mathbf{0}$, or $\mathbf{u}=\mathbf{v}$. This establishes that the pre-image of $T(\mathbf{u})$ has only one element in it for any element of $T(V)$. Therefore $T$ is one-to-one. $\tombstone$
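Theorem 6.2.4 gives a practical test for injectivity: compute the kernel. A sketch assuming SymPy, with a hypothetical matrix representation:

```python
import sympy as sp

# A hypothetical T: R^2 -> R^3; T is one-to-one exactly when the
# kernel of its matrix representation is trivial
A = sp.Matrix([[1, 1],
               [0, 1],
               [2, 0]])

print(A.nullspace())   # [] : ker(T) = {0}, so T is one-to-one
```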
6.2.5 Onto Transformations¶
Definition¶
If $T:V\rightarrow W$ is a transformation from $V$ into $W$ and $T(V)=W$, then we call the transformation $T$ onto, and write that $T:V\rightarrow W$ is a transformation from $V$ onto $W$.
Lemma 6.2.5¶
Let $T:V\rightarrow W$ be a linear transformation where vector space $W$ is a finite dimensional vector space. Then $T$ is onto if and only if rank$(T) = $dim$(W)$.
Proof:¶
This is evident from the definition of onto. $T$ is onto if and only if $T(V) = W$. Since $T(V)$ is a subspace of the finite dimensional vector space $W$, we have $T(V) = W$ if and only if rank$(T) = $dim$(T(V)) = $dim$(W)$. $\tombstone$
Theorem 6.2.6¶
Suppose that $T:V\rightarrow W$ is a linear transformation from $V$ into $W$, and vector spaces $V$ and $W$ are both finite dimensional vector spaces with dimension $n$. Then $T$ is one-to-one if and only if it is onto.
Proof:¶
Suppose that $T:V\rightarrow W$ is a linear transformation from $V$ into $W$, and vector spaces $V$ and $W$ are both finite dimensional vector spaces with dimension $n$.
If $T$ is one-to-one, then by theorem 6.2.4, ker$(T) = \left\{\mathbf{0}\right\}$. Thus nullity$(T) = $dim$($ker$(T)) = 0$. Using theorem 6.2.3,
$$ \text{rank}(T) = n - \text{nullity}(T) = n - 0 = n = \text{dim}(W) $$
Hence the image of $T$ is a subspace of vector space $W$ with the same dimension as $W$. Thus $T(V) = W$ and $T$ is onto.
If $T$ is onto, then $W = $ Im$(T)$. Using theorem 6.2.3, $n = $rank$(T) + $nullity$(T) = n +$ nullity$(T)$. Hence nullity$(T) = 0$. We can conclude from theorem 6.2.4 that $T$ is one-to-one. $\tombstone$
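For a hypothetical square matrix representation, both conditions of theorem 6.2.6 can be checked at once (a sketch assuming SymPy):

```python
import sympy as sp

# A hypothetical T: R^3 -> R^3 between spaces of equal dimension
A = sp.Matrix([[2, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])

print(A.nullspace() == [])   # True: trivial kernel, so T is one-to-one
print(A.rank() == 3)         # True: rank(T) = dim(W), so T is also onto
```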
6.2.6 Isomorphic Vector Spaces¶
Definition¶
A linear transformation $T:V\rightarrow W$ that is one-to-one and onto is called an isomorphism.
Moreover, if $V$ and $W$ are vector spaces and there exists an isomorphism $T:V\rightarrow W$, then we call these vector spaces isomorphic to each other.
Theorem 6.2.7¶
Two finite dimensional vector spaces $V$ and $W$ are isomorphic if and only if they have the same dimension.
Proof:¶
If finite dimensional vector spaces $V$ and $W$ are isomorphic, then there is an isomorphism $T:V\rightarrow W$. From our previous theorems and lemmas, nullity$(T)=0$ and $W = T(V)$. Using the same argument from Theorem 6.2.3 we have that since $V$ is a finite dimensional vector space,
$$ \text{dim}(V) = \text{dim}(\text{Im}(T)) + \text{dim}(\text{ker}(T)) = \text{dim}(\text{Im}(T)) = \text{dim}(W) $$
If vector spaces $V$ and $W$ are finite dimensional vector spaces with dimension $n$, then let $\left\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\right\}$ be a basis for $V$, and let $\left\{\mathbf{w}_1, \mathbf{w}_2, \dots, \mathbf{w}_n\right\}$ be a basis for $W$. Define $T:V\rightarrow W$ for every vector $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_n\mathbf{v}_n\in V$ by
$$ T( \mathbf{v} ) = c_1\mathbf{w}_1 + c_2\mathbf{w}_2 + \dots + c_n\mathbf{w}_n $$
Since each vector $\mathbf{v}_k = 0\mathbf{v}_1 + \dots + \mathbf{v}_k + \dots + 0\mathbf{v}_n$, we have that $T(\mathbf{v}_k) = \mathbf{w}_k$. Hence $T$ is well-defined for each basis vector in $V$ and clearly a linear transformation. It is onto because Im$(T) = $ Span$\left\{ T(\mathbf{v}_k)\,:\,1\le k\le n\right\} = $ Span$\left\{\mathbf{w}_k\,:\,1\le k\le n\right\} = W$. Since it is onto, theorem 6.2.6 implies it is one-to-one. Thus $T$ is an isomorphism from $V$ onto $W$, and the vector spaces are isomorphic.$\tombstone$
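The isomorphism constructed in this proof is the familiar coordinate map. As an illustration (a sketch assuming SymPy; the helper `T` below is hypothetical), here it is for $P_4$ and $\mathbb{R}^4$ with the basis $\left\{x^3, x^2, x, 1\right\}$:

```python
import sympy as sp

x = sp.symbols('x')

def T(p):
    """Map p in P_4 (degree < 4) to its coordinate vector in R^4."""
    coeffs = sp.Poly(p, x).all_coeffs()        # highest degree first
    return sp.Matrix([0]*(4 - len(coeffs)) + coeffs)

p = 2*x**3 - x + 5
print(T(p).T)   # [[2, 0, -1, 5]]: coordinates relative to {x^3, x^2, x, 1}
```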
Example 1¶
The following vector spaces are isomorphic:
- $\mathbb{R}^4 = $ 4-space, the space of vectors in 4-dimensional Euclidean space
- $\mathbb{R}^{4\times 1} = $ the vector space of $4\times 1$ matrices, or column vectors of numbers on $\mathbb{R}$
- $M_{4,1} = \mathcal{L}\left(\mathbb{R},\mathbb{R}^4\right) = $ the vector space of linear transformations from $\mathbb{R}$ to $\mathbb{R}^4$
- $M_{2,2} = \mathcal{L}\left(\mathbb{R}^2,\mathbb{R}^2\right) = $ the vector space of linear transformations from $\mathbb{R}^2$ to $\mathbb{R}^2$
- $\mathbb{R}^{2\times 2} = $ the vector space of $2\times 2$ matrices
- $M_{1,4} = \mathcal{L}\left(\mathbb{R}^4,\mathbb{R}\right) = $ the vector space of linear transformations from $\mathbb{R}^4$ to $\mathbb{R} = $ the vector space of linear functionals defined on $\mathbb{R}^4$.
- $\mathbb{R}^{1\times 4} = $ the vector space of $1\times 4$ matrices
- $P_4 = $ the vector space of polynomials of degree less than 4.
- $V = \left\{\langle x_1, x_2, x_3, x_4, 0 \rangle\,:\, x_1, x_2, x_3, x_4\in\mathbb{R}\right\} = $ a four dimensional subspace of $\mathbb{R}^5$
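One of these isomorphisms is easy to realize in code: reshaping a NumPy array identifies $\mathbb{R}^4$ with $\mathbb{R}^{2\times 2}$ (a sketch assuming NumPy):

```python
import numpy as np

# Reshaping is linear with a linear inverse, so it is an isomorphism
v = np.array([1.0, 2.0, 3.0, 4.0])
A = v.reshape(2, 2)     # the matrix [[1. 2.] [3. 4.]]
print(A)
print(A.reshape(4))     # back to v, so the correspondence is invertible
```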
Example 2¶
The following vector spaces, each of dimension $n$, are isomorphic:
- $\mathbb{R}^n = $ $n$-dimensional Euclidean space
- $\mathbb{R}^{n\times 1} = $ the vector space of $n\times 1$ matrices, or column vectors of numbers on $\mathbb{R}$
- $P_n = $ the vector space of polynomials of degree less than $n$
- $M_{1,n} = \mathcal{L}\left(\mathbb{R}^n,\mathbb{R}\right) = $ the vector space of linear functionals defined on $\mathbb{R}^n$
- $\mathbb{R}^{1\times n} = $ the vector space of $1\times n$ matrices

Likewise $M_{m,n} = \mathcal{L}\left(\mathbb{R}^n,\mathbb{R}^m\right)$, the vector space of linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$, is isomorphic to $\mathbb{R}^{m\times n}$, the vector space of $m\times n$ matrices, since both have dimension $mn$.
Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0