Math 511: Linear Algebra
4.2 Vector Spaces
4.2.1 Vector Spaces¶
$$ \require{color} \definecolor{brightblue}{rgb}{.267, .298, .812} \definecolor{darkblue}{rgb}{0.0, 0.0, 1.0} \definecolor{palepink}{rgb}{1, .73, .8} \definecolor{softmagenta}{rgb}{.99,.34,.86} \definecolor{blueviolet}{rgb}{.537,.192,.937} \definecolor{jonquil}{rgb}{.949,.792,.098} \definecolor{shockingpink}{rgb}{1, 0, .741} \definecolor{royalblue}{rgb}{0, .341, .914} \definecolor{alien}{rgb}{.529,.914,.067} \definecolor{crimson}{rgb}{1, .094, .271} \def\ihat{\mathbf{\hat{\unicode{x0131}}}} \def\jhat{\mathbf{\hat{\unicode{x0237}}}} \def\khat{\mathrm{\hat{k}}} \def\tombstone{\unicode{x220E}} \def\contradiction{\unicode{x2A33}} $$
In this section, we will formally define one of the central concepts of linear algebra: the vector space. Everything we have worked with so far, and everything we will work with in this course, is directly related to vector spaces. We study
- vectors in a vector space,
- a linear transformation from one vector space to another vector space,
- vector spaces as mathematical objects.
Vector spaces are best understood first from the perspective of examples we are familiar with, before we expand that understanding to a broader class of vector spaces.
We are going to discuss several vector space examples, formally define vector spaces, then discuss more general examples that fit the definition of a vector space. Recall what Sanderson calls the physics student perspective, the mathematician's perspective, and the computer science student perspective of a vector. We'll be referring to these ideas, but using the mathematician's perspective in this section.
Start at time mark 20:20 in the following video:
4.2.2 The Definition of a Vector Space¶
Any set of things that has a reasonable definition for adding two things in the set to get another thing in the set, and for multiplying things in the set by scalars to get more things in the set, is said to have the closure properties of vector addition and scalar multiplication. Moreover, we can create linear combinations of the things to get more things in our set.
For this set to be a Vector Space, the operations of vector addition and scalar multiplication must satisfy the same ten properties that Theorem 4.2 gives for $\mathbb{R}^n$. These additional properties do not need to be proved because they are the definition of a vector space.
We call this type of definition an abstraction because we take properties of a familiar mathematical object, the vectors in $\mathbb{R}^n$, and use them as the definition, or axioms, for a similar set of mathematical objects.
If these new things (mathematical objects) can be added together and multiplied by scalars so that these axioms are true, then we call the set of objects with their operations of addition and scalar multiplication a Vector Space. In the mathematician perspective, a vector space is both simpler and more abstract than the vector space $\mathbb{R}^n$.
We begin with a collection or set $V$ of some mathematical objects $\mathbf{v}$.
Then we define a reasonable idea of scalar multiplication $\alpha\mathbf{v}$ and addition $\mathbf{v}+\mathbf{w}$ for these mathematical objects in $V$.
If this set of objects $V$, scalar multiplication of these objects, and addition of these objects satisfy the Vector Space Axioms, then we say that $(V,+,\cdot)$ is a vector space.
Notice that the mathematician requires three definitions:
- a collection of objects $V$
- a way to multiply elements of $V$ by scalars and get another element of $V$
- a way to add two elements of $V$ and get another element of $V$.
The last two definitions are called the closure properties of a vector space $V$.
Definition 4.1 The Axioms of a Vector Space¶
Let $V$ be a set on which two operations, vector addition and scalar multiplication, are defined. If the listed axioms are satisfied for every element $\mathbf{u}\in V$, $\mathbf{v}\in V$, and $\mathbf{w}\in V$, and every scalar $c$ and $d$, then $(V,+,\cdot)$ is a Vector Space.
1. $\mathbf{u} + \mathbf{v}\in V$
2. $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
3. $\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
4. $V$ has a vector $\mathbf{0}$ so that $\mathbf{u} + \mathbf{0} = \mathbf{u}$
5. For every vector $\mathbf{u}\in V$, there is a vector $-\mathbf{u}\in V$ so that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
6. $c\mathbf{u}\in V$
7. $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
8. $(c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}$
9. $c(d\mathbf{u}) = (cd)\mathbf{u}$
10. $1\mathbf{u} = \mathbf{u}$
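To make the axioms concrete, here is a small numeric spot-check of them for $\mathbb{R}^2$, written in plain Python. This is an illustration only, and not part of the formal development: a passing check is not a proof, but it shows what each axiom asserts. (Axioms 1 and 6, the closure properties, hold by construction here since `add` and `scale` always return pairs.)

```python
# Numerically spot-check the vector space axioms for R^2,
# representing vectors as 2-element tuples of floats.

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

u, v, w = (1.0, 2.0), (-3.0, 0.5), (4.0, -1.0)
c, d = 2.0, -0.5
zero = (0.0, 0.0)
neg_u = scale(-1.0, u)

assert add(u, v) == add(v, u)                                # 2. commutativity
assert add(u, add(v, w)) == add(add(u, v), w)                # 3. associativity
assert add(u, zero) == u                                     # 4. additive identity
assert add(u, neg_u) == zero                                 # 5. additive inverse
assert scale(c, add(u, v)) == add(scale(c, u), scale(c, v))  # 7. distributivity
assert add(scale(c, u), scale(d, u)) == scale(c + d, u)      # 8. distributivity
assert scale(c, scale(d, u)) == scale(c * d, u)              # 9. compatibility
assert scale(1.0, u) == u                                    # 10. identity scalar
print("all axiom checks passed")
```

The scalars chosen here are exact in binary floating point, so the comparisons are exact; with arbitrary floats one would compare within a tolerance instead.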
We will use boldface characters like $\mathbf{x}$, $\mathbf{y}$, $\mathbf{u}$, and $\mathbf{v}$ to represent vectors and typically use Greek letters such as $\alpha$, $\beta$, and $\gamma$ to represent scalars. Typically, scalar quantities will be real numbers, but they may also be complex numbers. A vector space whose associated set of scalars is the real numbers is referred to as a vector space over the real numbers or a real vector space. A complex vector space would feature the complex numbers as its scalars. Also, the $\mathbf{0}$ symbol is used to represent the zero vector, which is distinct from the scalar $0$. For example, in $\mathbb{R}^2$
$$ \mathbf{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \neq 0. $$
The goal of the mathematician perspective is to take advantage of abstraction and study the properties of vector spaces without worrying about the specific details of a particular vector space. If we can show that something is true for a general vector space, then it must be true for ALL vector spaces. Additionally, if we can show that something is closed under scalar multiplication and vector addition and satisfies the above axioms, it is a vector space and we inherit ALL of the vast and powerful mathematical machinery of vector spaces. No extra work required.
4.2.3 The Vector Space $\mathbb{R}^{m\times n}$¶
We spent much time discussing the operations and properties of $m\times n$ matrices in Chapters 1 and 2. The reason is that the $m\times n$ matrices with real entries, $\mathbb{R}^{m\times n}$, form a vector space. In Section 1.4, Theorem 1.4.1 spells out both scalar multiplication and vector addition for these matrices. In fact, the operations as defined there work for $\mathbb{R}^n$ as well, if we treat vectors as $n\times 1$ matrices.
We can scale an $m\times n$ matrix by multiplying it by a scalar. As with vectors, this multiplies every element of the matrix by the scalar. $$ \begin{align*} \alpha A &= \alpha \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix} \\ \\ &= \begin{bmatrix} \alpha a_{11} & \alpha a_{12} & \alpha a_{13} & \cdots & \alpha a_{1n} \\ \alpha a_{21} & \alpha a_{22} & \alpha a_{23} & \cdots & \alpha a_{2n} \\ \alpha a_{31} & \alpha a_{32} & \alpha a_{33} & \cdots & \alpha a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \alpha a_{m1} & \alpha a_{m2} & \alpha a_{m3} & \cdots & \alpha a_{mn} \end{bmatrix} \\ \\ &= \alpha\begin{bmatrix} a_{ij} \end{bmatrix} = \begin{bmatrix} \alpha a_{ij} \end{bmatrix} \end{align*} $$
By the way, the first two expressions show the full element-by-element computation I was required to write out for every problem. You should write the same algebra using the compact notation of the last equation. That's so much better!
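The compact rule $\alpha\begin{bmatrix} a_{ij} \end{bmatrix} = \begin{bmatrix} \alpha a_{ij} \end{bmatrix}$ translates directly into a few lines of Python. This sketch is purely illustrative (the course text does not prescribe software), with a matrix stored as a list of rows:

```python
# Elementwise scalar multiplication alpha * A for a matrix stored as a
# list of rows, mirroring the compact rule alpha[a_ij] = [alpha a_ij].

def scalar_mult(alpha, A):
    return [[alpha * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
print(scalar_mult(2, A))  # [[2, 4, 6], [8, 10, 12]]
```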
We also add $m\times n$ matrices element-by-element.
$$
\begin{align*}
A + B &= \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix} +
\begin{bmatrix} b_{11} & b_{12} & b_{13} & \cdots & b_{1n} \\
b_{21} & b_{22} & b_{23} & \cdots & b_{2n} \\
b_{31} & b_{32} & b_{33} & \cdots & b_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b_{m1} & b_{m2} & b_{m3} & \cdots & b_{mn} \end{bmatrix} \\
\\
&= \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & a_{13} + b_{13} & \cdots & a_{1n} + b_{1n} \\
a_{21} + b_{21} & a_{22} + b_{22} & a_{23} + b_{23} & \cdots & a_{2n} + b_{2n} \\
a_{31} + b_{31} & a_{32} + b_{32} & a_{33} + b_{33} & \cdots & a_{3n} + b_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} + b_{m1} & a_{m2} + b_{m2} & a_{m3} + b_{m3} & \cdots & a_{mn} + b_{mn} \end{bmatrix} \\
\\
&= \begin{bmatrix} a_{ij} \end{bmatrix} + \begin{bmatrix} b_{ij} \end{bmatrix} = \begin{bmatrix} a_{ij} + b_{ij} \end{bmatrix} \\
\end{align*}
$$
Again, isn't the last line so much nicer?
A linear combination of two $m\times n$ matrices yields $$ \alpha A + \beta B = \alpha\begin{bmatrix} a_{ij} \end{bmatrix} + \beta\begin{bmatrix} b_{ij} \end{bmatrix} = \begin{bmatrix} \alpha a_{ij} \end{bmatrix} + \begin{bmatrix} \beta b_{ij} \end{bmatrix} = \begin{bmatrix} \alpha a_{ij} + \beta b_{ij} \end{bmatrix} $$
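The entrywise rule for linear combinations can likewise be sketched in plain Python (an illustration only; matrices are lists of rows as before):

```python
# Linear combination alpha*A + beta*B computed entry by entry,
# mirroring [alpha a_ij + beta b_ij].

def lin_comb(alpha, A, beta, B):
    return [[alpha * a + beta * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
print(lin_comb(2, A, 3, B))  # [[2, 3], [3, 2]]
```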
We can't draw arrows in 47-dimensional Euclidean space $\mathbb{R}^{47}$. Nor can we effectively draw the graph of a linear transformation from $\mathbb{R}^4$ to $\mathbb{R}^4$, that is, of a $4\times 4$ matrix. The best we can do is use our intuition from arrows in the two- and three-dimensional vector spaces $\mathbb{R}^2$ and $\mathbb{R}^3$.
We can also draw regions like a unit cube or unit sphere in $\mathbb{R}^2$ and $\mathbb{R}^3$ and graph the change in shape and volume caused by a linear transformation in the vector spaces of $2\times 2$ and $3\times 3$ matrices.
Exercise 1¶
Consider the $2\times 2$ matrix $A = \begin{bmatrix} 3 & 1 \\ -1 & 2 \end{bmatrix}$. Draw the unit square $S$ on a plane $\mathbb{R}^2$. Then on another plane draw the image of the unit square $A(S)$.
View Solution
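For a quick numeric check of Exercise 1 (a sketch, not the drawn solution), one can apply $A$ to the corners of the unit square; the image $A(S)$ is the parallelogram spanned by the images of the standard basis vectors, which are the columns of $A$. Python is used here purely for illustration:

```python
# Apply A to the corners of the unit square S; the images are the
# corners of the parallelogram A(S).

A = [[3, 1],
     [-1, 2]]

def apply(A, v):
    # matrix-vector product for a 2x2 matrix and a pair
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
print([apply(A, c) for c in corners])
# [(0, 0), (3, -1), (4, 1), (1, 2)]
```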
4.2.4 The Vector Spaces of Polynomials¶
Sanderson shows how a linear transformation works in the context of an abstract vector space, in particular the vector space formed by polynomials. What he doesn't do in that video is show that the polynomials form a vector space. Let's do that as an example. It suffices to show algebraically that the closure properties and axioms are satisfied.
Example 1¶
Show that the set $P$ of all polynomials is a vector space.
Solution¶
We begin by showing that both scalar multiplication and vector addition satisfy the closure properties.
Suppose that vector $\mathbf{p}\in P$ is the polynomial
$$
p(x) = p_m x^m + p_{m-1} x^{m-1} + \ldots + p_2 x^2 + p_1 x + p_0
$$
and vector $\mathbf{q}\in P$ is the polynomial
$$
q(x) = q_n x^n + q_{n-1} x^{n-1} + \ldots + q_2 x^2 + q_1 x + q_0
$$
where $m = n$.
We can assume $m = n$ without loss of generality. All computations would follow almost exactly if $m < n$ or $m > n$; in those cases we simply pad the lower-degree polynomial with higher-order zero terms so that $m=n$.
Proof:¶
Let $\mathbf{p}$, $\mathbf{q}$, and $\mathbf{r}$ be polynomials in $P$, and let $c$ and $d$ be any scalars. Then for every $x\in\mathbb{R}$, we have
$(p + q)(x) = (p_m + q_m)x^m + \ldots + (p_2 + q_2)x^2 + (p_1 + q_1)x + (p_0 + q_0)$ so $\mathbf{p} + \mathbf{q}\in P$.
$\mathbf{p} + \mathbf{q} = \mathbf{q} + \mathbf{p}$
$$ \begin{align*} p(x) + q(x) &= \left(p_m x^m + \ldots + p_1 x + p_0\right) + \left( q_m x^m + \ldots + q_1 x + q_0 \right) \\ \\ &= \left(p_m + q_m\right)x^m + \ldots + \left(p_1 + q_1\right)x + \left(p_0 + q_0\right) \\ \\ &= \left(q_m + p_m\right)x^m + \ldots + \left(q_1 + p_1\right)x + \left(q_0 + p_0\right) \\ \\ &= \left( q_m x^m + \ldots + q_1 x + q_0 \right) + \left(p_m x^m + \ldots + p_1 x + p_0\right) \\ \\ &= q(x) + p(x) \end{align*} $$

$\mathbf{p} + (\mathbf{q} + \mathbf{r}) = (\mathbf{p} + \mathbf{q}) + \mathbf{r}$
$$ \begin{align*} p(x) + (q(x) + r(x)) &= \left(p_m x^m + \ldots + p_1 x + p_0\right) + \\ &\qquad \left(\left( q_m x^m + \ldots + q_1 x + q_0 \right) + \left( r_m x^m + \ldots + r_1 x + r_0 \right)\right) \\ &= \left(p_m x^m + \ldots + p_1 x + p_0\right) + \left( (q_m + r_m) x^m + \ldots + (q_1 + r_1)x + (q_0 + r_0) \right) \\ &= \left(p_m + (q_m + r_m)\right) x^m + \ldots + \left(p_1 + (q_1 + r_1)\right)x + \left(p_0 + (q_0 + r_0) \right) \\ &= \left((p_m + q_m) + r_m\right) x^m + \ldots + \left((p_1 + q_1) + r_1\right)x + \left((p_0 + q_0) + r_0 \right) \\ &= \left((p_m + q_m) x^m + \ldots + (p_1 + q_1)x + (p_0 + q_0)\right) + \left( r_m x^m + \ldots + r_1 x + r_0 \right) \\ &= \left(\left(p_m x^m + \ldots + p_1 x + p_0\right) + \left( q_m x^m + \ldots + q_1 x + q_0 \right)\right) \\ &\qquad + \left( r_m x^m + \ldots + r_1 x + r_0 \right) \\ &= (p(x) + q(x)) + r(x) \end{align*} $$

The zero polynomial $0(x) = 0$ is the zero vector $\mathbf{0}\in P$, and $\mathbf{p}+ \mathbf{0} = \mathbf{p}$.
$$ p(x) + 0(x) = \left(p_m x^m + \ldots + p_1 x + p_0\right) + 0 = p_m x^m + \ldots + p_1 x + p_0 = p(x) $$

Every polynomial $\mathbf{p}\in P$ has a vector $-\mathbf{p}$ so that $\mathbf{p} + (-\mathbf{p}) = \mathbf{0}$.
$$ \begin{align*} p(x) + (-p(x)) &= \left(p_m x^m + \ldots + p_1 x + p_0\right) + \left(-p_m x^m - \ldots - p_1 x - p_0\right) \\ &= \left(p_m - p_m\right) x^m + \ldots + \left(p_1 - p_1\right)x + \left(p_0 - p_0\right) \\ &= 0x^m + \ldots + 0x + 0 = 0 = 0(x) \end{align*} $$
The remaining axioms hold as well; verifying them is left as an exercise for the reader.
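The without-loss-of-generality padding step used above is easy to mechanize. As an informal illustration (not part of the proof), here is a plain Python sketch representing a polynomial by its list of coefficients $[p_0, p_1, \ldots, p_m]$:

```python
# Polynomials as coefficient lists [p0, p1, ..., pm]; padding with
# zeros realizes the "without loss of generality m = n" step, and
# addition and scaling act coefficientwise.

def pad(p, n):
    return p + [0] * (n - len(p))

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = pad(p, n), pad(q, n)
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    return [c * a for a in p]

p = [1, 2]       # 1 + 2x
q = [0, 1, 3]    # x + 3x^2
print(poly_add(p, q))    # [1, 3, 3]  i.e. 1 + 3x + 3x^2
print(poly_scale(2, p))  # [2, 4]     i.e. 2 + 4x
```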
This means that many of the useful notions of linear algebra, such as linear transformations (operations), dot (inner) products, and eigenvectors (eigenfunctions) apply to the vector space of polynomials $P$.
This is the value of the mathematician picture of vector spaces. While showing that a particular mathematical object obeys the axioms may be tedious (as we see above), it is not difficult as long as you have a feel for how the algebraic manipulations of scalar multiplication and vector addition on a set of objects work.
You only ever need to prove that a particular set of objects is a vector space once, and you then inherit all of your knowledge of linear algebra for that set of objects. The inclusion of the polynomial example here was deliberate: it is a lot of work, but that work is informative.
4.2.5 Other Examples of Vector Spaces¶
There are many vector spaces out there, which is why studying them is so important. Of course, we have focused on linear transformations represented by $\mathbb{R}^{m\times n}$ matrices, but keep reminding yourself that other objects that satisfy the vector space axioms are also vector spaces and have the same properties.
To that end, familiarize yourself with some other vector spaces by showing that they follow the axioms of a vector space.
Polynomials of Degree Less Than $n$¶
The set $P_n$ is the set of polynomials $p\in P$ of degree less than $n$, that is, of the form
$$ p(x) = p_{n-1} x^{n-1} + p_{n-2} x^{n-2} + \ldots + p_2 x^2 + p_1 x + p_0. $$
Notice that any polynomial in $P_n$ can be completely described using its coefficients. Since we only need to know the $n$ coefficients, we can think of an element of the vector space $P_n$ as a list of numbers.
$$ p = \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \\ p_{n-1} \end{bmatrix} $$
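Since the $n$ coefficients determine $p$ completely, the polynomial can be evaluated straight from its coefficient list. As a brief aside (illustrative Python, not used elsewhere in the text), Horner's rule does this efficiently:

```python
# Evaluate p in P_n at x from its coefficient vector [p0, ..., p_{n-1}]
# using Horner's rule; the list of n coefficients determines p completely.

def poly_eval(coeffs, x):
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

p = [1, 0, 2]           # 1 + 2x^2, an element of P_3
print(poly_eval(p, 3))  # 19
```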
Exercise 2¶
Show that $P_n$, the set of polynomials of degree less than $n$, is a vector space.
View Solution
Let $\mathbf{p}$, $\mathbf{q}$, and $\mathbf{r}$ be any polynomials in $P_n$, and let $c$ and $d$ be any scalars.
1. $\mathbf{p} + \mathbf{q}\in P_n$
$$ \begin{align*} p(x) + q(x) &= ( p_{n-1}x^{n-1} + \ldots + p_2 x^2 + p_1 x + p_0 ) + ( q_{n-1}x^{n-1} + \ldots + q_2 x^2 + q_1 x + q_0 ) \\ &= ( p_{n-1} + q_{n-1} )x^{n-1} + \ldots + (p_2 + q_2)x^2 + (p_1 + q_1)x + (p_0 + q_0) \\ &= (p + q)(x) \end{align*} $$
2. $\mathbf{p} + \mathbf{q} = \mathbf{q} + \mathbf{p}$
$$ \begin{align*} p(x) + q(x) &= \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_{n-1} \end{bmatrix} + \begin{bmatrix} q_0 \\ q_1 \\ \vdots \\ q_{n-1} \end{bmatrix} = \begin{bmatrix} p_0 + q_0 \\ p_1 + q_1 \\ \vdots \\ p_{n-1} + q_{n-1} \end{bmatrix} \\ &= \begin{bmatrix} q_0 + p_0 \\ q_1 + p_1 \\ \vdots \\ q_{n-1} + p_{n-1} \end{bmatrix} = \begin{bmatrix} q_0 \\ q_1 \\ \vdots \\ q_{n-1} \end{bmatrix} + \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_{n-1} \end{bmatrix} \\ &= q(x) + p(x). \end{align*} $$
3. $\mathbf{p} + (\mathbf{q} + \mathbf{r}) = (\mathbf{p} + \mathbf{q}) + \mathbf{r}$
$$ \begin{align*} p(x) + (q(x) + r(x)) &= \begin{bmatrix} p_i \end{bmatrix} + \left(\begin{bmatrix} q_i \end{bmatrix} + \begin{bmatrix} r_i \end{bmatrix} \right) \\ &= \begin{bmatrix} p_i \end{bmatrix} + \begin{bmatrix} q_i + r_i \end{bmatrix} \\ &= \begin{bmatrix} p_i + (q_i + r_i) \end{bmatrix} \\ &= \begin{bmatrix} (p_i + q_i) + r_i \end{bmatrix} \\ &= \begin{bmatrix} p_i + q_i \end{bmatrix} + \begin{bmatrix} r_i \end{bmatrix} \\ &= \left( \begin{bmatrix} p_i \end{bmatrix} + \begin{bmatrix} q_i \end{bmatrix} \right) + \begin{bmatrix} r_i \end{bmatrix} \\ &= \left( p(x) + q(x) \right) + r(x) \end{align*} $$
The reader should finish the rest of the proof.
4.2.6 Continuous Functions on a Closed Interval¶
We use the notation $C[a,b]$ to denote the set of functions that are continuous on the closed interval $[a,b]$. If we suppose, for simplicity, that the functions $f,g\in C[a,b]$ are real-valued and $\alpha$ is a real scalar, then we may talk about the operations of scalar multiplication and vector addition on this space. They are defined in the typical way.
Vector addition:
$$
(f + g)(x) := f(x) + g(x),\ \text{for every $x\in[a,b]$}
$$
Scalar multiplication:
$$
(\alpha f)(x) := \alpha\,f(x),\ \text{for every $x\in[a,b]$}
$$
These definitions clearly give us the required closure properties, which we learned in our differential calculus class:
- the sum of two continuous functions is a continuous function
- the product of a scalar and a continuous function is a continuous function
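These two operations translate directly into code. As an informal illustration in Python (using `math.sin` and `math.cos` as stand-ins for elements of $C[a,b]$; this is a sketch, not part of the formal development):

```python
# The operations on C[a,b] realized as Python closures:
# (f + g)(x) = f(x) + g(x)  and  (alpha f)(x) = alpha * f(x).
import math

def add_fn(f, g):
    return lambda x: f(x) + g(x)

def scale_fn(alpha, f):
    return lambda x: alpha * f(x)

h = add_fn(math.sin, math.cos)   # h(x) = sin x + cos x
k = scale_fn(2.0, math.sin)      # k(x) = 2 sin x

print(h(0.0))            # 1.0  (sin 0 + cos 0)
print(k(math.pi / 2))    # 2.0
```

Sums and scalar multiples of continuous functions are again continuous, so `h` and `k` are again elements of $C[a,b]$.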
Exercise 3¶
Show that $C[a,b]$ is a vector space.
View Solution
Let $f$, $g$, and $h$ be functions continuous on the interval $[a,b]$, and $c$ and $d$ be any scalars. Then
1. $f + g\in C[a,b]$
The function $f+g$ is continuous on the interval $[a,b]$ because functions $f$ and $g$ are continuous on the interval $[a,b]$.
2. $f + g = g + f$
$$ \begin{align*} (f + g)(x) &= f(x) + g(x) = g(x) + f(x) = (g + f)(x) \end{align*} $$
Since this equation is true for every $x\in[a,b]$, $f + g = g + f$ in $C[a,b]$.
3. $f + (g + h) = (f + g) + h$
$$ \begin{align*} \Big(f + (g + h)\Big)(x) &= f(x) + \Big( g(x) + h(x) \Big) \\ &= \Big( f(x) + g(x) \Big) + h(x) \\ &= \Big( (f + g) + h \Big)(x) \end{align*} $$
Since this equation is true for every $x\in[a,b]$, $(f + g) + h = f + (g + h)$ in $C[a,b]$.
The reader should finish this proof.
4.2.7 Complex Numbers $\mathbb{C}$¶
Complex numbers $\mathbb{C}$ also form a vector space. We've not discussed it explicitly, but so do the real numbers $\mathbb{R}$. The fact that real numbers are a vector space should not be surprising, since we can think of $\mathbb{R}$ as $\mathbb{R}^1$ and we've shown that all $\mathbb{R}^n$ (even $n=1$) form vector spaces.
The mechanics for showing that $\mathbb{C}$ is a vector space are pretty straightforward, with one twist. Instead of using the real numbers as our scalars, we use the complex numbers. Let $x,y,\zeta\in\mathbb{C}$ be given by
$$
x = a + bi \qquad\qquad y = c + di \qquad\qquad \zeta = \alpha + \beta i
$$
where $a,b,c,d,\alpha,\beta$ are all real numbers and $i = \sqrt{-1}$ is the imaginary unit. (We'll be using the Greek letter $\zeta$ (zeta) as our scalar.) Our operation of scalar multiplication
$$
\begin{align*}
\zeta x &= (\alpha + \beta i)(a + bi) \\
\\
&= \alpha (a + bi) + \beta i (a + bi) \\
\\
&= \alpha a + \alpha b i + \beta a i + \beta b i^2 \\
\\
&= \alpha a + \alpha b i + \beta a i + \beta b (-1) \\
\\
&= \alpha a - \beta b + \alpha b i + \beta a i \\
\\
&= \left( \alpha a - \beta b \right) + \left( \alpha b + \beta a \right)i \in\mathbb{C}
\end{align*}
$$
is closed. It is much easier to show closure for vector addition.
$$
\begin{align*}
x + y &= (a + bi) + (c + di) \\
\\
&= a + bi + c + di \\
\\
&= a + c + bi + di \\
\\
&= (a + c) + (b + d)i\in\mathbb{C}
\end{align*}
$$
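Python's built-in `complex` type can be used to double-check both closure computations. This is an illustration only; the variable names below are ad hoc:

```python
# Check the closure computations for C: the product zeta*x and the
# sum x + y of complex numbers are again complex numbers, with the
# real and imaginary parts derived above.

a, b = 1.0, 2.0          # x = a + bi
c, d = 3.0, -1.0         # y = c + di
alpha, beta = 0.5, 4.0   # zeta = alpha + beta*i

x, y, zeta = complex(a, b), complex(c, d), complex(alpha, beta)

# scalar multiplication: (alpha a - beta b) + (alpha b + beta a) i
assert zeta * x == complex(alpha * a - beta * b, alpha * b + beta * a)
# vector addition: (a + c) + (b + d) i
assert x + y == complex(a + c, b + d)
print(zeta * x, x + y)
```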
Exercise 4¶
Show that $\mathbb{C}$ is a vector space.
4.2.8 Properties of Scalar Multiplication¶
Theorem 4.2.1¶
If $\mathbf{v}\in V$ is a vector in vector space $V$, and $c$ is any scalar, then
1. $0\mathbf{v} = \mathbf{0}$
2. $c\mathbf{0} = \mathbf{0}$
3. If $c\mathbf{v} = \mathbf{0}$, then $c=0$ or $\mathbf{v}=\mathbf{0}$
4. $(-1)\mathbf{v} = -\mathbf{v}$
Proof:¶
We can use the fact that $\mathbf{0} + \mathbf{0} = \mathbf{0}$ from Axiom 4 to show that
$$ \begin{align*} 0\mathbf{v} &= (0 + 0)\mathbf{v} \qquad &\text{property of real numbers} \\ 0\mathbf{v} &= 0\mathbf{v} + 0\mathbf{v} \qquad &\text{Axiom 8} \\ 0\mathbf{v} + (-0\mathbf{v}) &= \left( 0\mathbf{v} + 0\mathbf{v}\right) + (-0\mathbf{v}) \qquad &\text{Axiom 1} \\ \mathbf{0} &= \left( 0\mathbf{v} + 0\mathbf{v}\right) + (-0\mathbf{v}) \qquad &\text{Axiom 5} \\ \mathbf{0} &= 0\mathbf{v} + \left( 0\mathbf{v} + (-0\mathbf{v})\right) \qquad &\text{Axiom 3} \\ \mathbf{0} &= 0\mathbf{v} + \mathbf{0} \qquad &\text{Axiom 5} \\ \mathbf{0} &= 0\mathbf{v} \qquad &\text{Axiom 4} \end{align*} $$

The proof for identity 2 is found in the textbook, and the reader should prove identities 3 and 4.
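As a numeric sanity check (not a proof), the identities of Theorem 4.2.1 can be verified in $\mathbb{R}^3$ with a few lines of illustrative Python:

```python
# Numeric illustration of Theorem 4.2.1 in R^3: 0*v is the zero
# vector, c*0 is the zero vector, and (-1)*v is the additive inverse.

def scale_v(c, v):
    return [c * x for x in v]

def add_v(u, v):
    return [a + b for a, b in zip(u, v)]

v = [3.0, -1.5, 2.0]
zero = [0.0, 0.0, 0.0]

assert scale_v(0, v) == zero                # identity 1: 0v = 0
assert scale_v(7.0, zero) == zero           # identity 2: c0 = 0
assert add_v(v, scale_v(-1, v)) == zero     # identity 4: (-1)v = -v
print("theorem checks pass")
```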
4.2.9 Abstract Vector Spaces¶
We study linear algebra because of abstract vector spaces; they are the important vector spaces to understand. We draw our intuition about linear combinations, vector spaces, and linear transformations from our understanding of the vector spaces $\mathbb{R}^2$, $\mathbb{R}^3$, and $\mathbb{R}^n$. These examples are only the launching point of a subject that allows us to consider many mathematical models as vector spaces. This gives us the ability to
- perform algebraic manipulations of the objects in our mathematical models
- let computers perform all of the tedious arithmetic necessary for modern applications
4.2.10 Examples of Abstract Vector Spaces¶
We need to recognize vector spaces in our mathematical models and create mathematical models utilizing vector spaces. This chapter is a lot of work, and worth every minute. Mastery of vector spaces and subspaces will make the rest of the course much easier to understand.
Summary¶
Some of the important vector spaces to know:
$\mathbb{R} =\,$ the vector space of real numbers
$\mathbb{R}^2 =\,$ the vector space of 2-tuples
$\mathbb{R}^3 =\,$ the vector space of 3-tuples
$\mathbb{R}^n =\,$ the vector space of $n$-tuples
$P =\,$ the vector space of all polynomials
$P_n=\,$ the vector space of polynomials of degree less than $n$
$\mathbb{R}^{m\times n} = M_{m,n} = M_{m,n}(\mathbb{R}) =\,$ the vector space of $m\times n$ real-valued matrices
$\mathbb{R}^{n\times n} = M_{n,n} = M_{n,n}(\mathbb{R}) =\,$ the vector space of all square $n\times n$ real-valued matrices
$\mathbb{C}^{m\times n} = M_{m,n}(\mathbb{C}) =\,$ the vector space of all $m\times n$ complex-valued matrices
$\mathbb{C}^{n\times n} = M_{n,n}(\mathbb{C}) =\,$ the vector space of all square $n\times n$, complex-valued matrices
$C(-\infty, \infty) =\,$ the vector space of all continuous functions defined on the real number line
$C[a,b] =\,$ the vector space of continuous functions defined on the closed interval $[a,b]$, where $a\neq b$
$C^1[a,b] =\,$ the vector space of continuously differentiable functions on the interval $[a,b]$, where $a\neq b$
$C^2[a,b] =\,$ the vector space of twice continuously differentiable functions on the interval $[a,b]$, where $a\neq b$
$C^n[a,b] =\,$ the vector space of $n$ times continuously differentiable functions on the interval $[a,b]$, where $a\neq b$
$C^{\infty}[a,b] =\,$ the vector space of functions on the interval $[a,b]$, where $a\neq b$, whose $k^{\text{th}}$ derivative exists and is continuous for every positive integer $k$
$\mathbb{R}^{2n} =\,$ the vector space of states of an object in orbit about a gravity source (typically $\mathbb{R}^6$ in our 3D-world)
$\mathcal{H} =\,$ the vector space of states of a quantum system
$\mathcal{L}\left(\mathcal{H}\right) =\,$ the vector space of linear operators on the Hilbert space $\mathcal{H}$.
Your use of this self-initiated mediated course material is subject to our Creative Commons License 4.0