
Math 511: Linear Algebra

5.5 Applications


5.5.1 Cross Product


5.5.2 The Linear Transformation Cross Product


5.5.3 Derivation of Cross Product

$\require{color}$ $\definecolor{brightblue}{rgb}{.267, .298, .812}$ $\definecolor{darkblue}{rgb}{.08, .18, .28}$ $\definecolor{palepink}{rgb}{1, .73, .8}$ $\definecolor{softmagenta}{rgb}{.99,.34,.86}$ $\def\ihat{\mathbf{\hat{\mmlToken{mi}[mathvariant="bold"]{ı}}}}$ $\def\jhat{\mathbf{\hat{\mmlToken{mi}[mathvariant="bold"]{ȷ}}}}$ $\def\khat{\mathbf{\hat{k}}}$

Definition 5.5.1 Cross Product

Let $\mathbf{u} = \langle u_1, u_2, u_3 \rangle$ and $\mathbf{v} = \langle v_1, v_2, v_3 \rangle$ be two vectors in $\mathbb{R}^3$ in standard coordinates. The cross product of these two vectors is the vector

$$ \mathbf{u}\times\mathbf{v} := (u_2v_3 - u_3v_2)\ihat - (u_1v_3 - u_3v_1)\jhat + (u_1v_2 - u_2v_1)\khat $$
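As a quick check of this componentwise formula, here is a minimal NumPy sketch (not part of the original notes; the sample vectors are arbitrary) that evaluates it directly and compares the result with `numpy.cross`.

```python
import numpy as np

def cross(u, v):
    """Cross product from the componentwise formula in Definition 5.5.1."""
    return np.array([
        u[1]*v[2] - u[2]*v[1],      # i-hat component
        -(u[0]*v[2] - u[2]*v[0]),   # j-hat component (note the minus sign)
        u[0]*v[1] - u[1]*v[0],      # k-hat component
    ])

u = np.array([1.0, 2.0, 3.0])   # arbitrary sample vectors
v = np.array([4.0, -1.0, 2.0])

print(cross(u, v))       # [ 7. 10. -9.]
print(np.cross(u, v))    # agrees with the formula above
```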

Given two vectors $\color{brightblue}{\mathbf{v}}$ and $\color{softmagenta}{\mathbf{w}}$ in $\mathbb{R}^3$, we define a linear transformation $\mathbb{R}^3\longrightarrow\mathbb{R}$ using the determinant. For every vector $\mathbf{u}\in\mathbb{R}^3$,

$$ L[\mathbf{u}] = \begin{vmatrix} u_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ u_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ u_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix}\in\mathbb{R} $$

This is a linear transformation, or linear functional, from $\mathbb{R}^3$ to $\mathbb{R}$ because of properties 3(a) and 3(b) of determinants in chapter 3. If vectors $\mathbf{x}$ and $\mathbf{y}$ are in $\mathbb{R}^3$, and $\alpha$ and $\beta$ are scalars in $\mathbb{R}$, then

$$ \begin{align*} L\left[\alpha\mathbf{x} + \beta\mathbf{y}\right] &= \begin{vmatrix} \alpha x_1 + \beta y_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \alpha x_2 + \beta y_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \alpha x_3 + \beta y_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} \\ \\ &= \begin{vmatrix} \alpha x_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \alpha x_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \alpha x_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} + \begin{vmatrix} \beta y_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \beta y_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \beta y_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} \\ \\ &= \alpha\begin{vmatrix} x_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ x_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ x_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} + \beta\begin{vmatrix} y_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ y_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ y_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} \\ \\ &= \alpha\,L[\mathbf{x}] + \beta\,L[\mathbf{y}] \end{align*} $$
That is because the determinant is multilinear. Moreover, this transformation maps every vector $\mathbf{u}\in\mathbb{R}^3$ to the signed volume of the parallelepiped with sides $\mathbf{u}$, $\color{brightblue}{\mathbf{v}}$, and $\color{softmagenta}{\mathbf{w}}$.

Figure 1: Parallelepiped with sides $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$

The sign of this volume is determined by the right-hand rule, and the size of this volume is determined by the three vectors. As we viewed in the video above, the signed volume of the parallelepiped is a scalar value that is larger when vector $\mathbf{u}$ is closer to perpendicular to the $\color{brightblue}{\mathbf{v}}\color{softmagenta}{\mathbf{w}}$-plane. The value of the volume is smaller when vector $\mathbf{u}$ is closer to parallel to the $\color{brightblue}{\mathbf{v}}\color{softmagenta}{\mathbf{w}}$-plane.
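A quick numerical illustration may help here. The NumPy sketch below (our own, not from the notes; the vectors and scalars are arbitrary) fixes $\color{brightblue}{\mathbf{v}}$ and $\color{softmagenta}{\mathbf{w}}$, builds $L$ from the determinant, and spot-checks that $L$ is linear in $\mathbf{u}$.

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)   # fixed second and third columns

def L(u):
    """Signed volume of the parallelepiped: determinant with u, v, w as columns."""
    return np.linalg.det(np.column_stack((u, v, w)))

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 2.0, -3.0

# Linearity in the first column: L[alpha x + beta y] = alpha L[x] + beta L[y]
print(np.isclose(L(alpha*x + beta*y), alpha*L(x) + beta*L(y)))   # True
```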

Since this is a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}$ it can be represented by a $1\times 3$ matrix. To find the columns of this matrix representation of our linear transformation, we merely need to determine where $\ihat$, $\jhat$, and $\khat$ land on the number line. The coordinate for the image of each is given by

$$ \begin{align*} L[\ihat] &= \begin{vmatrix} 1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ 0 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ 0 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = \begin{vmatrix} \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = a_1 \\ \\ L[\jhat] &= \begin{vmatrix} 0 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ 1 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ 0 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = -\begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = a_2 \\ \\ L[\khat] &= \begin{vmatrix} 0 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ 0 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ 1 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = \begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_2} & \color{softmagenta}{w_2} \end{vmatrix} = a_3 \end{align*} $$
Hence the columns of our matrix become

$$ A = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} $$
Notice that this matrix is the transpose, or dual, of the cross product vector ${\color{brightblue}{\mathbf{v}}}\times{\color{softmagenta}{\mathbf{w}}}$: that vector points in the correct direction, has magnitude equal to the area of the parallelogram forming the base of the parallelepiped, and has the numerical coordinates given by the formula for the cross product. In other words, the cross product vector

$$ \begin{align*} {\color{brightblue}{\mathbf{v}}}\times{\color{softmagenta}{\mathbf{w}}} &= \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = a_1\ihat + a_2\jhat + a_3\khat \\ \\ &= \begin{vmatrix} \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix}\ihat - \begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix}\jhat + \begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_2} & \color{softmagenta}{w_2} \end{vmatrix}\khat \\ \\ &= \begin{vmatrix} \ihat & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \jhat & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \khat & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} \end{align*} $$
The last expression breaks several rules for determinants (its first column contains vectors rather than scalars) and is used only to shorten our notation. The really interesting part of this derivation is the appearance of linear functionals and duality.
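To make the duality concrete, here is a small NumPy sketch (our own illustration, with arbitrary sample vectors): it builds the $1\times 3$ matrix $A$ from the three $2\times 2$ determinants above, checks that its entries are the components of $\color{brightblue}{\mathbf{v}}\times\color{softmagenta}{\mathbf{w}}$, and checks that $A\mathbf{u}$ reproduces the $3\times 3$ determinant.

```python
import numpy as np

v = np.array([1.0, 0.0, 2.0])    # arbitrary sample vectors
w = np.array([0.0, 3.0, 1.0])

# Columns of the 1x3 matrix: the images of i-hat, j-hat, k-hat under L
a1 = np.linalg.det([[v[1], w[1]], [v[2], w[2]]])
a2 = -np.linalg.det([[v[0], w[0]], [v[2], w[2]]])
a3 = np.linalg.det([[v[0], w[0]], [v[1], w[1]]])
A = np.array([[a1, a2, a3]])     # 1x3 matrix representing L

print(A.flatten(), np.cross(v, w))   # same entries: A is the transpose (dual) of v x w

u = np.array([2.0, -1.0, 5.0])
# A u equals the determinant with columns u, v, w
print(np.isclose((A @ u)[0], np.linalg.det(np.column_stack((u, v, w)))))  # True
```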

5.5.4 Properties of Cross Product

The properties of cross product are derived from the properties of determinant.

Theorem 5.5.2

Algebraic Properties of the Cross Product

If $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ are in $\mathbb{R}^3$, and $c$ is a scalar, then

  1. $\mathbf{u}\times\mathbf{v} = -\mathbf{v}\times\mathbf{u}$
  2. $\mathbf{u}\times(\mathbf{v} + \mathbf{w}) = \mathbf{u}\times\mathbf{v} + \mathbf{u}\times\mathbf{w}$
  3. $c(\mathbf{u}\times\mathbf{v}) = (c\mathbf{u})\times\mathbf{v} = \mathbf{u}\times(c\mathbf{v})$
  4. $\mathbf{u}\times\mathbf{0} = \mathbf{0}\times\mathbf{u} = \mathbf{0}$
  5. $\mathbf{u}\times\mathbf{u} = \mathbf{0}$
  6. $\mathbf{u}\cdot(\mathbf{v}\times\mathbf{w}) = (\mathbf{u}\times\mathbf{v})\cdot\mathbf{w}$

Proof:

1.

$$ \mathbf{u}\times\mathbf{v} = \begin{vmatrix} \ihat & \color{brightblue}{u_1} & \color{softmagenta}{v_1} \\ \jhat & \color{brightblue}{u_2} & \color{softmagenta}{v_2} \\ \khat & \color{brightblue}{u_3} & \color{softmagenta}{v_3} \end{vmatrix} = -\begin{vmatrix} \ihat & \color{brightblue}{v_1} & \color{softmagenta}{u_1} \\ \jhat & \color{brightblue}{v_2} & \color{softmagenta}{u_2} \\ \khat & \color{brightblue}{v_3} & \color{softmagenta}{u_3} \end{vmatrix} = -\mathbf{v}\times\mathbf{u} $$


because switching one pair of columns changes the sign of the determinant.

4.
$$ \mathbf{u}\times\mathbf{0} = \begin{vmatrix} \ihat & \color{brightblue}{u_1} & \color{softmagenta}{0} \\ \jhat & \color{brightblue}{u_2} & \color{softmagenta}{0} \\ \khat & \color{brightblue}{u_3} & \color{softmagenta}{0} \end{vmatrix} = \mathbf{0} = \begin{vmatrix} \ihat & \color{brightblue}{0} & \color{softmagenta}{u_1} \\ \jhat & \color{brightblue}{0} & \color{softmagenta}{u_2} \\ \khat & \color{brightblue}{0} & \color{softmagenta}{u_3} \end{vmatrix} = \mathbf{0}\times\mathbf{u} $$


because each matrix has a column of zeros.

6.

The dot product of a vector with the cross product of two other vectors in $\mathbb{R}^3$ is called the triple product. The triple product is the signed volume of the parallelepiped described by the three vectors.

$$ \begin{align*} \mathbf{u}\cdot({\color{brightblue}{\mathbf{v}}}\times{\color{softmagenta}{\mathbf{w}}}) &= \mathbf{u}\cdot\begin{vmatrix} \ihat & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \jhat & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \khat & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} = \mathbf{u}\cdot\left(\,\begin{vmatrix} \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix}\ihat - \begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix}\jhat + \begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_2} & \color{softmagenta}{w_2} \end{vmatrix}\khat\, \right) \\ \\ &= u_1\begin{vmatrix} \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} - u_2\begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} + u_3\begin{vmatrix} \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ \color{brightblue}{v_2} & \color{softmagenta}{w_2} \end{vmatrix} \\ \\ &= \begin{vmatrix} u_1 & \color{brightblue}{v_1} & \color{softmagenta}{w_1} \\ u_2 & \color{brightblue}{v_2} & \color{softmagenta}{w_2} \\ u_3 & \color{brightblue}{v_3} & \color{softmagenta}{w_3} \end{vmatrix} \\ \\ &= {\color{softmagenta}{w_1}}\begin{vmatrix} u_2 & \color{brightblue}{v_2} \\ u_3 & \color{brightblue}{v_3} \end{vmatrix} - {\color{softmagenta}{w_2}}\begin{vmatrix} u_1 & \color{brightblue}{v_1} \\ u_3 & \color{brightblue}{v_3} \end{vmatrix} + {\color{softmagenta}{w_3}}\begin{vmatrix} u_1 & \color{brightblue}{v_1} \\ u_2 & \color{brightblue}{v_2} \end{vmatrix} \\ \\ &= {\color{softmagenta}{\mathbf{w}}}\cdot\left(\,\begin{vmatrix} u_2 & \color{brightblue}{v_2} \\ u_3 & \color{brightblue}{v_3} \end{vmatrix}\ihat - \begin{vmatrix} u_1 & \color{brightblue}{v_1} \\ u_3 & \color{brightblue}{v_3} \end{vmatrix}\jhat + \begin{vmatrix} u_1 & \color{brightblue}{v_1} \\ u_2 & \color{brightblue}{v_2} \end{vmatrix}\khat\,\right) \\ \\ &= {\color{softmagenta}{\mathbf{w}}}\cdot\begin{vmatrix} \ihat & u_1 & \color{brightblue}{v_1} \\ \jhat & u_2 & \color{brightblue}{v_2} \\ \khat & u_3 & \color{brightblue}{v_3} \end{vmatrix} = \begin{vmatrix} \ihat & u_1 & \color{brightblue}{v_1} \\ \jhat & u_2 & \color{brightblue}{v_2} \\ \khat & u_3 & \color{brightblue}{v_3} \end{vmatrix}\cdot{\color{softmagenta}{\mathbf{w}}} = \left(\,\mathbf{u}\times{\color{brightblue}{\mathbf{v}}}\,\right)\cdot{\color{softmagenta}{\mathbf{w}}} \end{align*} $$
Identities 2, 3 and 5 are left to the reader to prove.
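These algebraic identities are also easy to spot-check numerically. The following NumPy sketch (not part of the course notes; the vectors are random) verifies properties 1, 2, 3, 5, and 6.

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

checks = {
    "1. anticommutativity":  np.allclose(np.cross(u, v), -np.cross(v, u)),
    "2. distributivity":     np.allclose(np.cross(u, v + w), np.cross(u, v) + np.cross(u, w)),
    "3. scalars factor out": np.allclose(c*np.cross(u, v), np.cross(c*u, v))
                             and np.allclose(c*np.cross(u, v), np.cross(u, c*v)),
    "5. u x u = 0":          np.allclose(np.cross(u, u), 0),
    "6. triple product":     np.isclose(np.dot(u, np.cross(v, w)), np.dot(np.cross(u, v), w)),
}
for name, ok in checks.items():
    print(name, ok)    # all True
```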

Theorem 5.5.3

The Geometric Properties of the Cross Product

If $\mathbf{u}$ and $\mathbf{v}$ are nonzero vectors in $\mathbb{R}^3$, then

  1. $\mathbf{u}\times\mathbf{v}$ is orthogonal to both $\mathbf{u}$ and $\mathbf{v}$
  2. If $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$, then $\left\|\mathbf{u}\times\mathbf{v}\right\| = \left\|\mathbf{u}\right\|\,\left\|\mathbf{v}\right\|\sin(\theta)$
  3. Vectors $\mathbf{u}$ and $\mathbf{v}$ are parallel if and only if $\mathbf{u}\times\mathbf{v}=\mathbf{0}$.
  4. The parallelogram with adjacent sides $\mathbf{u}$ and $\mathbf{v}$ has area $\left\|\mathbf{u}\times\mathbf{v}\right\|$.
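The geometric properties can likewise be checked numerically. Below is a short NumPy sketch (our own illustration with arbitrary vectors) verifying orthogonality, the $\left\|\mathbf{u}\right\|\,\left\|\mathbf{v}\right\|\sin(\theta)$ identity, and the parallelogram area.

```python
import numpy as np

u = np.array([2.0, 1.0, -1.0])   # arbitrary nonzero vectors
v = np.array([0.5, 3.0, 2.0])
n = np.cross(u, v)

# 1. orthogonality to both u and v
print(np.isclose(n @ u, 0), np.isclose(n @ v, 0))            # True True

# 2. ||u x v|| = ||u|| ||v|| sin(theta)
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_t)
print(np.isclose(np.linalg.norm(n),
                 np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta)))  # True

# 4. area of the parallelogram with adjacent sides u and v
print(np.linalg.norm(n))
```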

5.5.5 Orthonormal Sets

The primary benefits of using orthonormal bases are

  • finding the coordinate vector of $\mathbf{v}$ with respect to an orthonormal basis is simpler
  • projections of $\mathbf{v}$ onto subspaces spanned by vectors in that orthonormal set are immediate
  • the inner product and norm may be computed directly from the coordinates of a vector

As such, we tend to do linear algebra using orthonormal sets whenever possible. The following examples feature particularly useful orthonormal sets.

Example 1

$\left\{\,\ihat,\,\jhat,\,\khat\,\right\}$ is an orthonormal set in $\mathbb{R}^3$ endowed with the Euclidean inner product (the dot product).

Example 2

The canonical basis $\left\{\,\mathbf{e}_1,\,\mathbf{e}_2,\,\dots,\,\mathbf{e}_n\,\right\}$ is an orthonormal set in $\mathbb{R}^n$ endowed with the Euclidean inner product (the dot product).

Example 3

Consider $C[-\pi,\pi]$ with the inner product

$$ \langle f,g \rangle = \frac{1}{\pi}\int_{-\pi}^\pi f(x)g(x)\, dx $$
and the set $\left\{1,\cos x,\sin x,\cos 2x,\,\sin 2x,\ldots,\cos nx,\sin nx\right\}$. This set is orthogonal since for any positive integers $j$ and $k$

$$ \begin{align*} \langle 1,\cos jx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \cos jx\, dx = 0 \\ \\ \langle 1,\sin kx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \sin kx\, dx = 0 \\ \\ \langle \cos jx,\cos kx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \cos jx \cos kx\, dx = 0 \qquad j\neq k \\ \\ \langle \cos jx,\sin kx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \cos jx \sin kx\, dx = 0 \\ \\ \langle \sin jx,\sin kx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \sin jx \sin kx\, dx = 0 \qquad j\neq k \end{align*} $$
The $\cos jx$ and $\sin kx$ terms for positive integers $j$ and $k$ are already unit vectors because

$$ \begin{align*} \langle \cos jx,\cos jx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \cos^2 jx \, dx = 1 \\ \\ \langle \sin kx,\sin kx \rangle &= \frac{1}{\pi}\int_{-\pi}^\pi \sin^2 kx \, dx = 1 \end{align*} $$
but $1$ is not, as

$$ \| 1 \|^2 = \langle 1,1 \rangle = \frac{1}{\pi}\int_{-\pi}^\pi 1 \, dx = 2 $$
We divide $1$ by its length to form an orthonormal set

$$ \left\{\frac{1}{\sqrt{2}},\cos x,\sin x,\cos 2x,\,\sin 2x,\ldots,\cos nx,\sin nx\right\} $$
in $C[-\pi,\pi]$.

(These integrals may be verified by using the product-to-sum trigonometric identities.)
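They may also be verified symbolically. The SymPy sketch below (not part of the original notes; the values $j=2$, $k=3$ are sample positive integers) computes a few of these inner products with the weight $\frac{1}{\pi}$.

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    """Inner product on C[-pi, pi] with weight 1/pi, as defined above."""
    return sp.integrate(f*g, (x, -sp.pi, sp.pi)) / sp.pi

j, k = 2, 3   # sample positive integers with j != k
print(inner(1, sp.cos(j*x)))                           # 0
print(inner(sp.cos(j*x), sp.sin(k*x)))                 # 0
print(inner(sp.cos(j*x), sp.cos(k*x)))                 # 0
print(inner(sp.cos(j*x), sp.cos(j*x)))                 # 1
print(inner(sp.sin(k*x), sp.sin(k*x)))                 # 1
print(inner(sp.S(1)/sp.sqrt(2), sp.S(1)/sp.sqrt(2)))   # 1, after normalizing the constant
```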

5.5.6 Orthonormal Bases and Coordinates

Always choose an orthonormal basis for your coordinate system. This will make computations far easier. Recall Theorem 5.3.8:

Theorem 5.3.8

The Inner Product of a Vector onto an Orthonormal Basis Vector is its Coordinate

Let $\left\{\mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_n\right\}$ be an orthonormal basis for an inner product space $V$. If

$$ \mathbf{v} = \sum_{i=1}^n c_i\mathbf{u}_i = c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \ldots + c_n\mathbf{u}_n $$
then

$$ c_i = \langle \mathbf{v},\mathbf{u}_i \rangle $$

Proof:

$$ \langle \mathbf{v},\mathbf{u}_i \rangle = \left\langle \sum_{j=1}^n c_j\mathbf{u}_j, \mathbf{u}_i \right\rangle = \sum_{j=1}^n c_j\left\langle \mathbf{u}_j,\mathbf{u}_i \right\rangle = \sum_{j=1}^n c_j\delta_{ij} = c_i $$



This means that the coordinates of a vector in the coordinate system defined by an orthonormal basis are exactly the inner products of the vector with each basis vector.
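A small numerical example can make this concrete. The NumPy sketch below (our own illustration) builds an orthonormal basis of $\mathbb{R}^3$ from a QR factorization, computes the coordinates of a vector as inner products, and checks that they reconstruct the vector.

```python
import numpy as np

# An orthonormal basis of R^3: the columns of an orthogonal matrix from a QR factorization
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
u1, u2, u3 = Q[:, 0], Q[:, 1], Q[:, 2]

v = np.array([1.0, 4.0, -2.0])

# Theorem 5.3.8: the coordinates are just inner products with the basis vectors
c = np.array([v @ u1, v @ u2, v @ u3])
print(np.allclose(c[0]*u1 + c[1]*u2 + c[2]*u3, v))   # True: the c_i reconstruct v
```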

5.5.7 Invariance of the Inner Product

We have already seen that if we choose a different orientation (basis or coordinate system), then the coordinates (list of numbers) that describe each vector are also different.


However, the value of the inner product of two vectors in an inner product space does not change. When one defines an inner product on a vector space, the value of the inner product does NOT depend on the choice of orientation. We say that the inner product of two vectors is invariant to the choice of basis. That means that in Jennifer's language

$$ \left\langle\left[\mathbf{x}\right]_J,\,\left[\mathbf{y}\right]_J\right\rangle_J = \langle\mathbf{x},\,\mathbf{y}\rangle $$
This presents a problem. Since the coordinates with respect to Jennifer's basis are different, one may not be able to use the same formula for computing the inner product of two vectors. If Jennifer chooses just any basis in $\mathbb{R}^n$, then the value of the Euclidean inner product may not be the sum of the products of the corresponding coordinates!

However, if Jennifer chooses an orthonormal basis, the computation is much simpler.

Corollary 5.5.4

The Euclidean Inner Product of Coordinate Vectors in $\mathbb{R}^n$ with respect to an Orthonormal Basis is the Sum of the Component-wise Products

Let $\left\{\mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_n\right\}$ be an orthonormal basis for a finite dimensional inner product space $V$. If

$$ \mathbf{x} = \sum_{i=1}^n x_i\mathbf{u}_i \qquad\qquad \mathbf{y} = \sum_{i=1}^n y_i\mathbf{u}_i $$
then

$$ \langle \mathbf{x},\mathbf{y} \rangle = \sum_{i=1}^n x_i y_i $$

Proof:

From Theorem 5.3.8,


$$ \begin{align*} \langle \mathbf{x},\mathbf{u}_i \rangle = x_i \qquad i=1,\dots,n \\ \langle \mathbf{y},\mathbf{u}_i \rangle = y_i \qquad i=1,\dots,n \\ \end{align*} $$
so using the properties of inner product


$$ \langle \mathbf{x},\mathbf{y} \rangle = \left\langle \sum_{i=1}^n x_i\mathbf{u}_i, \mathbf{y} \right\rangle = \sum_{i=1}^n x_i \left\langle\mathbf{u}_i, \mathbf{y} \right\rangle = \sum_{i=1}^n x_i \left\langle\mathbf{y}, \mathbf{u}_i \right\rangle = \sum_{i=1}^n x_i y_i $$
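The corollary is easy to illustrate numerically. In the NumPy sketch below (not from the original notes), the columns of an orthogonal matrix $Q$ serve as an orthonormal basis of $\mathbb{R}^4$; the Euclidean inner product of two vectors equals the sum of the component-wise products of their coordinate vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))    # columns: an orthonormal basis of R^4

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# Coordinates with respect to the orthonormal basis are inner products with each column
x_J = Q.T @ x
y_J = Q.T @ y

# Corollary 5.5.4: the inner product is the sum of the component-wise products
print(np.isclose(x @ y, np.sum(x_J * y_J)))   # True
```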


5.5.8 Parseval's Identity

Corollary 5.5.5

Parseval's Identity

If $\left\{\mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_n\right\}$ is an orthonormal basis for an inner product space $V$ and

$$ \mathbf{x} = \sum_{i=1}^n x_i\mathbf{u}_i $$
then

$$ \|\mathbf{x}\|^2 = \langle \mathbf{x},\mathbf{x} \rangle = \sum_{i=1}^n x_i^2 $$

Proof:

For $\mathbf{x} = \sum_{i=1}^n x_i\mathbf{u}_i$, setting $\mathbf{y} = \mathbf{x}$ in Corollary 5.5.4 gives

$$ \|\mathbf{x}\|^2 = \langle \mathbf{x},\mathbf{x} \rangle = \sum_{i=1}^n x_i^2 $$

Exercise 1

Consider again the orthonormal set $\left\{\frac{1}{\sqrt{2}},\cos x,\sin x,\cos 2x,\,\sin 2x,\ldots,\cos nx,\sin nx\right\}$ in $C[-\pi,\pi]$ and the finite dimensional subspace

$$ V = \text{Span}\left\{\frac{1}{\sqrt{2}},\cos x,\sin x,\cos 2x,\,\sin 2x,\ldots,\cos nx,\sin nx\right\} $$
Compute the following:

  1. $ \|\sin^2 x\|^2 $
  2. $ \|2\sin 3x + \sin^2 x\|^2$
  3. $ \|2 + \cos^4 x\|^2 $

Check Your Work

$$ \begin{align*} \|\sin^2 x\|^2 &= \frac{3}{4} \\ \\ \|2\sin 3x + \sin^2 x\|^2 &= \frac{19}{4} \\ \\ \|2 + \cos^4 x\|^2 &= \frac{739}{64} \end{align*} $$

Follow Along
We could integrate these directly, but that would not take advantage of our powerful new tool, Parseval's identity. Instead, we shall use trigonometric identities to write each of these functions in terms of our orthonormal basis and then apply Parseval's identity to find the (square of the) norm.

$$ \begin{align*} \|\sin^2 x\|^2 &= \left\|\frac{1}{2}\left(1-\cos 2x\right)\right\|^2 \\ \\ &= \left\|\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} - \frac{1}{2}\cos 2x\right\|^2 \\ \\ &= \left(\frac{1}{\sqrt{2}}\right)^2 + \left(-\frac{1}{2}\right)^2 = \frac{3}{4} \\ \\ \|2\sin 3x + \sin^2 x\|^2 &= \left\|\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} - \frac{1}{2}\cos 2x + 2\sin 3x\right\|^2 \\ \\ &= \left(\frac{1}{\sqrt{2}}\right)^2 + \left(-\frac{1}{2}\right)^2 + 2^2 = \frac{19}{4} \\ \\ \|2 + \cos^4 x\|^2 &= \left\| 2 + \left[\frac{1}{2}\left(1+\cos 2x\right)\right]^2\right\|^2 \\ \\ &= \left\| 2 + \frac{1}{4}\left(1+2\cos 2x + \cos^2 2x \right)\right\|^2 \\ \\ &= \left\| \frac{9}{4} + \frac{1}{2}\cos 2x + \frac{1}{8}\left(1+\cos 4x \right)\right\|^2 \\ \\ &= \left\| \frac{19}{8} + \frac{1}{2}\cos 2x + \frac{1}{8}\cos 4x\right\|^2 \\ \\ &= \left\| \frac{19\sqrt{2}}{8}\cdot\frac{1}{\sqrt{2}} + \frac{1}{2}\cos 2x + \frac{1}{8}\cos 4x\right\|^2 \\ \\ &= \left(\frac{19\sqrt{2}}{8}\right)^2 + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{8}\right)^2 = \frac{739}{64} \end{align*} $$
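As an independent check (not part of the original notes), each of these norms can also be computed by direct symbolic integration; the results agree with Parseval's identity.

```python
import sympy as sp

x = sp.symbols('x')

def norm_sq(f):
    """||f||^2 = <f, f> with the inner product (1/pi) * integral over [-pi, pi]."""
    return sp.integrate(f**2, (x, -sp.pi, sp.pi)) / sp.pi

print(norm_sq(sp.sin(x)**2))                     # 3/4
print(norm_sq(2*sp.sin(3*x) + sp.sin(x)**2))     # 19/4
print(norm_sq(2 + sp.cos(x)**4))                 # 739/64
```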


Example 4

Compute the following integrals using Parseval's identity.

$$ \begin{align*} \displaystyle\int_{-\pi}^{\pi}\sin^4(x)\,dx &= \dfrac{\pi}{\pi}\displaystyle\int_{-\pi}^{\pi}\sin^2(x)\sin^2(x)\,dx \\ \\ &= \pi\,\left\|\sin^2(x)\right\|^2 = \dfrac{3\pi}{4} \end{align*} $$

$$ \begin{align*} \displaystyle\int_{-\pi}^{\pi}\left(2\sin 3x + \sin^2 x\right)^2\, dx &= \dfrac{\pi}{\pi}\displaystyle\int_{-\pi}^{\pi}\left(2\sin 3x + \sin^2 x\right)^2\, dx \\ \\ &= \pi\,\left\| 2\sin 3x + \sin^2 x \right\|^2 = \frac{19\pi}{4} \end{align*} $$

$$ \begin{align*} \displaystyle\int_{-\pi}^{\pi}\left(2 + \cos^4 x\right)^2\, dx &= \dfrac{\pi}{\pi}\displaystyle\int_{-\pi}^{\pi}\left(2 + \cos^4 x\right)^2\, dx \\ \\ &= \pi\,\left\| 2 + \cos^4 x \right\|^2 = \frac{739\pi}{64} \end{align*} $$


Exercise 2

Compute the following integral using the subspace $V$.

$$ \displaystyle\int_0^{\pi} \cos^3(t)\,dt $$

Check Your Work
$$ \displaystyle\int_0^{\pi} \cos^3(t)\,dt = 0 $$

Follow Along
The function $\cos^3(t)$ is an even function, that is, $\cos^3(-t) = \cos^3(t)$, so its graph is symmetric with respect to the $y$-axis. Thus the areas to the left and right of the $y$-axis are mirror images, and $$ \begin{align*} \displaystyle\int_0^{\pi} \cos^3(t)\,dt &= \dfrac{1}{2}\displaystyle\int_{-\pi}^{\pi} \cos^3(t)\,dt \\ \\ &= \dfrac{1}{2}\displaystyle\int_{-\pi}^{\pi} \cos(t)\cos^2(t)\,dt \\ \\ &= \dfrac{1}{2}\displaystyle\int_{-\pi}^{\pi} \cos(t)\dfrac{1 + \cos(2t)}{2}\,dt \\ \\ &= \dfrac{1}{4}\displaystyle\int_{-\pi}^{\pi} \left(\cos(t) + \cos(t)\cos(2t)\right)\,dt \\ \\ &= \dfrac{1}{4}\displaystyle\int_{-\pi}^{\pi} \cos(t)\,dt + \dfrac{1}{4}\displaystyle\int_{-\pi}^{\pi} \cos(t)\cos(2t)\,dt \\ \\ &= \dfrac{\pi}{4}\,\dfrac{1}{\pi}\displaystyle\int_{-\pi}^{\pi} 1\cdot\cos(t)\,dt + \dfrac{\pi}{4}\,\dfrac{1}{\pi}\displaystyle\int_{-\pi}^{\pi} \cos(t)\cos(2t)\,dt \\ \\ &= \dfrac{\pi}{4}\,\left\langle 1,\,\cos(t)\,\right\rangle + \dfrac{\pi}{4}\,\left\langle\, \cos(t),\,\cos(2t)\,\right\rangle = 0 \end{align*} $$

5.5.9 Least Squares Approximation of Functions

Consider the inner product on the vector space $C[a,b]$ defined for any functions $f$ and $g\in C[a,b]$ by

$$ \langle f, g\rangle = \displaystyle\int_a^b f(x)g(x)\,dx $$
This inner product induces the norm of a function $f$,

$$ \left\|f\right\|^2 = \displaystyle\int_a^b f(x)^2\,dx $$
Thus the distance between functions $f$ and $g$ will be given by

$$ \left\|f - g\right\| = \left(\displaystyle\int_a^b \left|f(x) - g(x)\right|^2\,dx\right)^{\frac{1}{2}} $$

A least squares approximation of a function $f\in C[a,b]$ is the projection of $f$ onto a subspace $W$ of $C[a,b]$ using an inner product. With the inner product chosen above, the projection of $f$ onto the subspace $W$ is the function $p$ in $W$ that minimizes the distance $\left\|f-p\right\|$ between $f(x)$ and $p(x)$. However, one may also minimize the square of this non-negative distance and obtain the same projection. Hence we often minimize

$$ I = \|f - p\|^2 = \displaystyle\int_a^b \left|f(x) - p(x)\right|^2\,dx $$
This is done just to simplify computations.

Example 5

Find the least squares approximation of $f(x) = e^x$, $0\le x\le 1$ with respect to the subspace $W = P_3$ of polynomials of degree less than 3.

We must perform the Gram-Schmidt orthogonalization process first on the canonical basis $\left\{ 1, x, x^2 \right\}$ to obtain the orthonormal basis

$$ \begin{align*} \left\|1\right\|^2 &= \displaystyle\int_0^1\left(1\right)^2\,dx = 1 \\ \\ x - \text{Proj}_1x &= x - \displaystyle\int_0^1 x\,dx = x - \left[ \dfrac{x^2}{2} \right]_0^1 = x - \frac{1}{2} \\ \\ x^2 - \text{Proj}_1x^2 - \text{Proj}_{x-\frac{1}{2}}x^2 &= x^2 - \displaystyle\int_0^1 x^2\,dx - \dfrac{\displaystyle\int_0^1x^2\left(x - \frac{1}{2}\right)\,dx}{\displaystyle\int_0^1\left(x - \frac{1}{2}\right)^2\,dx}\left(x - \frac{1}{2}\right) \\ \\ &= x^2 - \displaystyle\int_0^1 x^2\,dx - \dfrac{\displaystyle\int_0^1 x^3 - \frac{x^2}{2}\,dx}{\displaystyle\int_0^1 x^2 - x + \frac{1}{4} \,dx}\left(x - \frac{1}{2}\right) \\ \\ &= x^2 - \left[ \frac{x^3}{3} \right]_0^1 - \dfrac{\left[ \frac{x^4}{4} - \frac{x^3}{6} \right]_0^1}{\left[ \frac{x^3}{3} - \frac{x^2}{2} + \frac{x}{4} \right]_0^1}\left(x - \frac{1}{2}\right) \\ \\ &= x^2 - \left[ \frac{1}{3} - 0 \right] - \dfrac{\left[ \frac{1}{4} - \frac{1}{6} \right]}{\left[ \frac{1}{3} - \frac{1}{2} + \frac{1}{4} \right]}\left(x - \frac{1}{2}\right) \\ \\ &= x^2 - \frac{1}{3} - \left(x - \frac{1}{2}\right) \\ \\ &= x^2 - x + \frac{1}{6} \\ \\ \mathbf{u}_1 &= 1 \\ \\ \mathbf{u}_2 &= \dfrac{x-\frac{1}{2}}{\left\|x-\frac{1}{2}\right\|} = \dfrac{x-\frac{1}{2}}{\sqrt{\frac{1}{12}}} = 2\sqrt{3}\left(x - \frac{1}{2}\right) \\ \\ \mathbf{u}_3 &= \dfrac{x^2 - x + \frac{1}{6}}{\left\|x^2 - x + \frac{1}{6}\right\|} = \dfrac{x^2 - x + \frac{1}{6}}{\sqrt{\frac{1}{180}}} = 6\sqrt{5}\left(x^2 - x + \frac{1}{6}\right) \end{align*} $$
Now we can compute the projection of $e^x$ onto $P_3$ or the least squares quadratic polynomial approximation to $e^x$ on the interval $[0,1]$.

$$ \begin{align*} c_1 &= \langle e^x, u_1 \rangle = \displaystyle\int_0^1 e^x\,dx = e - 1 \\ \\ c_2 &= \langle e^x, u_2 \rangle = \displaystyle\int_0^1 2\sqrt{3}\left( x - \frac{1}{2}\right)e^x\,dx = \sqrt{3}(3 - e) \\ \\ c_3 &= \langle e^x, u_3 \rangle = \displaystyle\int_0^1 6\sqrt{5}\left( x^2 - x + \frac{1}{6} \right)e^x\,dx = \sqrt{5}(7e - 19) \end{align*} $$
The projection is given by

$$ \begin{align*} p(x) &= c_1\cdot 1 + c_2\cdot 2\sqrt{3}\left(x - \frac{1}{2}\right) + c_3\cdot 6\sqrt{5}\left( x^2 - x + \frac{1}{6} \right) \\ &= (210e - 570)x^2 + (588-216e)x + (39e - 105) \\ \\ &\approx 0.839184x^2 + 0.851125x + 1.0129913 \end{align*} $$
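The entire computation can be reproduced symbolically. The SymPy sketch below (our own illustration; the helper name `inner` is not from the notes) takes the orthonormal basis found above as given, projects $e^x$ onto it, and confirms the expanded polynomial and its decimal approximation.

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    """Inner product on C[0, 1] used in this example."""
    return sp.integrate(f*g, (x, 0, 1))

# Orthonormal basis of P_3 from the Gram-Schmidt computation above
u1 = sp.Integer(1)
u2 = 2*sp.sqrt(3)*(x - sp.Rational(1, 2))
u3 = 6*sp.sqrt(5)*(x**2 - x + sp.Rational(1, 6))

f = sp.exp(x)
p = sum(inner(f, u)*u for u in (u1, u2, u3))     # projection of e^x onto P_3

# Compare with the hand computation: the difference should simplify to 0
target = (210*sp.E - 570)*x**2 + (588 - 216*sp.E)*x + (39*sp.E - 105)
print(sp.simplify(sp.expand(p) - target))        # 0
print(sp.expand(p).evalf(7))                     # approximately 0.839184*x**2 + 0.851125*x + 1.012991
```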
Figure 2: Quadratic Approximation of the Exponential Function

The blue curve is the exponential function on the interval $[0, 1]$, and the red curve is the quadratic polynomial approximation $p(x)\in P_3$.



Your use of this self-initiated mediated course material is subject to our Creative Commons License.


Creative Commons Attribution-NonCommercial-ShareAlike 4.0

Attribution
You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Noncommercial
You may not use the material for commercial purposes.

Share Alike
You are free to share, copy and redistribute the material in any medium or format. If you adapt, remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.