
Math 555: Differential Equations

3.2 Solutions of Linear Differential Equations


3.2.1 Vector Spaces of Differentiable Functions

If we multiply vectors by scalars and add them together, we get a linear combination.

$$ \alpha\mathbf{v}_1 + \beta\mathbf{v}_2 + \gamma\mathbf{v}_3 $$
is an example of a linear combination of the three vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$. The linear combination is the basic and most important concept in linear algebra. It is so important because vectors are much more than just arrows in two- or three-dimensional space. The vectors we learn about first are the vectors in the plane

Several random vector arrows are plotted on a standard two-dimensional coordinate plane

and vectors in three-dimensional space

Several random vector arrows are plotted on a three-dimensional coordinate plane

The Important Vector Spaces (for this course)

The vector spaces we work with in this class are $C^1[a,b]$, the vector space of all functions on the domain $[a,b]$ with a continuous derivative, and $C^2[a,b]$, the vector space of all functions on $[a,b]$ with two continuous derivatives. We also allow unbounded domains, as in $C^2(-\infty,\infty)$.

Why are $C^1[a,b]$ and $C^2[a,b]$ Vector Spaces?

Continuous functions behave like vectors in the plane with respect to linear combinations:

  • If one multiplies a continuous function by a scalar, the result is a continuous function.
  • If one adds two continuous functions, the result is a continuous function.

If $f(x)$ and $g(x)$ are continuous functions and $\alpha$ is a scalar then,

  • $\left(\alpha\cdot f\right)(x) = \alpha\cdot f(x)$ is a continuous function
  • $\left(f + g\right)(x) = f(x) + g(x)$ is a continuous function

Differentiable functions behave like vectors in the plane with respect to linear combinations:

  • If one multiplies a differentiable function by a scalar, the result is a differentiable function.
  • If one adds two differentiable functions, the result is a differentiable function.

If $f(x)$ and $g(x)$ are differentiable functions and $\alpha$ is a scalar then,

  • $\left(\alpha\cdot f\right)(x) = \alpha\cdot f(x)$ is a differentiable function
  • $\left(f + g\right)(x) = f(x) + g(x)$ is a differentiable function

A mathematician will say that $C^2[a,b]$ is a vector space because if $f,g\in C^2[a,b]$ and $\alpha\in\mathbb{R}$ then,

  • $\alpha\cdot f\in C^2[a,b]$ and
  • $f + g\in C^2[a,b]$.

So if we have several functions with two continuous derivatives, $f_1$, $f_2$, $f_3$, ... , $f_n$, and several scalars, $\alpha_1$, $\alpha_2$, $\alpha_3$, ... , $\alpha_n$, then their linear combination

$$ \alpha_1f_1(x) + \alpha_2f_2(x) + \alpha_3f_3(x)\ +\ ...\ +\ \alpha_nf_n(x) $$
is a function with two continuous derivatives. There is more to the study of vector spaces and there are eight more algebraic properties that the vector space $C^2[a,b]$ must have.

Eight Properties of a Vector Space

If $f$, $g$, and $h$ are vectors (functions) in $C^2[a,b]$, $\alpha$ and $\beta$ are scalars in $\mathbb{R}$, and $0 = 0(x)$ is the zero function that returns zero for every $x\in[a,b]$ then,
  1. Commutative Property of Vector Addition
    $(f + g)(x) = (g + f)(x)$
  2. Associative Property of Vector Addition
    $\left((f + g) + h\right)(x) = \left(f + (g + h)\right)(x)$
  3. Additive Identity Property (for Vector Addition)
    $(f + 0)(x) = (0 + f)(x) = f(x)$
  4. Additive Inverse Property (for Vector Addition)
    $(f + (-f))(x) = (-f + f)(x) = 0(x)$
  5. Distributive Property of Scalar Multiplication Over Vector Addition
    $\alpha\cdot(f + g)(x) = \alpha\cdot f(x) + \alpha\cdot g(x)$
  6. Distributive Property of Scalar Multiplication Over Scalar Addition
    $(\alpha + \beta)\cdot f(x) = \alpha\cdot f(x) + \beta\cdot f(x)$
  7. Associative Property of Scalar Multiplication
    $(\alpha\beta)\,f(x) = \alpha(\beta\,f(x))$
  8. Multiplicative Identity Property (for Scalar Multiplication)
    $1f(x) = f(x)$

These rules are just our normal rules for algebra. Notice that division is missing because you can't "divide" by a vector!

3.2.2 Linear Differential Operators

The differential operators we study in this chapter are linear transformations; that is, they behave like linear transformations of vectors. A linear transformation obeys the same algebra rules we are familiar with for matrices. If $A$ is an $n\times n$ matrix, $\alpha$ is a scalar in $\mathbb{R}$, and $\mathbf{x}$ and $\mathbf{y}$ are vectors in $\mathbb{R}^n$ then,

  1. $A(\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y}$
  2. $\alpha A\mathbf{x} = A(\alpha\mathbf{x})$

If $L$ is a linear transformation from vector space $\mathbb{R}^n\longrightarrow\mathbb{R}^m$, $\alpha$ is a scalar in $\mathbb{R}$ and $\mathbf{x}$, $\mathbf{y}$ are vectors in $\mathbb{R}^n$ then,

  1. $L(\mathbf{x} + \mathbf{y}) = L(\mathbf{x}) + L(\mathbf{y})$
  2. $L(\alpha\mathbf{x}) = \alpha\,L(\mathbf{x})$

If $L[y] = y'' + 2y' + y$ is a differential operator from the vector space $C^2[a,b]\rightarrow C^0[a,b]$, $u$ and $v$ are vectors with two continuous derivatives $\left(u,v\in C^2[a,b]\right)$, and $\alpha$ is a real number $\left(\alpha\in\mathbb{R}\right)$ then

$\begin{align*} \text{ 1. }L[u + v] &= (u + v)'' + 2(u + v)' + (u + v) \\ &= u'' + v'' + 2u' + 2v' + u + v \\ &= u'' + 2u' + u + v'' + 2v' + v = L[u] + L[v] \end{align*}$

$\begin{align*} \text{ 2. }L\left(\alpha u\right) &= (\alpha u)'' + 2(\alpha u)' + (\alpha u) \\ &= \alpha u'' + 2\alpha u' + \alpha u \\ &= \alpha\left(u'' + 2u' + u\right) \\ &= \alpha\left(L[u]\right) \end{align*}$
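We can sanity-check both computations symbolically. The sketch below uses the sympy computer algebra library (an illustrative aid, not part of the course materials) to verify the two linearity rules for $L[y] = y'' + 2y' + y$:

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')(t)
v = sp.Function('v')(t)
alpha = sp.symbols('alpha')

# The operator L[y] = y'' + 2y' + y from the text
L = lambda y: sp.diff(y, t, 2) + 2*sp.diff(y, t) + y

# Additivity: L[u + v] - (L[u] + L[v]) simplifies to 0
print(sp.simplify(L(u + v) - L(u) - L(v)))    # 0
# Homogeneity: L[alpha*u] - alpha*L[u] simplifies to 0
print(sp.simplify(L(alpha*u) - alpha*L(u)))   # 0
```

Both differences simplify to the zero function, exactly as in the hand computation above.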

This is what makes a differential operator linear.

A differential operator is a linear differential operator if it follows the same algebra rules as any linear transformation. If $u,\ v\in C^2[a,b]$ and $\alpha\in\mathbb{R}$ then

  1. $L(u + v) = L(u) + L(v)$

  2. $L(\alpha u) = \alpha L(u).$

We can write this succinctly as

  • $L(\alpha u + \beta v) = \alpha L(u) + \beta L(v)$

where $u$ and $v$ are functions in $C^2[a,b]$, and $\alpha$ and $\beta$ are scalars in $\mathbb{C}$. This version of the expression is called the linearity (or superposition) property. We will need to work with complex-valued scalars as well as real-valued scalars.

3.2.3 Linear Second-Order Differential Operators with Constant Coefficients

$$L[y] = ay'' + by' + cy$$


This is the standard form of a second-order linear operator with constant coefficients where $a$, $b$, and $c$ are real (or complex) constants and $y\in C^2(-\infty,\infty)$, that is $y$ has two continuous derivatives. We talked about the "degree" of the terms in chapter 1 and decided that the differential operator is linear because the "degree" (exponent) of each term is 1; the dependent variable or its derivatives only appear once in each term. Now we understand that what makes the differential operator linear is that it is a linear transformation from the vector space of functions with two continuous derivatives $C^2(-\infty,\infty)$ to the vector space of continuous functions $C^0(-\infty,\infty)$. Clearly if $y$ has two continuous derivatives, then $y''$ is continuous.

In our study of second-order differential equations we start with second-order homogeneous differential equations with constant coefficients

$$ ay'' + by' + cy = 0.$$
Here the linear differential operator is

$$ L[y] = ay'' + by' + cy,$$
and we can denote the homogeneous differential equation

$$ L[y] = 0.$$

Example 3.2.1


$$ y'' + 5y' + 6y = 0$$

We can show that the two functions $y_1(t) = e^{-3t}$ and $y_2(t) = e^{-2t}$ are solutions to this differential equation.


$$\begin{align*} L\left[y_1(t)\right] &= L\left[e^{-3t}\right] \\ \\ &= \left(e^{-3t}\right)'' + 5\left(e^{-3t}\right)' + 6\left(e^{-3t}\right) \\ \\ &= 9e^{-3t} + 5\left(-3e^{-3t}\right) + 6e^{-3t} \\ \\ &= 9e^{-3t} - 15e^{-3t} + 6e^{-3t} = 0\ \huge{\color{#307fe2} \checkmark} \\ \\ L\left[y_2(t)\right] &= L\left[e^{-2t}\right] \\ \\ &= \left(e^{-2t}\right)'' + 5\left(e^{-2t}\right)' + 6\left(e^{-2t}\right) \\ \\ &= 4e^{-2t} + 5\left(-2e^{-2t}\right) + 6e^{-2t} \\ \\ &= 4e^{-2t} - 10e^{-2t} + 6e^{-2t} = 0\ \huge{\color{#307fe2} \checkmark} \end{align*}$$

If $c_1$ and $c_2$ are two arbitrary constants, then we know infinitely many solutions

$$ c_1y_1(t) + c_2y_2(t) = c_1e^{-3t} + c_2e^{-2t}.$$
We know this because $L$ is a linear operator


$$\begin{align*} L\left[ c_1e^{-3t} + c_2e^{-2t} \right] &= \left(c_1e^{-3t} + c_2e^{-2t}\right)'' + 5\left(c_1e^{-3t} + c_2e^{-2t}\right)' + 6\left(c_1e^{-3t} + c_2e^{-2t}\right) \\ \\ &= \left(9c_1e^{-3t} + 4c_2e^{-2t}\right) + 5\left(-3c_1e^{-3t}-2c_2e^{-2t}\right) + 6c_1e^{-3t} + 6c_2e^{-2t} \\ \\ &= 9c_1e^{-3t} + 4c_2e^{-2t} - 15c_1e^{-3t} - 10c_2e^{-2t} + 6c_1e^{-3t} + 6c_2e^{-2t} \\ \\ &= c_1\left(9e^{-3t} -15e^{-3t} + 6e^{-3t}\right) + c_2\left(4e^{-2t} - 10e^{-2t} + 6e^{-2t}\right) \\ \\ &= c_1\cdot 0 + c_2\cdot 0 = 0\ \huge{\color{#307fe2} \checkmark} \end{align*}$$
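If you want to check this computation by machine, here is a short sympy sketch (illustrative only) that applies $L[y] = y'' + 5y' + 6y$ to each solution and to an arbitrary linear combination:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# The operator from Example 3.2.1: L[y] = y'' + 5y' + 6y
L = lambda y: sp.diff(y, t, 2) + 5*sp.diff(y, t) + 6*y

y1 = sp.exp(-3*t)
y2 = sp.exp(-2*t)
print(sp.simplify(L(y1)))              # 0
print(sp.simplify(L(y2)))              # 0
print(sp.simplify(L(c1*y1 + c2*y2)))   # 0 for every choice of c1 and c2
```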

In our three-dimensional world, if two vectors point in different directions, then their span, the set of all possible linear combinations of the two vectors, forms a plane.

Two arrow vectors are plotted on a plane embedded in three-dimensional space. The plane represents the span of a linear combination of these two vector arrows.

A plane passing through the origin is a two-dimensional subspace of $\mathbb{R}^3$.

Likewise the span of two linearly independent functions in $C^2(-\infty,\infty)$ is a two-dimensional subspace of $C^2(-\infty,\infty)$. This "plane" in $C^2(-\infty,\infty)$ would be all possible linear combinations of our linearly independent vectors (functions)

$$c_1e^{-3t} + c_2e^{-2t}.$$

If you need to review some linear algebra material, please be sure to refresh your memory with these videos before proceeding.

3.2.4 Checking Functions for Linear Independence

How do we know $e^{-3t}$ and $e^{-2t}$ are linearly independent?

There are several ways to find out if two-dimensional vectors are linearly independent. Let us start with two two-dimensional vectors

$$ v_1 = \left[\begin{array}{c} 1 \\ 2\end{array}\right]\text{,}\ \ \ \text{and}\ \ \ v_2 = \left[\begin{array}{c} 3 \\ 1\end{array}\right].$$
We create a $2\times 2$ matrix with columns $v_1$ and $v_2$. If the determinant of this matrix is zero then the vectors are linearly dependent and their span will be only a line. If the determinant is non-zero then the vectors are linearly independent and their span will be the plane we want.

$$\det\left(\left[\begin{array}{cc} 1 & 3 \\ 2 & 1\end{array}\right]\right) = \left|\begin{array}{cc} 1 & 3 \\ 2 & 1\end{array}\right| = 1(1) - 3(2) = 1 - 6 = -5 \ne 0.$$
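The same determinant test is easy to reproduce with sympy (shown here only as an illustration):

```python
import sympy as sp

v1 = sp.Matrix([1, 2])
v2 = sp.Matrix([3, 1])
A = sp.Matrix.hstack(v1, v2)   # 2x2 matrix with columns v1 and v2
print(A.det())                 # -5, non-zero, so v1 and v2 are linearly independent
```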

The Wronskian

To determine if two continuously differentiable functions $y_1$ and $y_2$ are linearly independent we create a $2\times 2$ matrix and calculate its determinant

$$ \det\left(\left[\begin{array}{cc} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t)\end{array}\right]\right) = \left|\begin{array}{cc} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t)\end{array}\right| = y_1(t)y_2'(t) - y_2(t)y_1'(t).$$
If the determinant is non-zero for some value of $t$ in the domain of $y_1$ and $y_2$, then the functions are linearly independent in $C^1(-\infty,\infty)$.

The Wronskian of $e^{-3t}$ and $e^{-2t}$

To determine if $e^{-3t}$ and $e^{-2t}$ are linearly independent in the vector space $C^2(-\infty,\infty)$ we compute the Wronskian or determinant

$$\begin{align*} W\left(e^{-3t},e^{-2t}\right)(t) &= \left|\begin{array}{cc} e^{-3t} & e^{-2t} \\ -3e^{-3t} & -2e^{-2t}\end{array}\right| \\ \\ &= -2e^{-3t}e^{-2t} + 3e^{-3t}e^{-2t} \\ \\ &= -2e^{-5t} + 3e^{-5t} \\ \\ &= e^{-5t} \neq 0 \end{align*}$$

Since $e^{-5t}$ is never equal to $0$, we have that $e^{-3t}$ and $e^{-2t}$ are linearly independent.
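sympy has a built-in `wronskian` helper that reproduces this determinant; the check below is illustrative only:

```python
import sympy as sp
from sympy import wronskian

t = sp.symbols('t')
# Wronskian of the two solutions from Example 3.2.1
W = sp.simplify(wronskian([sp.exp(-3*t), sp.exp(-2*t)], t))
print(W)   # exp(-5*t)
```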

3.2.5 Forming a Vector Space

When do the solutions of a differential equation form a vector space?

We know that if our differential equation is a homogeneous linear differential equation, then the set of solutions will be a vector space.

If we have a linear homogeneous differential equation, then we have a differential operator $L[y]$ and the right-hand side is zero so we can express our differential equation

$$L[y] = 0.$$
If we have any two solutions $y_1(t)$ and $y_2(t)$, then $L[y_1(t)] = 0$ and $L[y_2(t)]=0$. If we also have two scalars $\alpha$ and $\beta$, then the linear combination of $y_1$ and $y_2$ is also a solution

$$L\left[\alpha y_1(t) + \beta y_2(t)\right] = \alpha L[y_1] + \beta L[y_2] = \alpha\cdot 0 + \beta\cdot 0 = 0.$$
This tells us that the set of all possible solutions to our differential equation is closed under linear combinations. Therefore, it is a vector space. Specifically, it is a subspace of $C^2(-\infty,\infty)$.

3.2.6 Dimension of a Vector Space of Solutions

How do we know the dimension of the vector space of solutions is two?

This is a deep question about mathematics and the answer lies in a way of describing 2nd-order differential equations using 1st-order differential equations. There are two approaches to this. One answer requires even more linear algebra and is discussed in chapter 7 of our textbook. The one we will use requires the algebra of functions and we will use it to describe a 2nd-order differential equation as a composition of two first-order differential equations. We already know that the general solution of a 1st-order differential equation requires one arbitrary constant that we obtain when we integrate.

Let us look at our example again.

$$y'' + 5y' + 6y = 0$$
and the differential operator $y'' + 5y' + 6y$. This differential operator is in fact a linear combination of three linear operators

$$\begin{align*} D(D[y]) &= D^2[y] = y'' \\ \\ D[y] &= y' \\ \\ y &= 1\cdot y \end{align*}$$
I prefer lower-case d's to upper-case d's so I will write our differential operator

$$L[y] = d^2[y] + 5d[y] + 6y.$$
Is it possible that this is the composition of two 1st-order differential operators

$$L_1[y] = d[y] + 3y = (d + 3)[y]\text{,}\qquad\text{and}\qquad L_2[y] = d[y] + 2y = (d + 2)[y]\ ?$$
Let us compute the composition of these two operators

$$\begin{align*} \left(L_1\circ L_2\right)[y] &= L_1\left[L_2[y]\right] \\ \\ &= L_1\left[ (d + 2)[y] \right] \\ \\ &= L_1\left[ y' + 2y \right] \\ \\ &= (d + 3)(y' + 2y) \\ \\ &= d[y' + 2y] + 3[y' + 2y] \\ \\ &= y'' + 2y' + 3y' + 6y \\ \\ &= y'' + 5y' + 6y \\ \\ &= L[y]\ \ \huge{\color{#307fe2} \checkmark} \end{align*}$$
This allows us an interesting way to solve a second-order differential equation, even if it is not homogeneous.

$$\begin{align*} y'' + 5y' + 6y &= 0 \\ \\ (d^2 + 5d + 6)y &= 0 \\ \\ (d + 2)(d + 3)y &= 0 \\ \\ \text{Let }\ v(t) &= (d + 3)y(t) \\ \\ (d + 2)v &= 0 \\ \\ v' + 2v &= 0 \\ \\ \dfrac{dv}{dt} &= -2v \\ \\ \dfrac{1}{v} \dfrac{dv}{dt} &= -2\qquad\text{Separable}\\ \\ \log(v) &= -2t + c_1 \\ \\ v(t) &= c_1e^{-2t} \\ \\ (d + 3)y &= c_1e^{-2t} \\ \\ y' + 3y &= c_1e^{-2t} \\ \\ u(t) &= e^{\int 3\,dt} = e^{3t}\qquad\text{Integrating factor}\\ \\ e^{3t}y' + 3e^{3t}y &= c_1e^t \\ \\ \left[e^{3t}y\right]' &= c_1e^t \\ \\ e^{3t}y &= \displaystyle\int c_1e^t\,dt = c_1e^t + c_2 \\ \\ y(t) &= c_1e^{-2t} + c_2e^{-3t} \end{align*}$$
Notice we used methods we learned in chapter 1 to find all possible solutions to the composition of two first-order differential equations, and the solutions are already a linear combination of two linearly independent solutions. This technique is more work than we need for homogeneous linear differential equations, but you should keep it in mind when we solve non-homogeneous linear 2nd-order differential equations starting in Section 3.6 .
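For comparison, a computer algebra system reaches the same general solution in one step. The sketch below is illustrative; it uses sympy's `dsolve` to solve the same equation and `checkodesol` to verify the answer:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# The homogeneous equation y'' + 5y' + 6y = 0 from the example
ode = sp.Eq(y(t).diff(t, 2) + 5*y(t).diff(t) + 6*y(t), 0)
sol = sp.dsolve(ode, y(t))
print(sol)                       # a linear combination of exp(-3*t) and exp(-2*t)
print(sp.checkodesol(ode, sol))  # (True, 0): the solution satisfies the ODE
```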

3.2.7 Solving Second-Order Equations

How can we solve a 2nd-order differential equation without solving two 1st-order equations?

We will use operator algebra again. If we write our example in operator form, we have

$$\begin{align*} y'' + 5y' + 6y &= 0 \\ \\ (d^2 + 5d + 6)y &= 0 \\ \\ (d + 3)(d + 2)y &= 0 \\ \\ (d + 3)(d + 2) &= 0 \end{align*}$$
We get the last line because if the composition of the operator $d^2 + 5d + 6$ with $y$ is zero, then the zero function $y(t)=0$ is certainly a solution. We want the non-zero solutions though, so one of the operator "factors" in the composition must send $y$ to zero. This equation is called the characteristic equation and the polynomial of differential operators is called the characteristic polynomial.
$$\begin{align*} (d + 3)(d + 2) &= 0 \\ \\ d + 3 = 0\qquad &\text{ or }\qquad d + 2 = 0 \\ d = -3\qquad &\text{ or }\qquad d = -2 \\ \end{align*}$$
Notice that these two values are the coefficients of $t$ in the exponents of our exponential solutions $y_1(t) = e^{-3t}$ and $y_2(t) = e^{-2t}$.
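Finding these two values amounts to solving the characteristic polynomial, which we can also do with sympy (illustrative sketch):

```python
import sympy as sp

r = sp.symbols('r')
# Characteristic polynomial of y'' + 5y' + 6y = 0
roots = sp.solve(r**2 + 5*r + 6, r)
print(sorted(roots))   # [-3, -2]
```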

Example 3.2.2

Find the fundamental solutions of the differential equation

$$y'' - y = 0.$$

If we write our differential operator in operator form we have

$$\begin{align*} (d^2 - 1)y &= 0 \\ \\ (d + 1)(d - 1)y &= 0 \\ \\ d + 1 = 0\qquad &\text{ or }\qquad d - 1 = 0 \\ d = -1\qquad &\text{ or }\qquad d = 1 \\ \\ y_1(t) = e^{-t}\qquad &\text{ and }\qquad y_2(t) = e^{t} \\ \end{align*}$$

Are they linearly independent?

$$\begin{align*} W\left(e^{-t},e^t\right)(t) &= \left|\begin{array}{cc} e^{-t} & e^t \\ -e^{-t} & e^{t}\end{array}\right| \\ \\ &= e^{-t}e^{t} + e^{-t}e^{t} = e^0 + e^0 = 2 \neq 0. \end{align*}$$

Our two solutions are linearly independent and so their span will be the entire two-dimensional subspace of solutions to our differential equation.

$$y(t) = c_1e^{-t} + c_2e^{t}$$
This makes our two solutions $e^{-t}$ and $e^{t}$ a fundamental set of solutions because they span our space of solutions.
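As a quick check, the Wronskian computation above can be reproduced with sympy's `wronskian` helper (illustrative only):

```python
import sympy as sp
from sympy import wronskian

t = sp.symbols('t')
# Wronskian of the fundamental solutions of y'' - y = 0
W = sp.simplify(wronskian([sp.exp(-t), sp.exp(t)], t))
print(W)   # 2
```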

Remark

Our textbook requires us to solve two initial value problems. One with the initial conditions

$$y(0) = 1,\qquad y'(0) = 0,$$
and one with the initial conditions
$$y(0) = 0,\qquad y'(0) = 1.$$
However, this is not the classical definition of fundamental solutions.

Any two linearly independent solutions to a 2nd-order homogeneous linear differential equation form a basis for the vector space of all solutions, and therefore they form a fundamental set of solutions.

3.2.8 Practice Computing the Wronskian

Example 3.2.3

Show that $y_1(t) = t^{1/2}$ and $y_2(t) = t^{-1}$ form a fundamental set of solutions of the 2nd-order homogeneous linear differential equation

$$2t^2y'' + 3ty' - y = 0,\qquad t > 0. $$
Notice now that our domain is $(0,\infty)$, so we are looking for solutions in the vector space $C^2(0,\infty)$. We only need our two solutions to be linearly independent. If that is the case, then they form a basis for the two-dimensional subspace of solutions; that is, they are a fundamental set of solutions.

$$W\left(t^{1/2},t^{-1}\right) = \left|\begin{array}{cc} t^{1/2} & t^{-1} \\ \frac{1}{2}t^{-1/2} & -t^{-2} \end{array}\right| = -t^{1/2}t^{-2} - \frac{1}{2}t^{-1/2}t^{-1} = -\frac{3}{2}t^{-3/2}.$$

Since our domain is $t > 0$ we have that $W\left(t^{1/2},t^{-1}\right) = -\frac{3}{2}t^{-3/2} < 0$; in any case it is never equal to zero. Therefore our solutions are linearly independent and form a fundamental set of solutions to our differential equation, and every solution in the vector space of all solutions has the form

$$ y(t) = c_1t^{1/2} + c_2t^{-1}. $$
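The sketch below (illustrative, using sympy) verifies both that $t^{1/2}$ and $t^{-1}$ satisfy the differential equation and that their Wronskian is $-\frac{3}{2}t^{-3/2}$:

```python
import sympy as sp
from sympy import wronskian

t = sp.symbols('t', positive=True)   # positive=True encodes the domain t > 0

# The operator from Example 3.2.3: L[y] = 2t^2 y'' + 3t y' - y
L = lambda y: 2*t**2*sp.diff(y, t, 2) + 3*t*sp.diff(y, t) - y

y1 = sp.sqrt(t)
y2 = 1/t
print(sp.simplify(L(y1)))   # 0
print(sp.simplify(L(y2)))   # 0

W = sp.simplify(wronskian([y1, y2], t))
print(W)                    # -3/(2*t**(3/2)), never zero for t > 0
```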

Exercise 3.2.1

Find the Wronskian of the functions $\cos(t)$ and $\sin(t)$. State whether they are linearly independent or linearly dependent. State the largest interval over which the functions are linearly independent.

View Solution
$$\begin{align*} W\left(\cos(t),\sin(t)\right) &= \left|\begin{array}{cc} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{array}\right| \\ \\ &= \cos^2(t) + \sin^2(t) = 1 \neq 0. \end{align*}$$
The functions $\cos(t)$ and $\sin(t)$ are linearly independent for all real numbers $t\in(-\infty,\infty)$.

3.2.9 Existence and Uniqueness of Solutions

We will study 2nd-order differential equations with constant coefficients in chapter 3. We need to remember that all of this work will also apply to any linear differential equation. The general form of a 2nd-order linear differential equation is

$$ P(t)y'' + Q(t)y' + R(t)y = G(t).$$
We need the differential equation in standard form. If $P(t) = 0$ then we don't have a second-order differential equation, so, assuming $P(t)\neq 0$ and dividing both sides by $P(t)$, we obtain

$$y'' + \dfrac{Q(t)}{P(t)}y' + \dfrac{R(t)}{P(t)}y = \dfrac{G(t)}{P(t)}.$$
We set $p(t) = \dfrac{Q(t)}{P(t)}$, $q(t) = \dfrac{R(t)}{P(t)}$, and $g(t) = \dfrac{G(t)}{P(t)}$ and write the standard form of a 2nd-order linear differential equation

$$y'' + p(t)y' + q(t)y = g(t).$$
As in chapter 2, we need an interval on the real line where all three of $p(t)$, $q(t)$, and $g(t)$ are continuous.

Theorem 3.2.1

Consider the initial value problem

$$y'' + p(t)y' + q(t)y = g(t),\qquad y(t_0) = y_0,\qquad y'(t_0) = y_1.$$

If $p(t)$, $q(t)$ and $g(t)$ are all continuous on the same open interval $I = (a,b)$ containing $t_0$, that is $a < t_0 < b$, then there is exactly one solution $y=\phi(t)$, and the solution exists on the entire interval $I = (a,b)$.

Example 3.2.4

Consider the differential equation

$$(t^2 - 3t)y'' + ty' - (t + 3)y = 0\qquad y(1)=2,\qquad y'(1)=1.$$
Dividing both sides by $t^2 - 3t$ and simplifying we get the differential equation in standard form

$$y'' + \dfrac{1}{t-3}y' - \dfrac{t+3}{t(t-3)}y = 0.$$
This gives us $p(t)=\frac{1}{t-3}$, $q(t)=-\frac{t+3}{t(t-3)}$ and $g(t)=0$, defined when $t\neq 3$ and $t\neq 0$. These two points divide the real line into three intervals,

$$(-\infty, 0),\qquad(0,3),\qquad(3,\infty)$$
Only one of these intervals contains the initial value $t_0=1$, so the interval over which our theorem guarantees a solution is $(0,3)$.
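The discontinuities of the coefficient functions can also be located mechanically. The sketch below is illustrative; it feeds the coefficients obtained by dividing the original equation by $t^2 - 3t$ into sympy's `singularities`:

```python
import sympy as sp

t = sp.symbols('t')
p = 1/(t - 3)                # coefficient of y'
q = -(t + 3)/(t*(t - 3))     # coefficient of y

# Points where the coefficient functions fail to be continuous
bad = sp.singularities(p, t) | sp.singularities(q, t)
print(bad)   # {0, 3}; the initial point t0 = 1 lies in the interval (0, 3)
```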

Exercise 3.2.2

Consider the initial value problem

$$(x-2)y'' + y' + (x-2)\tan(x)y = 0,\quad y(3) = 1,\quad y'(3)=2.$$
Determine the longest interval in which the initial value problem is certain to have a unique twice-differentiable solution.

View Solution
In standard form our initial value problem is

$$y'' + \dfrac{1}{x-2}y' + \tan(x)y = 0,\quad y(3) = 1,\quad y'(3)=2.$$
This gives us $p(x) = \frac{1}{x-2}$, $q(x) = \tan(x)$ and $g(x)=0$. Therefore $p(x)$ has a vertical asymptote at $x=2$, and $q(x)$ has a vertical asymptote at every odd multiple of $\frac{\pi}{2}$.
$$4 \lt 6 \lt 3\pi$$
so, dividing by $2$,
$$ 2 \lt 3 \lt \dfrac{3\pi}{2}.$$
The largest interval that Theorem 3.2.1 guarantees a unique solution is the interval $\left(2, \frac{3\pi}{2}\right)$.

3.2.10 Abel's Theorem

There is another way to compute the Wronskian of any two fundamental (linearly independent) solutions without even knowing what the two fundamental solutions are yet.

Abel's Theorem

If $y_1$ and $y_2$ are two solutions of the 2nd-order linear homogeneous differential equation

$$L[y] = y'' + p(t)y' + q(t)y = 0,$$
where $p$ and $q$ are both continuous on the same interval $I$, then the Wronskian is given by

$$W[y_1,y_2](t) = C\,\exp\left(-\displaystyle\int p(t)\,dt\right), $$
where the constant $C$ depends on $y_1$ and $y_2$ but not on $t$.

This means that if $y_1$ and $y_2$ are linearly dependent then their Wronskian is zero and $C = 0$. If $C\neq 0$, then the two solutions are linearly independent on the entire interval $I$, and they form a fundamental set of solutions to the differential equation.

We will use this in a method called reduction of order later in this chapter.

Example 3.2.5

Verify Abel's theorem for Example 3.2.3 .

$$2t^2y'' + 3ty' - y = 0\qquad t > 0$$
in standard form is
$$y'' + \dfrac{3}{2t}y' - \dfrac{1}{2t^2}y = 0.$$
Therefore $p(t)=\dfrac{3}{2t}$. Abel's theorem states that the Wronskian of our two solutions $y_1(t)=t^{1/2}$ and $y_2(t) = t^{-1}$ has the form

$$W\left[t^{1/2},t^{-1}\right] = C\,\exp\left(-\displaystyle\int \dfrac{3}{2t}\,dt\right) = C\,\exp\left(-\dfrac{3}{2}\log(t)\right) = C\,\exp\left(\log\left(t^{-3/2}\right)\right) = C\,t^{-3/2}.$$
For $y_1(t)=t^{1/2}$ and $y_2(t)=t^{-1}$, $C = -\frac{3}{2}$.
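We can verify this value of $C$ with sympy by comparing the directly computed Wronskian against Abel's formula (illustrative sketch):

```python
import sympy as sp
from sympy import wronskian

t = sp.symbols('t', positive=True)
p = sp.Rational(3, 2)/t                              # p(t) = 3/(2t) from standard form

W_direct = sp.simplify(wronskian([sp.sqrt(t), 1/t], t))
abel_form = sp.simplify(sp.exp(-sp.integrate(p, t)))  # exp(-(3/2)log t) = t**(-3/2)

# The ratio of the direct Wronskian to Abel's form is the constant C
print(sp.simplify(W_direct/abel_form))               # -3/2
```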

Exercise 3.2.3

Find the Wronskian of any two solutions to Legendre's equation without solving the differential equation

$$(1-x^2)y'' - 2xy' + \alpha(\alpha + 1)y = 0.$$


View Solution
In standard form Legendre's equation is

$$y'' - \dfrac{2x}{1-x^2}y' + \dfrac{\alpha(\alpha+1)}{1-x^2}y = 0.$$
This gives us $p(x) = - \dfrac{2x}{1-x^2}$, $q(x) = \dfrac{\alpha(\alpha+1)}{1-x^2}$ and $g(x)=0$. Therefore $p(x)$ and $q(x)$ have discontinuities at $x=1$ and $x=-1$. On the interval $(-1,1)$ we have the Wronskian

$$\begin{align*} W[y_1,y_2](x) &= C\,\exp\left(-\displaystyle\int -\dfrac{2x}{1-x^2}\,dx\right) \\ \\ \text{Let } u &= 1 - x^2\qquad\text{Substitution} \\ \\ du &= -2x\,dx \\ \\ W[y_1,y_2](x) &= C\,\exp\left(-\displaystyle\int \dfrac{1}{u}\,du \right) \\ \\ &= C\,\exp\left(-\log(u)\right) = C\,\exp\left(\log(u^{-1})\right) \\ \\ &= C\,\dfrac{1}{1-x^2} \end{align*}$$



Your use of this self-initiated mediated course material is subject to our Creative Commons License .


Creative Commons Attribution-NonCommercial-ShareAlike 4.0

Attribution
You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Noncommercial
You may not use the material for commercial purposes.

Share Alike
You are free to share, copy and redistribute the material in any medium or format. If you adapt, remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.