So far, we have only considered second-order equations with constant coefficients. Now we turn our focus to equations whose coefficients are functions of the independent variable $x$
$$ P(x)y''(x) + Q(x)y'(x) + R(x)y(x) = 0. $$
For now, studying homogeneous equations is sufficient because finding particular solutions for nonhomogeneous equations in these cases follows a similar procedure.
In this section, $P$, $Q$, and $R$ are going to be polynomials. Many important problems in physics may be represented in this form, such as the Bessel equation
$$ x^2 y'' + xy' + \left(x^2 - \nu^2\right)y = 0 $$
and the Legendre equation
$$ \left( 1-x^2 \right)y'' - 2xy' + \alpha\left(\alpha + 1\right)y = 0 $$
where $\nu$ and $\alpha$ are both constants. The solutions of these equations have applications to wave propagation, static potentials, and many other situations.
The importance of the solutions to these equations comes from their expression as Taylor series. Both equations define functions that cannot be written in terms of elementary functions but have a myriad of important applications. As such, we are going to develop a procedure that allows us to express the solutions to these and similar equations in series form.
The functions $P$, $Q$, and $R$ are going to be polynomials that do not all share a common factor $(x-c)$. If such a factor is present, divide it out. We seek to find a solution to the ODE
$$ P(x)y'' + Q(x)y' + R(x)y = 0 $$
in an interval $I$ containing a point $x_0$. The solution of the ODE will depend greatly on the nature of $P$ in the interval.
If $P(x_0)\neq 0$, then we say that $x_0$ is an ordinary point. Since $P$ is continuous and nonzero at $x_0$, there is an interval $I$ containing $x_0$ on which $P$ is never zero. On that interval we may divide by $P$ to obtain an equation in standard form
$$ y'' + p(x)y' + q(x)y = 0 $$
where $p = Q/P$ and $q = R/P$ are continuous on $I$. Under these conditions, Theorem 3.2.1 guarantees that there exists a unique solution of the equation on $I$ that satisfies the initial conditions
$$ y(x_0) = y_0,\qquad y'(x_0) = y_0'. $$
If $P(x_0) = 0$, then $x_0$ is a singular point. In this case, because $(x-x_0)$ does not evenly divide all of $P$, $Q$, and $R$, at least one of $Q(x_0)$ or $R(x_0)$ is not zero. This means that either $p$ or $q$ grows without bound as $x\rightarrow x_0$, so Theorem 3.2.1 does not apply. Later sections will handle finding solutions near singular points.
We are going to be searching for solutions of the form
$$ y = a_0 + a_1(x-x_0) +\ldots + a_n (x-x_0)^n + \ldots = \sum_{n=0}^\infty a_n (x-x_0)^n $$
to our differential equation, assuming that the series converges in an interval $I = \left\{ x\in\mathbb{R}: \vert x-x_0\vert\lt\rho\right\}$ for some positive $\rho$ (these are the interval and radius of convergence, respectively). Series solutions to differential equations, despite their initial appearance, are actually very convenient. As long as you stay within the interval of convergence, they act like polynomials. This means that they are easy to differentiate, evaluate numerically, and plot.
To search for a series solution, the method is simple: plug it in. Because we assume the series converges on $I$, we can differentiate it term by term and then manipulate the result algebraically to determine the coefficients of the series solution. Before we get into a specific example, let's remind ourselves what the first and second derivatives of a generic power series look like, since we will be using these often:
$$\begin{aligned}
y' &= \sum_{n=1}^\infty n a_n (x-x_0)^{n-1} \\
\\
y'' &= \sum_{n=2}^\infty n(n-1) a_n (x-x_0)^{n-2}.
\end{aligned}$$
Note that because the derivative of a constant is zero, the first term of the series "vanishes" and the starting index of the series changes. This is going to be extremely important as we manipulate our equations.
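The effect of term-by-term differentiation is easy to confirm with a computer algebra system. Here is a minimal sketch (assuming SymPy is available; the truncation order and symbol names are illustrative) that differentiates a truncated power series about $x_0 = 0$ and shows the lowest-order terms dropping out:
```python
import sympy as sp

x = sp.symbols("x")
N = 6                               # truncation order, chosen only for illustration
a = sp.symbols(f"a0:{N}")           # symbolic coefficients a0, a1, ..., a5

y = sum(a[n] * x**n for n in range(N))   # truncated version of sum a_n x^n
dy = sp.diff(y, x)                       # first derivative
d2y = sp.diff(y, x, 2)                   # second derivative

print(sp.expand(dy))    # no a0 term remains: the series now starts at n = 1
print(sp.expand(d2y))   # neither a0 nor a1 remains: the series now starts at n = 2
```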
Find a series solution to the equation
$$ y'' + y = 0,\qquad -\infty\lt x\lt\infty. $$
We know from chapter 3 that $\sin x$ and $\cos x$ are a fundamental set of solutions to this equation, but we wish to examine it from the perspective of power series. In this equation, the coefficient functions are $P = 1$, $Q = 0$, and $R = 1$ (all polynomials, albeit simple ones). This means that every point is an ordinary point.
Since any point will do, we choose $x_0 = 0$ and look for a solution in the form
$$ y = \sum_{n=0}^\infty a_n x^n $$
by plugging this and the appropriate derivatives directly into the ODE
$$ \sum_{n=2}^\infty n(n-1)a_n x^{n-2} + \sum_{n=0}^\infty a_n x^n = 0. $$
To find a usable solution, we seek to match the form of the two infinite sums. One of the sums features an $x^{n-2}$ in its general term, while the other has $x^n$. If we manipulate the indices of the summation notation and "peel off" extra terms when necessary, it is possible to condense this expression into a single infinite sum.
When manipulating the indices, a change in the starting point must be compensated by an opposite change in every other appearance of the index. Here, we are going to reduce the starting point of the first sum from $n=2$ to $n=0$, so every appearance of $n$ in the general term must be replaced with $n+2$
$$ \sum_{n=2}^\infty n(n-1)a_n x^{n-2} = \sum_{n=0}^\infty (n+2)(n+1)a_{n+2} x^n. $$
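As a quick check that the shift leaves the sum unchanged, write out the first few terms of each side; they agree term by term:
$$ 2\cdot 1\, a_2 + 3\cdot 2\, a_3 x + 4\cdot 3\, a_4 x^2 + \ldots = (0+2)(0+1)a_2 + (1+2)(1+1)a_3 x + (2+2)(2+1)a_4 x^2 + \ldots $$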
We can now rewrite the ODE as
$$\begin{aligned}
\sum_{n=0}^\infty (n+2)(n+1)a_{n+2} x^n + \sum_{n=0}^\infty a_n x^n = 0 \\
\\
\sum_{n=0}^\infty \Big((n+2)(n+1)a_{n+2} + a_n \Big)x^n = 0.
\end{aligned}$$
The left and right hand sides of this equation can both be thought of as polynomials, and two polynomials are equal if and only if all of their corresponding coefficients match. That means that for every $n\ge 0$ the coefficient of $x^n$ must be zero:
$$ (n+2)(n+1)a_{n+2} + a_n = 0. $$
This expression is a recurrence relation, and it defines the relationship between the coefficients of the power series: setting $n=0$ gives $a_2$ in terms of $a_0$, setting $n=1$ gives $a_3$ in terms of $a_1$, and so on. This generates a sequence which in turn allows us to express the terms in the power series.
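Before working out the pattern by hand, we can watch the recurrence generate coefficients numerically. The following is a minimal sketch in plain Python (the helper name `coefficients` is ours, not part of any library):
```python
from fractions import Fraction

def coefficients(a0, a1, num_terms):
    """Return a_0, ..., a_{num_terms-1} from (n+2)(n+1)a_{n+2} + a_n = 0."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(num_terms - 2):
        a.append(-a[n] / ((n + 2) * (n + 1)))
    return a

# With a0 = 1 and a1 = 0 only the even-indexed coefficients survive:
# 1, -1/2, 1/24, -1/720, 1/40320, ..., i.e. (-1)^k / (2k)! as derived below.
print(coefficients(1, 0, 9))
```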
Our recurrence relation specifies the term two away from the current value, so it makes sense to examine the even $(a_0,\ a_2,\ a_4,\ldots )$ and odd $(a_1,\ a_3,\ a_5,\ldots )$ coefficients separately.
For the evens,
$$\begin{aligned}
a_2 &= -\dfrac{a_0}{2\cdot 1}=-\dfrac{a_0}{2!}\\
\\
a_4 &= -\dfrac{a_2}{4\cdot 3} = -\dfrac{1}{4\cdot 3}\left( -\dfrac{a_0}{2!} \right) = \dfrac{a_0}{4!} \\
\\
a_6 &= -\dfrac{a_4}{6\cdot 5} = -\dfrac{1}{6\cdot 5}\left( -\dfrac{a_0}{4!} \right) = -\dfrac{a_0}{6!} \\
&\ \ \vdots \\
a_{2k} &= \dfrac{(-1)^k}{(2k)!}a_0,\quad k = 1,\ 2,\ 3,\ldots,
\end{aligned}$$
and the odds,
$$\begin{aligned}
a_3 &= -\dfrac{a_1}{3\cdot 2}=-\dfrac{a_1}{3!}\\
\\
a_5 &= -\dfrac{a_3}{5\cdot 4} = -\dfrac{1}{5\cdot 4}\left( -\dfrac{a_1}{3!} \right) = \dfrac{a_1}{5!} \\
\\
a_7 &= -\dfrac{a_5}{7\cdot 6} = -\dfrac{1}{7\cdot 6}\left( -\dfrac{a_1}{5!} \right) = -\dfrac{a_1}{7!} \\
&\ \ \vdots \\
a_{2k+1} &= \dfrac{(-1)^k}{(2k+1)!}a_1,\quad k = 1,\ 2,\ 3,\ldots.
\end{aligned}$$
Armed with these expressions for our coefficients $a_n$, we can substitute them into the generic power series formula to get an expression for the solution $y$ by splitting the series in two pieces, one for the even terms and one for the odd
$$\begin{aligned}
y &= \sum_{n=0}^\infty \dfrac{(-1)^n}{(2n)!}x^{2n} a_0 + \sum_{n=0}^\infty \dfrac{(-1)^n}{(2n+1)!} x^{2n+1}a_1 \\
\\
&= a_0 \underbrace{\sum_{n=0}^\infty \dfrac{(-1)^n}{(2n)!}x^{2n}}_{y_1(x)} + a_1 \underbrace{\sum_{n=0}^\infty \dfrac{(-1)^n}{(2n+1)!} x^{2n+1}}_{y_2(x)}.
\end{aligned}$$
This arrangement allows us to identify two separate series solutions $y_1$ and $y_2$ for the equation, one that depends upon $a_0$ and the other on $a_1$. The values $a_0$ and $a_1$ will be set by initial conditions and of course $y_1$ and $y_2$ are the familiar Maclaurin series for $\cos x$ and $\sin x$.
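As a sanity check that $y_1$ really is the cosine series, we can compare one of its partial sums with the Maclaurin polynomial of $\cos x$ of the same degree. A small sketch, again assuming SymPy is available (degree 8 is an arbitrary choice):
```python
import sympy as sp

x = sp.symbols("x")

# Degree-8 partial sum of y1 = sum (-1)^k x^(2k) / (2k)!
y1_partial = sum((-1)**k * x**(2 * k) / sp.factorial(2 * k) for k in range(5))

# Degree-8 Maclaurin polynomial of cos x
cos_poly = sp.cos(x).series(x, 0, 9).removeO()

print(sp.expand(y1_partial - cos_poly))   # prints 0: the polynomials are identical
```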
A primary utility of series solutions for differential equations is that a partial sum may be used in place of the full solution to give an approximation to the answer near the point $x_0$. Depending on the radius of convergence of the series, adding more terms to the partial sum may allow the approximation to remain valid a fair distance away from where the power series expansion is centered. Here we see several partial sums of the series used to approximate our solutions $y_1$ and $y_2$.
This is, of course, an important issue to keep in mind for applications. If we care about the solution being accurate at a value that is not close to zero, say five, it would make more sense to compute the series solution centered at that value, taking $x_0 = 5$.
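The loss of accuracy away from the center is easy to quantify. Here is a minimal sketch in plain Python (the function name is ours) comparing partial sums of $y_1$ with $\cos x$ at a point near $x_0 = 0$ and at $x = 5$:
```python
import math

def y1_partial_sum(x, num_terms):
    """Partial sum of y1(x) = sum over n of (-1)^n x^(2n) / (2n)!."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(num_terms))

for num_terms in (3, 5, 8):
    for x in (0.5, 5.0):
        error = abs(y1_partial_sum(x, num_terms) - math.cos(x))
        print(f"{num_terms:2d} terms, x = {x}: |partial sum - cos x| = {error:.2e}")
```
Near the center a handful of terms already matches $\cos x$ closely, while at $x = 5$ many more terms are needed before the error becomes small.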
To further demonstrate the power of series solutions, we're now going to look at Airy's equation
$$ y'' - xy = 0,\qquad -\infty\lt x\lt\infty, $$
which has applications to optics.
For this equation, $P = 1$, $Q = 0$, and $R(x) = -x$, so every point is an ordinary point. Again, we will choose $x_0 = 0$ for simplicity, and hence our solution will be of the form
$$ y = \sum_{n=0}^\infty a_n x^n. $$
Substituting this into Airy's equation,
$$ \sum_{n=2}^\infty n(n-1)a_n x^{n-2} - x\sum_{n=0}^\infty a_n x^n = 0. $$
Our goal is to find the recurrence relation for this equation. First, we will absorb the $x$ coefficient in the second term into the sum.
$$ \sum_{n=2}^\infty n(n-1)a_n x^{n-2} - \sum_{n=0}^\infty a_n x^{n+1} = 0. $$
Next, we will adjust the indices so that $x^n$ is present in each sum,
$$ \sum_{n=0}^\infty (n+2)(n+1)a_{n+2} x^n - \sum_{n=1}^\infty a_{n-1} x^n = 0. $$
These sums are still incompatible because the starting points for the sums do not match. This means we must "peel off" a term from the first sum so that both start at $n=1$,
$$ (2)(1)a_2 + \sum_{n=1}^\infty (n+2)(n+1)a_{n+2} x^n - \sum_{n=1}^\infty a_{n-1} x^n = 0. $$
Setting each coefficient to zero, the lone constant term gives $2a_2 = 0$, and the terms under the sums give the recurrence relation
$$ (n+2)(n+1)a_{n+2} - a_{n-1} = 0,\qquad n = 1,\ 2,\ 3,\ldots. $$
So $a_2 = 0$. Also, because the relation links $a_{n-1}$ to $a_{n+2}$, $a_0$ determines $a_3$, $a_1$ determines $a_4$, and $a_2$ determines $a_5$. This means that the recurrence relation forces $a_5 = a_8 = a_{11} = \ldots = 0$. We'll now turn our focus to finding the sets of coefficients related to $a_0$ and $a_1$.
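As with the previous example, the recurrence can be iterated numerically before we look for a pattern. A minimal sketch in plain Python (the helper name `airy_coefficients` is ours):
```python
from fractions import Fraction

def airy_coefficients(a0, a1, num_terms):
    """Return a_0, ..., a_{num_terms-1} from a_2 = 0 and (n+2)(n+1)a_{n+2} - a_{n-1} = 0."""
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for n in range(1, num_terms - 2):
        a.append(a[n - 1] / ((n + 2) * (n + 1)))
    return a

# With a0 = 1 and a1 = 0, only a_0, a_3, a_6, ... are nonzero:
# a_3 = 1/6, a_6 = 1/180, a_9 = 1/12960, matching the pattern derived next.
print(airy_coefficients(1, 0, 10))
```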
Starting with the first set,
$$ \begin{aligned}
a_3 &= \dfrac{a_0}{3\cdot 2} \\
\\
a_6 &= \dfrac{a_3}{6\cdot 5} = \dfrac{a_0}{6\cdot 5\cdot 3\cdot 2} \\
\\
a_9 &= \dfrac{a_6}{9\cdot 8} = \dfrac{a_0}{9\cdot 8\cdot 6\cdot 5\cdot 3\cdot 2} \\
&\ \ \vdots \\
a_{3n} &= \dfrac{a_0}{(3n)(3n-1)\cdot\ldots\cdot 6\cdot 5\cdot 3\cdot 2},\quad n\ge 4.
\end{aligned} $$
The second is given by
$$ \begin{aligned}
a_4 &= \dfrac{a_1}{4\cdot 3} \\
\\
a_7 &= \dfrac{a_4}{7\cdot 6} = \dfrac{a_1}{7\cdot 6\cdot 4\cdot 3} \\
\\
a_{10} &= \dfrac{a_7}{10\cdot 9} = \dfrac{a_1}{10\cdot 9\cdot 7\cdot 6\cdot 4\cdot 3} \\
&\ \ \vdots \\
a_{3n+1} &= \dfrac{a_1}{(3n+1)(3n)\cdot\ldots\cdot 7\cdot 6\cdot 4\cdot 3},\quad n\ge 4.
\end{aligned} $$
Unfortunately, there is not a more convenient way to express these coefficients.
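Although there is no simpler closed form, the same pattern can at least be written compactly using product notation:
$$ a_{3n} = \dfrac{a_0}{\prod_{k=1}^{n}(3k)(3k-1)},\qquad a_{3n+1} = \dfrac{a_1}{\prod_{k=1}^{n}(3k+1)(3k)},\qquad n\ge 1. $$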
If we let $b_{3n} = a_{3n}/a_0$ and $b_{3n+1} = a_{3n+1}/a_1$ denote these patterns with the arbitrary constants factored out (with $b_0 = b_1 = 1$), then the series solution to Airy's equation may be written as
$$ y = a_0 \underbrace{\sum_{n=0}^\infty b_{3n} x^{3n}}_{y_1(x)} + a_1 \underbrace{ \sum_{n=0}^\infty b_{3n+1} x^{3n+1}}_{y_2(x)} $$
with solutions $y_1$ and $y_2$. To verify that $y_1$ is a solution, set $a_0 = 1$ and $a_1 = 0$; then set $a_0=0$ and $a_1=1$ for $y_2$. To show that these solutions form a fundamental set, we need to show that their Wronskian is nonzero. This is most easily done by noting that $y_1$ is the solution for the initial values $y(0) = 1$ and $y'(0) = 0$ and $y_2$ is the solution for the initial values $y(0) = 0$ and $y'(0) = 1$. Hence,
$$ W[y_1,y_2] = \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1 \neq 0, $$
and the solutions are a fundamental set.
Unlike the power series found in Example 1, these functions are not elementary functions; they may only be expressed using a power series. To understand these functions better, let's use the ratio test to determine the radius and interval of convergence. Applying the test to the successive terms of $y_2$,
$$\begin{aligned}
\lim_{n\rightarrow\infty} \left\vert \dfrac{b_{3(n+1)+1}\, x^{3(n+1)+1}}{b_{3n+1}\, x^{3n+1}} \right\vert &= \lim_{n\rightarrow\infty} \left\vert \dfrac{\dfrac{x^{3n+4}}{(3n+4)(3n+3)\cdot\ldots\cdot 7\cdot 6\cdot 4\cdot 3}}{\dfrac{x^{3n+1}}{(3n+1)(3n)\cdot\ldots\cdot 7\cdot 6\cdot 4\cdot 3}}\right\vert \\
\\
&= \vert x \vert^3 \lim_{n\rightarrow\infty} \dfrac{1}{(3n+4)(3n+3)} \\
\\
&= 0 \lt 1,\quad \forall x\in\mathbb{R}.
\end{aligned} $$
The radius of convergence is $\rho = \infty$, so the interval of convergence is all of $\mathbb{R}$.
Knowing that the series converges for all $x$ is very nice, but it does not mean that it will be simple to find the value of our functions far from $x=0$. To illustrate this, here are plots of many members of the sequence of partial sums for $y_1$ and $y_2$:
We see that as our distance from $0$ increases, the accuracy of the approximation diminishes rapidly, even for high-degree polynomials from the sequence of partial sums.
Note the drastic difference in the behavior of the solutions for negative $x$ and positive $x$. For negative inputs they look oscillatory, like sine and cosine, but they show exponential or hyperbolic growth for positive inputs. This is due to the presence of a turning point at zero, where the solutions change from one behavior to the other.
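To reproduce this comparison without the plots, here is a minimal sketch assuming SciPy is available (the function names are ours). It evaluates a truncated series for $y_1$, the solution with $y(0)=1$ and $y'(0)=0$, against a high-accuracy numerical solution of the same initial value problem at points increasingly far from the center:
```python
import numpy as np
from scipy.integrate import solve_ivp

def y1_partial_sum(x, num_terms):
    """Truncated series for y1: 1 + sum of b_{3n} x^(3n), built from the recurrence."""
    total, coeff = 1.0, 1.0
    for n in range(1, num_terms):
        coeff /= (3 * n) * (3 * n - 1)      # b_{3n} = b_{3(n-1)} / [(3n)(3n - 1)]
        total += coeff * x ** (3 * n)
    return total

# Reference: integrate y'' = x y numerically with y(0) = 1, y'(0) = 0.
def airy_rhs(x, state):
    y, dy = state
    return [dy, x * y]

xs = np.array([1.0, 2.0, 4.0])
sol = solve_ivp(airy_rhs, (0.0, 4.0), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)

for x, y_ref in zip(xs, sol.y[0]):
    approx = y1_partial_sum(x, num_terms=6)   # series truncated after the x^15 term
    print(f"x = {x}: truncated series = {approx:.4f}, numerical solution = {y_ref:.4f}")
```
At $x=1$ and $x=2$ the truncated series tracks the numerical solution closely, but by $x=4$ the omitted tail of the series is already noticeable, echoing the behavior seen in the partial-sum plots.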
Given the differential equation
$$ y'' - xy' - y = 0, $$
find the series solution about $x_0 = 0$ by doing the following:
Given the differential equation
$$ (3-x^2)y'' - 3xy' - y = 0, $$
find the series solution about $x_0 = 0$ by doing the following: