In Section 2.4, we introduced the existence and uniqueness theorem for first order nonlinear differential equations. In this section, we will discuss the proof of that theorem in some (but not all) detail. To begin, the statement of the theorem is
Theorem 2.4.2
Existence and Uniqueness for First Order Nonlinear Differential Equations
Consider the possibly nonlinear first order initial value problem
$$ y' = f(t,y),\qquad y(t_0) = y_0. $$ If the function $f$ and $\dfrac{\partial f}{\partial y}$ are both continuous in some rectangle $a < t < b$, $c < y < d$ containing the point $(t_0,y_0)$, then in some interval $t_0 - h < t < t_0 + h$ contained in $(a,b)$, there is a unique solution $y = \phi(t)$ of the initial value problem.
Proving this theorem takes some care because, unlike in the case of linear equations or some special nonlinear cases such as separable or exact equations, it is not possible to produce a general formula for the solution. If such a formula existed, proving that all initial value problems have a solution would just involve deriving the general formula and demonstrating under what conditions it holds.
That approach does not work for all first order ODEs, so we must find another way. The technique we will use is attributed to Charles-Émile Picard and is known as Picard's iteration method or the method of successive approximations. To use it, we must first rewrite our differential equation
$$ y' = f(t,y) $$
as an equivalent integral equation
$$ \phi(t) = \int_0^t f(s,\phi(s))\, ds, $$
in which we are looking for a solution $\phi(t)$ of the stated ODE that also satisfies $\phi(0) = 0$.
This conversion to an integral equation works as follows:
The equation $\displaystyle\ \phi(\tau) = \int_0^\tau g(s,\phi(s))\, ds$ is not itself a solution to the initial value problem. What it provides is another equation that must be satisfied by any solution $\phi(\tau)$ of the initial value problem. Suppose $\phi(\tau)$ is a continuous function that satisfies the integral equation. Setting $\tau=0$ in the integral equation gives $\phi(0)=0$, so the initial condition is met. Also, the fundamental theorem of calculus and the continuity of $g(\tau,\phi)$ guarantee that $\phi'(\tau) = g(\tau,\phi(\tau))$, and hence $\phi$ is also a solution to the initial value problem.
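Conversely, if $\phi$ is a solution of the initial value problem, then integrating both sides of $\phi'(s) = g(s,\phi(s))$ from $0$ to $\tau$ gives
$$ \phi(\tau) - \phi(0) = \int_0^\tau g(s,\phi(s))\, ds, $$
and since $\phi(0) = 0$, this is exactly the integral equation. The two formulations are therefore equivalent.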
Using the integral equation, we want to describe a procedure that creates a sequence of functions converging to a solution of the integral equation.
We begin by choosing an initial function $\phi_0$. Any choice may be made, but it is best to choose
$$ \phi_0(t) = 0 $$
since it is the simplest function that satisfies the initial condition of the initial value problem
$$ \dfrac{dy}{dt} = f(t,y),\qquad y(0) = 0. $$
Note that we have returned to the typical variables to avoid confusion and, most importantly, that the initial value problem has been posed so that the initial value is at the origin.
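There is no loss of generality in this assumption: a general initial condition $y(t_0) = y_0$ can always be moved to the origin by translating the variables. (The symbol $\tilde f$ below is just shorthand introduced here for the translated right-hand side.)
$$ \tilde t = t - t_0,\qquad \tilde y = y - y_0 \quad\Longrightarrow\quad \dfrac{d\tilde y}{d\tilde t} = f(\tilde t + t_0,\, \tilde y + y_0) = \tilde f(\tilde t, \tilde y),\qquad \tilde y(0) = 0. $$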
Taking the initial function $\phi_0$, we seek to create a sequence of functions
$$ \{\phi_n\} = \{\phi_0, \phi_1, \phi_2,\ldots,\phi_n,\ldots \} $$
that converges to the actual solution $\phi$. The process is straightforward: we use the current element of the sequence to compute the next one by substituting it into the integral equation
$$ \phi_1(t) = \int_0^t f(s,\phi_0(s))\, ds. $$
Setting $t=0$ shows that this also satisfies the initial condition. We can continue generating members of the sequence by using the previous element to compute the next
$$ \phi_{n+1}(t) = \int_0^t f(s,\phi_n(s))\, ds. $$
Each element of the sequence $\phi_n(t)$ satisfies the initial condition, but not necessarily the differential equation. It is the case that if we ever find that $\phi_{k+1}(t) = \phi_k(t)$, then $\phi_k$ is a solution of the integral equation and the sequence stops there. This is a rare occurrence, so we need to be prepared to look at the entire sequence of iterants.
Several questions need to be answered in order to guarantee that this process works, including:
- Do all members of the sequence $\{\phi_n\}$ exist, or might the process break down at some stage?
- Does the sequence converge?
- Does the limit function satisfy the integral equation, and hence the initial value problem?
- Is this the only solution, or might there be others?
Instead of answering all of these questions now, let's apply the method in an example and see some of these issues addressed during the demonstration.
Let's begin with the initial value problem
$$ y' = 2(y+1),\qquad y(0) = 0. $$
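Note that $f(t,y) = 2(y+1)$ and $\dfrac{\partial f}{\partial y} = 2$ are continuous for all $(t,y)$, so the existence and uniqueness theorem guarantees a unique solution on some interval containing $t = 0$.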
Choosing $\phi_0(t) = 0$, we begin to compute the iterants in our sequence:
$$ \begin{aligned}
\phi_1(t) &= \int_0^t f(s,\phi_0(s))\, ds \\
\\
&= \int_0^t 2\Big( \phi_0(s) + 1\Big)\, ds \\
\\
&= \int_0^t 2(0 + 1)\, ds \\
\\
&= \int_0^t 2\, ds \\
\\
&= 2s \Big\rvert_{\,0}^{\,t} \\
\\
&= 2t.
\end{aligned} $$
Using the formula for $\phi_1$, we compute $\phi_2$
$$ \begin{aligned}
\phi_2(t) &= \int_0^t 2\Big( \phi_1(s) + 1\Big)\, ds \\
\\
&= \int_0^t 2(2s + 1)\, ds \\
\\
&= \int_0^t 4s + 2\, ds \\
\\
&= \dfrac{4s^2}{2} + 2s \Big\rvert_{\,0}^{\,t} \\
\\
&= \dfrac{4t^2}{2} + 2t.
\end{aligned} $$
Continuing the process to find $\phi_3$,
$$ \begin{aligned}
\phi_3(t) &= \int_0^t 2\Big( \phi_2(s) + 1\Big)\, ds \\
\\
&= \int_0^t 2\left(\dfrac{4s^2}{2} + 2s + 1\right)\, ds \\
\\
&= \int_0^t \dfrac{8s^2}{2} + 4s + 2\, ds \\
\\
&= \dfrac{8s^3}{6} + \dfrac{4s^2}{2} + 2s \Big\rvert_{\,0}^{\,t} \\
\\
&= \dfrac{8t^3}{6} + \dfrac{4t^2}{2} + 2t.
\end{aligned} $$
At this point, let's pause and look at a couple of things. First, you may have noticed that each iterant contains the previous element of the sequence. This is evidence that the sequence construction may be converging (and ought to remind you of a Taylor series). Second, you may be wondering why the coefficients in these expressions have not been simplified. The reason is that simplification makes it harder to see patterns, and it is exactly such a pattern that will lead us to the Taylor series.
Our expression for $\phi_3(t)$ can be rewritten as
$$ \begin{aligned}
\phi_3(t) &= \dfrac{2^3 t^3}{3\cdot 2\cdot 1} + \dfrac{2^2 t^2}{2\cdot 1} + \dfrac{2^1 t^1}{1} \\
\\
&= \dfrac{(2t)^3}{3!} + \dfrac{(2t)^2}{2!} + \dfrac{(2t)^1}{1!} \\
\\
&= \sum_{k=1}^{3} \dfrac{(2t)^k}{k!}
\end{aligned} $$
This suggests that the general term $\phi_n(t)$ is likely
$$ \phi_n(t) = \sum_{k=1}^{n} \dfrac{(2t)^k}{k!} $$
a fact which we may confirm using mathematical induction: we want to show that the formula above holds for every $n \geq 1$.
Mathematical Induction
The principle of mathematical induction is employed to demonstrate that a fact holds for each case $n\geq 1$.
A proof requires:
- That the fact holds for $n=1$.
- That if the fact holds for $n=k$, then it holds for $n=k+1$.
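For this example, the inductive step is worth writing out. Assuming the formula holds for $n = k$, the recursion gives
$$ \begin{aligned}
\phi_{k+1}(t) &= \int_0^t 2\Big(\phi_k(s) + 1\Big)\, ds
= \int_0^t \left( 2 + \sum_{j=1}^{k} \dfrac{2\,(2s)^j}{j!} \right) ds \\
&= 2t + \sum_{j=1}^{k} \dfrac{(2t)^{j+1}}{(j+1)!}
= \sum_{j=1}^{k+1} \dfrac{(2t)^{j}}{j!},
\end{aligned} $$
which is exactly the formula for $n = k+1$; the base case $n = 1$ is the computation $\phi_1(t) = 2t$ carried out above.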
In addition, this form for the general term is very close to the Taylor series for $e^{2t}$, since
$$ e^t = \sum_{k=0}^\infty \dfrac{t^k}{k!}\quad\Rightarrow\quad e^{2t} = \sum_{k=0}^\infty \dfrac{(2t)^k}{k!}. $$
The key difference is that the sum starts at $k=0$ for $e^{2t}$ and at $k=1$ for $\phi_n(t)$. To get them to match, it is necessary to "peel off" the $k=0$ term from the infinite sum
$$ e^{2t} = 1 + \sum_{k= 1}^\infty \dfrac{(2t)^k}{k!}\quad\Rightarrow\quad e^{2t} -1 = \sum_{k=1}^\infty \dfrac{(2t)^k}{k!}, $$
and take the limit as $n\rightarrow\infty$ for $\phi_n(t)$. Hence our sequence converges to
$$ \phi(t) = \lim_{n\rightarrow\infty} \phi_n(t) = \sum_{k=1}^{\infty} \dfrac{(2t)^k}{k!} = e^{2t}-1 $$
Therefore, the solution to this initial value problem is $\phi(t) = e^{2t}-1$. (As a quick check, $\phi'(t) = 2e^{2t} = 2(\phi(t)+1)$ and $\phi(0) = 0$, as required.)
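The hand computations above can also be reproduced symbolically. Below is a minimal sketch, assuming the SymPy library is available; the function name picard_iterants is just illustrative. It generates the iterants for this example and checks one of them against the general term we found.

```python
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterants(f, n):
    """Return [phi_0, ..., phi_n] for y' = f(t, y), y(0) = 0, starting from phi_0 = 0."""
    phi = sp.Integer(0)                      # phi_0(t) = 0
    iterants = [phi]
    for _ in range(n):
        # phi_{k+1}(t) = integral from 0 to t of f(s, phi_k(s)) ds
        phi = sp.integrate(f(s, phi.subs(t, s)), (s, 0, t))
        iterants.append(sp.expand(phi))
    return iterants

# The example y' = 2(y + 1), y(0) = 0
its = picard_iterants(lambda tt, yy: 2*(yy + 1), 4)
for k, phi_k in enumerate(its):
    print(k, phi_k)

# phi_n should match the partial sum sum_{k=1}^{n} (2t)^k / k!
n = 4
partial_sum = sum((2*t)**k / sp.factorial(k) for k in range(1, n + 1))
print(sp.simplify(its[n] - partial_sum) == 0)   # expect True
```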
Typically, it is not going to be possible to rewrite the limit of the sequence of successive approximations in terms of elementary functions, but it is something that you should look to try. What is important is to search for emerging patterns in the terms to see if it is possible to write a general term for the sequence. Sometimes these general terms are close to the Taylor series of well-known functions. Other times, they form the basis of special classes of functions that were developed specifically to solve differential equations, such as Bessel functions or Legendre polynomials.
Consider the initial value problem
$$ \dfrac{dy}{dt} = t^2 + y^2,\qquad y(0) = 0. $$
Take $\phi_0(t) = 0$, compute the first three iterants $\phi_1$, $\phi_2$, and $\phi_3$, and try to determine whether the sequence converges.
Using the method of successive approximations, determine the solution to the initial value problem
$$ y' = t - y,\qquad y(0) = 0. $$
Using the method of successive approximations, determine the solution to the initial value problem
$$ y' = ty + 1,\qquad y(0) = 0. $$