A
Taylor Series
is a series whose terms are formed from nonnegative integer powers of an independent variable $x$:
$$\displaystyle\sum_{k=0}^{\infty} a_k(x-x_0)^k.$$
The partial sums
$$\begin{align*}
s_0(x) &= a_0 \\
\\
s_1(x) &= a_0 + a_1(x - x_0) = \color{#307fe2}{(a_0 - a_1x_0)} + \color{#ec008c}{a_1}x \\
\\
s_2(x) &= a_0 + a_1(x - x_0) + a_2(x-x_0)^2 \\
&= \color{#307fe2}{(a_0 - a_1x_0 + a_2x_0^2)} + \color{#ec008c}{(a_1 - 2a_2x_0)}x + \color{#d17300}{a_2}x^2 \\
\\
s_3(x) &= a_0 + a_1(x - x_0) + a_2(x-x_0)^2 + a_3(x-x_0)^3 \\
&= \color{#307fe2}{(a_0 - a_1x_0 + a_2x_0^2 - a_3x_0^3)} + \color{#ec008c}{(a_1 - 2a_2x_0 + 3a_3x_0^2)}x + \color{#d17300}{(a_2 - 3a_3x_0)}x^2 + \color{#f3ad1c}{a_3}x^3 \\
&\ddots \\
s_n(x) &= a_0 + a_1(x - x_0) + a_2(x-x_0)^2 +\ \cdots\ + a_n(x-x_0)^n. \\
\end{align*}$$
are called
Taylor polynomials
and $s_n(x)$ is a Taylor polynomial of degree $n$. The right shift $x_0$ is called the
base point
. One gets a better view of the Taylor series and Taylor polynomials by first looking at examples where the base point is $x_0=0$:
$$\displaystyle\sum_{k=0}^{\infty} a_k(x-0)^k = \displaystyle\sum_{k=0}^{\infty} a_kx^k.$$
Such a Taylor series is called a
Maclaurin series
. The Taylor polynomials in a Maclaurin series are simpler
$$\begin{align*}
s_0(x) &= a_0 \\
\\
s_1(x) &= a_0 + a_1x \\
\\
s_2(x) &= a_0 + a_1x + a_2x^2 \\
\\
s_3(x) &= a_0 + a_1x + a_2x^2 + a_3x^3 \\
&\ddots \\
s_n(x) &= a_0 + a_1x + a_2x^2 +\ \cdots\ + a_nx^n. \\
\end{align*}$$
Each Taylor polynomial $s_n(x)$ is a function of one real variable; its domain is the entire real line and its codomain is the entire real line. Taylor polynomials have all the properties we have come to expect from polynomials. Evaluating a polynomial at any real number $x$ is well understood, which is why Taylor polynomials are such a useful tool.
List the first 4 Taylor polynomials of the Taylor series
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(x+1)^k}{2^k}.$$
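One way to check such a computation is a small Python sketch (the helper names are ours). Here the base point is $x_0=-1$ and the coefficients are $a_k = 1/2^k$, kept exact with the `fractions` module:

```python
from fractions import Fraction

def taylor_poly_terms(n):
    """Terms (coefficient, power) of s_n(x) = sum_{k=1}^{n} (x+1)^k / 2^k,
    kept in powers of (x + 1) with exact rational coefficients."""
    return [(Fraction(1, 2**k), k) for k in range(1, n + 1)]

def s_n(x, n):
    """Evaluate the degree-n Taylor polynomial at a rational point x."""
    return sum(c * Fraction(x + 1)**k for c, k in taylor_poly_terms(n))

# The first 4 Taylor polynomials of the series above.
for n in range(1, 5):
    terms = " + ".join(f"{c}*(x+1)^{k}" for c, k in taylor_poly_terms(n))
    print(f"s_{n}(x) = {terms}")

print(s_n(0, 4))  # 1/2 + 1/4 + 1/8 + 1/16 = 15/16
```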
We can also evaluate the Taylor series at individual points. This can be more difficult if we do not recognize the series. For example, to evaluate the series
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(x+1)^k}{2^k}$$
at $x=-1$ we proceed as always for algebraic expressions; we replace the variable $x$ with the value $-1$ to obtain
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(-1+1)^k}{2^k}.$$
Then we proceed to evaluate the series by computing the limit of the sequence of partial sums.
$$\begin{align*}
\displaystyle\sum_{k=1}^{\infty}\dfrac{(-1 + 1)^k}{2^k} &= \displaystyle\sum_{k=1}^{\infty}\dfrac{0^k}{2^k} \\
\\
s_1(-1) &= 0 \\
\\
s_2(-1) &= 0 + 0 \\
&\ddots \\
s_n(-1) &= 0 + 0 +\ \cdots\ + 0 \\
\\
\displaystyle\lim_{n\to\infty} s_n(-1) &= \displaystyle\lim_{n\to\infty} 0 = 0.
\end{align*}$$
If we evaluate the series at $x=0$, then we obtain the series
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(0+1)^k}{2^k} = \displaystyle\sum_{k=1}^{\infty}\dfrac{1}{2^k}.$$
This is a geometric series with $r=\frac{1}{2}$ and $a=1$, missing its $k=0$ term, so
$$\begin{align*}
\displaystyle\sum_{k=1}^{\infty}\dfrac{(0+1)^k}{2^k} &= \displaystyle\sum_{k=1}^{\infty}\left(\dfrac{1}{2}\right)^k \\
\\
&= \displaystyle\sum_{k=0}^{\infty}\left(\dfrac{1}{2}\right)^k - \left(\dfrac{1}{2}\right)^0 \\
\\
&= \dfrac{1}{1-\frac{1}{2}} - 1 = 2 - 1 = 1.
\end{align*}$$
Notice that we added and subtracted the $k=0$ term, $\left(\frac{1}{2}\right)^0 = 1$, so that we could recognize the value of the series.
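As a quick numerical check (a plain Python sketch, not part of the derivation), the partial sums of $\sum_{k\ge 1} \frac{1}{2^k}$ do approach $1$:

```python
partial_sums = []
total = 0.0
for k in range(1, 31):
    total += 1 / 2**k          # term (0+1)^k / 2^k = 1/2^k
    partial_sums.append(total)

# Each partial sum is 1 - 1/2^n, so the sequence approaches 1.
print(partial_sums[:4])   # [0.5, 0.75, 0.875, 0.9375]
print(partial_sums[-1])
```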
It seems that for at least some real numbers $x$, the series converges to a finite limit. We need to determine all of the real numbers $x$ for which the series converges to a finite limit.
We can apply the ratio test to our series. We compute the limit
$$\displaystyle\lim_{k\to\infty} \left|\dfrac{a_{k+1}}{a_k}\right|.$$
Where this limit is less than $1$, the series converges absolutely. Where the limit is greater than $1$ or does not exist, the series diverges. The test is inconclusive for values of $x$ for which the limit equals $1$. Each term of our series is given by
$$a_k = \dfrac{(x+1)^k}{2^k}$$
and
$$\left|a_k\right| = \left|\dfrac{(x+1)^k}{2^k}\right| = \dfrac{|x+1|^k}{2^k}.$$
Thus for our series the limit becomes
$$\displaystyle\lim_{k\to\infty} \left|\dfrac{a_{k+1}}{a_k}\right| = \displaystyle\lim_{k\to\infty} \dfrac{\dfrac{|x+1|^{k+1}}{2^{k+1}}}{\dfrac{|x+1|^k}{2^k}} = \displaystyle\lim_{k\to\infty} \dfrac{|x+1|^{k+1}}{2^{k+1}}\cdot\dfrac{2^k}{|x+1|^k} = \displaystyle\lim_{k\to\infty} \dfrac{|x+1|}{2}.$$
Notice that the last expression has no instances of the index $k$ so the limit is the limit of a constant expression. We need this expression to be less than 1 to pass the
Ratio Test
.
$$\begin{align*}
\displaystyle\lim_{k\to\infty} \dfrac{|x+1|}{2} &< 1 \\
\\
\dfrac{|x+1|}{2} &< 1 \\
\\
|x+1| &< 2 \\
\\
-2 < x+1 &< 2 \\
\\
-3 < x &< 1.
\end{align*}$$
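The cancellation in the ratio can be sanity-checked numerically; a short sketch (the helper name is ours) shows the ratio of consecutive terms does not depend on $k$:

```python
def ratio(x, k):
    """|a_{k+1}| / |a_k| for a_k = (x+1)^k / 2^k."""
    a_k  = abs(x + 1) ** k / 2 ** k
    a_k1 = abs(x + 1) ** (k + 1) / 2 ** (k + 1)
    return a_k1 / a_k

# The ratio equals |x+1|/2 for every k, matching the algebra above.
print(ratio(0.5, 3), ratio(0.5, 50))   # both 0.75 = |0.5 + 1| / 2
```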
This means that for any real value $x$ in the interval $(-3,1)$, that is, for $-1-2 < x < -1+2$, the series
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(x+1)^k}{2^k}$$
converges absolutely. We still need to check convergence at the endpoints $x=-3$ and $x=1$. At $x=1$ we have
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(1+1)^k}{2^k} = \displaystyle\sum_{k=1}^{\infty} 1 = \infty.$$
At $x=-3\ $ we have the series
$$\displaystyle\sum_{k=1}^{\infty}\dfrac{(-3+1)^k}{2^k} = \displaystyle\sum_{k=1}^{\infty} (-1)^k,$$
which diverges. We can create a function $f:(-3,1)\to\mathbb{R}$ that assigns the value
$$y = f(x) = \displaystyle\sum_{k=1}^{\infty}\dfrac{(x+1)^k}{2^k}$$
for every $x\in(-3,1)$. For any value $x \le -3$ or $x \ge 1$ the series diverges. We call the function $f$
analytic
because the function $f$ has an absolutely convergent Taylor series expression on its domain $(-3,1)$.
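We can approximate $f$ by its partial sums. For $x$ inside $(-3,1)$ the series is geometric with ratio $r=(x+1)/2$, so the geometric series formula gives the closed form $r/(1-r) = (x+1)/(1-x)$ (a simplification we add here, not derived above). A sketch:

```python
def f_partial(x, terms=200):
    """Partial sum of f(x) = sum_{k>=1} ((x+1)/2)^k."""
    r = (x + 1) / 2
    return sum(r**k for k in range(1, terms + 1))

def f_closed(x):
    """Geometric closed form r/(1-r) with r = (x+1)/2, valid on (-3, 1)."""
    return (x + 1) / (1 - x)

# Inside the interval the partial sums agree with the closed form.
for x in (-2.0, 0.0, 0.5):
    assert abs(f_partial(x) - f_closed(x)) < 1e-9

# At the endpoint x = 1 every term is 1, so partial sums grow without bound.
print(f_partial(1.0, 200))   # 200.0
```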
Recall that in our example the interval of absolute convergence is
$$|x+1| < 2.$$
If we consider $x$ to be a complex number instead of a real number, then this inequality describes a disk in the complex plane. This is because the absolute value of a complex number $x = a + bi$ is $|x| = \sqrt{a^2 + b^2}$. Thus our inequality becomes
$$\begin{align*}
\sqrt{(a+1)^2 + b^2} &< 2 \\
\\
(a+1)^2 + b^2 &< 4;
\end{align*}$$
the interior of the circle of radius $2$ centered at $(-1,0)$, that is, an open disk.
The radius of this disk of values is called the radius of convergence. The intersection of this disk with the real line, $(-3,1)$, is called the interval of convergence.
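A brief sketch treating $x$ as a complex number (the point chosen is arbitrary): Python's built-in complex type lets us test the disk condition $|x+1|<2$ and watch the partial sums converge inside the disk to the geometric closed form.

```python
def in_disk(z):
    """Ratio-test prediction: the series converges absolutely iff |z + 1| < 2."""
    return abs(z + 1) < 2

def partial(z, terms=400):
    """Partial sum of sum_{k>=1} ((z + 1)/2)^k."""
    r = (z + 1) / 2
    return sum(r**k for k in range(1, terms + 1))

z = -1 + 1.5j                      # |z + 1| = 1.5 < 2: inside the disk
exact = (z + 1) / (1 - z)          # geometric closed form r/(1 - r)
print(in_disk(z), abs(partial(z) - exact))   # True, and a tiny difference
```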
The term analytic is used in a few different ways to describe functions.
When solving differential equations, we need our functions to be analytic to guarantee the existence and uniqueness of a solution.
Additionally, we need to understand the relationship between analytic functions and the functions we have learned about previously. The functions we have studied in our previous mathematics courses are called elementary functions, and most elementary functions are analytic on their domains.
There are several important differential equations that do not have elementary solutions.
To provide some context for how we think about using analytic solutions of differential equations, we consider how we employ rational and irrational numbers. The rational numbers are like our elementary functions.
A transcendental number is one that cannot be expressed as a root of a (finite) polynomial with rational coefficients. All numbers that are roots of such polynomials are called algebraic .
How do we perform computations with these numbers?
We write down a rational approximation to our irrational number: the first several decimals of its decimal expansion. We cannot write down all of them, so we truncate the expansion so that the errors in our computations are small enough to be acceptable for our application.
How do we perform computations with a power series?
The solutions to many important differential equations have an analytic power series solution. We cannot write down all the terms in the series. We truncate our power series expansion so that the errors in our computations using the Taylor polynomial approximation are small enough to be acceptable to our application.
But first we must find the power series expansion of our solution.
If $f$ is an analytic function how do we find the power series expansion (representation) of $f$?
The easiest way to compute the power series of a function at $x_0=0$, that is, its Maclaurin series, is to use Taylor's Theorem.
Taylor's Theorem
If $f$ has a convergent power series representation (expansion) at $a$, that is if
$$f(x) = \displaystyle\sum_{k=0}^{\infty} a_k(x-a)^k,\qquad |x-a|<\rho,$$
then the coefficients have the form
$$a_k = \dfrac{f^{(k)}(a)}{k!};$$
and the power series becomes
$$f(x) = \displaystyle\sum_{k=0}^{\infty}\dfrac{f^{(k)}(a)}{k!}(x-a)^k,\qquad |x-a|<\rho.$$
Here $f^{(0)}(a)$, the zeroth derivative of $f$ at $a$, is just $f(a)$. If $a=0$, then we have the Maclaurin series, whose form is
$$f(x) = \displaystyle\sum_{k=0}^{\infty}\dfrac{f^{(k)}(0)}{k!}x^k,\qquad |x|<\rho.$$
Consider the polynomial $p(x) = 3x^2 - 7x + 13$.
Consider the function $f(x) = e^x$.
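For both examples, Taylor's Theorem gives the coefficients directly. For $p(x) = 3x^2 - 7x + 13$ we have $p(0)=13$, $p'(0)=-7$, $p''(0)=6$, and higher derivatives are $0$; for $e^x$, every derivative is $e^x$, so $f^{(k)}(0)=1$ and $a_k = 1/k!$. A plain Python sketch (function name is ours):

```python
import math

# p(x) = 3x^2 - 7x + 13: a_k = p^(k)(0)/k! recovers the original coefficients.
p_coeffs = [13 / math.factorial(0), -7 / math.factorial(1), 6 / math.factorial(2)]
print(p_coeffs)   # [13.0, -7.0, 3.0]

def maclaurin_exp(x, n):
    """Degree-n Maclaurin polynomial of e^x: a_k = f^(k)(0)/k! = 1/k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

print(maclaurin_exp(1.0, 15), math.exp(1.0))   # nearly identical
```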
If $f$ is an analytic function with absolutely convergent power series
$$f(x) = \displaystyle\sum_{k=0}^{\infty} a_k x^k,$$
then the derivative can be easily computed
term by term
.
$$\begin{align*}
f(x) &= a_0 + a_1x + a_2x^2 + a_3x^3 +\ \cdots\ + a_nx^n +\ \cdots \\
\\
f'(x) &= 0 + a_1 + 2a_2x + 3a_3x^2 +\ \cdots\ + na_nx^{n-1} +\ \cdots \\
\\
&= \displaystyle\sum_{k=1}^{\infty} ka_kx^{k-1} \\
\\
f''(x) &= 0 + 0 + 2a_2 + 6a_3x +\ \cdots\ + n(n-1)a_nx^{n-2} +\ \cdots \\
\\
&= \displaystyle\sum_{k=2}^{\infty} k(k-1)a_kx^{k-2} \\
\\
f'''(x) &= 0 + 0 + 0 + 6a_3 +\ \cdots\ + n(n-1)(n-2)a_nx^{n-3} +\ \cdots \\
\\
&= \displaystyle\sum_{k=3}^{\infty} k(k-1)(k-2)a_kx^{k-3} \\
&\ddots \\
f^{(n)}(x) &= 0 + 0 + 0 +\ \cdots\ + n!a_n +\ \cdots \\
\\
&= \displaystyle\sum_{k=n}^{\infty} k(k-1)\cdots(k-n+1)a_kx^{k-n} \\
\end{align*}$$
Notice that the derivative of a constant is $0$, not a lower power of $x$, so the starting index of the power series increases by one with each derivative.
If a series is only conditionally convergent, then its derivatives cannot be calculated using this method; term-by-term differentiation requires absolute convergence. That is why we will always require analytic functions and analytic solutions.
Consider the analytic function $f(x) = e^x$. Differentiate the Maclaurin series term by term to obtain the derivative of the exponential function.
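For this exercise, the Maclaurin series of $e^x$ is $\sum_{k\ge 0} x^k/k!$, and differentiating term by term puts the coefficient $k\cdot\frac{1}{k!} = \frac{1}{(k-1)!}$ on $x^{k-1}$, reproducing the same series. A short numerical sketch of that coefficient identity:

```python
import math

n_terms = 20
a = [1 / math.factorial(k) for k in range(n_terms)]   # a_k = 1/k!

# Term-by-term derivative: the coefficient of x^(k-1) in f' is k * a_k.
deriv = [k * a[k] for k in range(1, n_terms)]

# k * (1/k!) = 1/(k-1)!, so f' has the same coefficients as f:
# the derivative of the exponential series is the exponential series.
print(all(math.isclose(deriv[k], a[k]) for k in range(n_terms - 1)))   # True
```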
Creative Commons Attribution-NonCommercial-ShareAlike 4.0
Attribution
You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
Noncommercial
You may not use the material for commercial purposes.
Share Alike
You are free to share, copy and redistribute the material in any medium or format. If you adapt, remix, transform, or build upon the material, you must distribute your contributions under the
same license
as the original.