
Math 451: Computational Mathematics

3.3 Finding Zeros of a Function


3.3.1 Operation Counts

Another way to evaluate an algorithm or compare algorithms is in terms of execution time. We usually keep track of the number of computations or floating point operations (flops) employed by the algorithm to achieve the desired result.

Example 3

Consider an algorithm to compute the sum of the first $n$ positive integers.

function result = sum(n) 
    result = 0;                  % 1 count
    for k=1:n                    % n counts
        result = result + k;     % n counts 
    end
end


We might count the number of assignment statements. We can define a function $T:\mathbb{Z}\rightarrow\mathbb{Z}$ that assigns to each input $n$ the number of assignment statements required by the algorithm to compute

$$ \text{sum}(n) = \displaystyle\sum_{k=1}^n k = 1 + 2 + 3 + \dots + n $$

For example, $T(0) = 1$ since if $n=0$, the number of assignment statements executed is one. Similarly $T(1) = 3$, $T(2) = 5$, $T(3) = 7$, $\dots$, and

$$ T(n) = 2n+1 $$
So the time it takes to solve a problem of size $n$ is $2n+1$ steps. Clearly the number of assignments needed to determine the sum of the first 100,000 integers is bigger than the number of assignments needed to determine the sum of the first 1,000 integers.
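By contrast, Gauss's closed form $\sum_{k=1}^n k = \frac{n(n+1)}{2}$ computes the same sum in a fixed number of operations regardless of $n$. Here is a minimal sketch (the function name is ours, not part of the original example).

function result = sum_closed_form(n)
    % Gauss's closed form: 1 + 2 + ... + n = n*(n+1)/2.
    % One addition, one multiplication, one division, and one
    % assignment, no matter how large n is.
    result = n*(n + 1)/2;
end

No matter the size of the problem, this version executes the same number of steps.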

We want to measure how the algorithm's execution time changes with respect to the size of the problem. We don't need the exact number of operations. We are instead interested in the order of magnitude at which the number of operations increases with the size of the problem. This order of magnitude is written using Big-O notation, O for order. In our example, $T(n)=2n+1$ for a problem of size $n$. So

$$ T(n) = O(n) $$
because

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{T(n)}{n} = \displaystyle\lim_{n\rightarrow\infty} \dfrac{2n+1}{n} = 2 = C $$
where $C$ is a constant.

Definition

We write $T(n) = O\left(f(n)\right)$ if

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{T(n)}{f(n)} = C \neq 0 $$
where $C$ is a finite constant.

Example 4

Consider the following algorithm.

n=10;               % 1 count
a=5;                % 1 count
b=6;                % 1 count
c=10;               % 1 count
for i=1:n           % n counts
    for j=1:n       % n^2 counts
        x = i*i;    % n^2 counts
        y = j*j;    % n^2 counts
        z = i*j;    % n^2 counts
    end
end

for k=1:n           % n counts
    w = a*k + 45;   % 2n counts
    v = b*b;        % n counts
end

d = 33;             % 1 count

fprintf('%g, ', x, y, z, w, v, d)
100, 100, 100, 95, 36, 33, 

Notice that each of $x$, $y$, and $z$ is assigned $n^2$ times, while $w$ and $v$ are assigned on each of the $n$ iterations of the second loop, and $d$ is assigned once. Counting the loop-index assignments and the arithmetic as annotated above, we have a total of $T(n) = 4n^2 + 5n + 5$ operations for this algorithm. This algorithm is $O\left(n^2\right)$ because

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{T(n)}{f(n)} = \displaystyle\lim_{n\rightarrow\infty} \dfrac{4n^2 + 5n + 5}{n^2} = 4 $$
This algorithm is not of order $n$ because

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{T(n)}{f(n)} = \displaystyle\lim_{n\rightarrow\infty} \dfrac{4n^2 + 5n + 5}{n} = \displaystyle\lim_{n\rightarrow\infty} 4n + 5 + \frac{5}{n} = \infty. $$
The algorithm is not of order $n^3$ because

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{T(n)}{f(n)} = \displaystyle\lim_{n\rightarrow\infty} \dfrac{4n^2 + 5n + 5}{n^3} = 0. $$
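One way to see the order empirically (an illustration added here, not part of the original example) is to count the inner-loop assignments directly; doubling $n$ quadruples the count for an $O\left(n^2\right)$ algorithm.

for n = [100, 200, 400]
    count = 0;                       % number of inner-loop assignments
    for i = 1:n
        for j = 1:n
            count = count + 1;       % one assignment per inner iteration
        end
    end
    fprintf('n = %4d   count = %6d\n', n, count)   % 10000, 40000, 160000
end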

3.3.2 Bisection Algorithm

Let us consider the bisection algorithm.

Figure 1: Bisection Algorithm.

Notice that in each pass through the loop the algorithm executes two assignments: $c = \frac{a+b}{2}$, and either $a=c$ or $b=c$.

Also in each loop the distance from $a$ to $b$ is half of what it was in the previous loop. What do we know? If we say that each evaluation of $f$ costs $f(n)$ operation counts, then

for iterations=1:max_loop_count                  % n operation counts

    c = (a + b)/2;                               % 2n operation counts

    if f(a)*f(c) > 0                             % (f(n) + f(n) + 1)*n operation counts
        a = c;                                   % n operation counts
    else 
        b = c;
    end

    if abs(b-a) < 2*tolerance                    % 4n operation counts
        break
    end  
end

x0 = (a + b)/2;                                  % 2 operation counts

There will be a total of $(2f(n) + 8)n + 2$ operation counts, where $n$ is the number of loop iterations. If the operation count for evaluating $f$ is a constant, then $K = 2f(n) + 8$ is a constant, the algorithm executes in $Kn + 2$ operations, and the execution time of the bisection algorithm grows with order $O(n)$ because

$$ \displaystyle\lim_{n\rightarrow\infty} \dfrac{Kn + 2}{n} = K $$
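For reference, here is a minimal sketch of a complete bisect function built around the loop above. The signature matches the calls bisect(f, a, b, tolerance) used in the examples below; the default values and the omission of the plotting that the course's version evidently performs are our assumptions.

function [x0, iterations] = bisect(f, a, b, tolerance, max_loop_count)
    % Minimal bisection sketch; assumes f(a) and f(b) have opposite signs.
    if nargin < 4, tolerance = sqrt(eps); end      % assumed default
    if nargin < 5, max_loop_count = 128; end       % assumed default
    for iterations = 1:max_loop_count
        c = (a + b)/2;                 % midpoint of current interval
        if f(a)*f(c) > 0
            a = c;                     % zero lies in [c, b]
        else
            b = c;                     % zero lies in [a, c]
        end
        if abs(b - a) < 2*tolerance
            break
        end
    end
    x0 = (a + b)/2;                    % midpoint of final interval
end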
After the execution of each iteration, we obtain a new approximation $x_k$, the midpoint of the current interval $[a_k, b_k]$. Since each iteration halves the interval containing $x_*$,

$$ \left| x_* - x_n \right| \le \dfrac{\left|x_* - x_{n-1}\right|}{2} \le \dots \le \dfrac{\left|x_* - x_0\right|}{2^{n}} \le \dfrac{|b - a|}{2^{n+1}} $$

If we require an accuracy $\delta$ from the bisection algorithm, then how many loops $n$ must be executed? For $0<\delta<1$ we have

$$ \begin{align*} \frac{|b-a|}{2^{n+1}} &\le\,\delta \\ \frac{2^{n+1}}{|b-a|} &\ge \frac{1}{\delta} \\ 2^{n+1} &\ge \frac{|b-a|}{\delta} \\ n+1 &\ge \log_2\left(\frac{|b-a|}{\delta}\right) \\ n &\ge \log_2\left(\frac{|b-a|}{\delta}\right) - 1 \\ n &\ge \log_2\left(|b-a|\right) - \log_2\left(\delta\right) - 1 \\ n &\ge \log_2\left(\frac{1}{\delta}\right) + \log_2\left(|b-a|\right) - 1 \\ n &\ge \frac{\ln\left(\frac{1}{\delta}\right)}{\ln(2)} + \log_2\left(|b-a|\right) - 1 \\ n &\ge \frac{1}{\ln(2)}\cdot\ln\left(\frac{1}{\delta}\right) + \log_2\left(|b-a|\right) - 1 \\ n &\ge C\,\ln\left(\frac{1}{\delta}\right) + k \end{align*} $$
where our constants $C = \frac{1}{\ln(2)}$ and $k = \log_2\left(|b-a|\right) - 1$.

This means that the number of loop iterations, and hence the execution time of our algorithm, grows with order $O\left(\ln\left(\frac{1}{\delta}\right)\right)$.

Notice that as $\delta$ gets smaller, so that we realize better accuracy, the number of loops, and hence the computation time, grows only logarithmically in $\frac{1}{\delta}$. We can use the Intermediate Value Theorem from Calculus I to show that the bisection algorithm converges to a zero of the function between $a$ and $b$.

When an algorithm converges to a desired value as the tolerance converges to zero, we call the order of the algorithm the rate of convergence. For the bisection algorithm, reaching tolerance $\delta$ requires $\log_2\left(\frac{|b-a|}{\delta}\right) - 1$ iterations.

Example 1

If $a=1$, $b=2$, and we want to guarantee an error less than or equal to $10^{-4}$, how many iterations do we need to complete$^{[1]}$?

$$ \begin{align*} N &\ge \log_2\left(\frac{|b-a|}{\delta}\right) - 1 = \log_2\left(\frac{2-1}{10^{-4}}\right) - 1 \\ &\ge \log_2\left(10^4\right) - 1 = 4\log_2(10) - 1 = 4\frac{\log(10)}{\log(2)} - 1 \approx 12.29 \end{align*} $$
So it would take 13 iterations to guarantee an error of less than $10^{-4}$.
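We can verify this count quickly in MATLAB (a sanity check we add here, not part of the cited example):

delta = 1e-4;  a = 1;  b = 2;
n = ceil(log2(abs(b - a)/delta) - 1)   % displays n = 13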

[1] Anne Greenbaum & Timothy P. Chartier, "Numerical Methods: Design, Analysis, and Computer Implementation of Algorithms", Princeton University Press, 2012, Example 4.1.2.

3.3.3 The Conditions Required for the Bisection Algorithm

Example 1

The bisection algorithm requires that you have a continuous function $f:[a,b]\rightarrow\mathbb{R}$ and that $f(a)$ and $f(b)$ have opposite signs. Can this be a problem? Let us consider the function

$$ f(x) = (x-1)(x-2)(x-3) $$
and ask bisect to find a zero between 0 and 5.

f = @(x) (x - 1).*(x - 2).*(x - 3);
[x0, iterations] = bisect(f,0,5,1e-8)
x0 =

       3       


iterations =

      28       


Would you have predicted that it would find the largest zero? When there are multiple zeros between the endpoints, it can be difficult to predict which root the algorithm will converge to. Not all functions cross the horizontal axis. Some functions only touch the axis at a zero.

Example 2

g = @(x) (x-1).*(x-2).^2.*(x-3);
[x0, iterations] = bisect(g,.5,2.95)
a = axis;
a(3:4) = [ -1, 4];
axis(a)
x0 =

       1       


iterations =

      27       


The bisection algorithm requires that $f(a)$ and $f(b)$ have opposite signs, and that the function $f:[a,b]\rightarrow\mathbb{R}$ is continuous. Using the Intermediate Value Theorem we may conclude that there exists a value $a < c < b$ so that $f(c)=0$. In each iteration we get a new interval $[a_n, b_n]$ for which $f(a_n)$ and $f(b_n)$ have opposite signs. Also in each step we compute a new midpoint $c_n = \frac{a_n + b_n}{2}$ and use this midpoint to determine our next interval $[a_{n+1}, b_{n+1}]$.

This algorithm must converge to our zero $x_*$ because

$$ \displaystyle\lim_{n\rightarrow\infty} (b_n - a_n) = \displaystyle\lim_{n\rightarrow\infty} \dfrac{b-a}{2^n} = 0 $$
So we have

$$ \displaystyle\lim_{n\rightarrow\infty} a_n \le \displaystyle\lim_{n\rightarrow\infty} c_n \le \displaystyle\lim_{n\rightarrow\infty} b_n $$
As $\displaystyle\lim_{n\rightarrow\infty} a_n = \displaystyle\lim_{n\rightarrow\infty} b_n$ we have

$$ \displaystyle\lim_{n\rightarrow\infty} a_n = \displaystyle\lim_{n\rightarrow\infty} c_n = \displaystyle\lim_{n\rightarrow\infty} b_n $$
Thus the sequence of approximations $\left\{c_n\right\}$ converges. Let us call the limit $c = \displaystyle\lim_{n\rightarrow\infty} c_n$. Since $f$ is continuous on $[a,b]$,

$$ \displaystyle\lim_{n\rightarrow\infty} f(a_n) = f\left(\displaystyle\lim_{n\rightarrow\infty} a_n\right) = f\left(\displaystyle\lim_{n\rightarrow\infty}c_n\right) = f(c) = f\left(\displaystyle\lim_{n\rightarrow\infty} b_n\right) = \displaystyle\lim_{n\rightarrow\infty} f(b_n) $$
However $f(a_n)$ and $f(b_n)$ have opposite signs for every $n$, so the only value their limits can share is zero. Therefore

$$ f(c) = \displaystyle\lim_{n\rightarrow\infty} f(c_n) = 0 = f(x_*) $$
Therefore the sequence of midpoints converges to a zero of $f$, which we call $x_*$:

$$ \displaystyle\lim_{n\rightarrow\infty} c_n = c = x_* $$
The difference $\left|x_* - c_n\right|$ is called the forward error; it is the error in the computed solution itself. Whenever an algorithm results in a sequence of approximations $\left\{c_n\right\}$ that converges to our required value, we say that our algorithm exhibits forward stability. That is

$$ \displaystyle\lim_{n\rightarrow\infty} \left|x_* - c_n\right| = 0 $$
We also have that $\displaystyle\lim_{n\rightarrow\infty} f(c_n) = 0$. If we define $L = 0$ we can write this

$$ \displaystyle\lim_{n\rightarrow\infty} \left| f(c_n) - L \right| = 0 $$

Recall that the algorithm was designed to find a value $c$ in the domain so that $f(c)=0$. The difference $\left|f(x_*) - f(c_n)\right| = \left|L - f(c_n)\right|$, the residual, is called the backward error; it measures how much the problem would have to change for $c_n$ to be an exact solution. Since the values of $f(c_n)$ converge to the answer $L=0$, we say that the bisection algorithm exhibits backward stability.

We will want our numerical algorithms to be numerically stable. Forward stability and backward stability are related by the condition number $\kappa$. The condition number measures how sensitive the output of an algorithm is to small changes in the input. That is

$$ \left|x_* - c_n\right| \approx \kappa\,\left|f(x_*) - f(c_n)\right| $$
If the condition number $\kappa$ is reasonably small, then a backward stable algorithm will also be forward stable. However, a forward stable algorithm is not necessarily backward stable. If the condition number is very large or infinite, then a backward stable algorithm will not be forward stable.
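We can see this with the double root of $g$ from Example 2 (a numerical aside we add here, using the standard root-finding estimate $\kappa \approx 1/\left|f'(x_*)\right|$, which blows up where the derivative vanishes):

% Near the double root x* = 2 of g, g'(x*) = 0, so kappa is huge.
g = @(x) (x - 1).*(x - 2).^2.*(x - 3);
c = 2 + 1e-5;                                   % forward error is 1e-5
fprintf('forward error  = %.1e\n', abs(c - 2))  % 1.0e-05
fprintf('backward error = %.1e\n', abs(g(c)))   % about 1.0e-10

The residual is five orders of magnitude smaller than the error in $x$: a tiny backward error does not force a tiny forward error when $\kappa$ is huge.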

3.3.4 Taylor's Theorem

Taylor's Theorem is one of the most important theorems in numerical computing.

Theorem

Let $f$, $f'$, $f''$, $\dots$, $f^{(n)}$ be continuous on the closed interval $[a,b]$, and suppose $f^{(n+1)}(x)$ exists on the open interval $(a,b)$. Then there is a value $c\in(a,b)$ such that

$$ f(b) = f(a) + (b-a)f'(a) + \frac{(b-a)^2}{2!}f''(a) + \dots + \frac{(b-a)^n}{n!}f^{(n)}(a) + \frac{(b-a)^{n+1}}{(n+1)!}f^{(n+1)}(c) $$
The value $\frac{(b-a)^{n+1}}{(n+1)!}f^{(n+1)}(c)$ is called the Taylor Remainder.

Since we can apply Taylor's theorem with any $x\in[a,b]$ in place of $b$, provided $f$ has $n+1$ derivatives on the interval $[a,b]$, we write

$$ \begin{align*} f(x) &= f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2!}f''(a) + \dots + \frac{(x-a)^n}{n!}f^{(n)}(a) + \frac{(x-a)^{n+1}}{(n+1)!}f^{(n+1)}(c) \\ \\ &= \displaystyle\sum_{k=0}^n \frac{(x-a)^k}{k!}f^{(k)}(a) + \frac{(x-a)^{n+1}}{(n+1)!}f^{(n+1)}(c) \\ \end{align*} $$
where $a < c < x$. The Taylor remainder is then
$$ \begin{align*} R_n(x) &= f(x) - \displaystyle\sum_{k=0}^n \frac{(x-a)^k}{k!}f^{(k)}(a) \\ \\ &= \frac{(x-a)^{n+1}}{(n+1)!}f^{(n+1)}(c) \\ \\ &= \frac{f^{(n+1)}(c)}{(n+1)!}\,(x-a)^{n+1} = O\left((x-a)^{n+1}\right) \end{align*} $$
since $\frac{f^{(n+1)}(c)}{(n+1)!}$ is bounded by a constant. This is true because, as $x\rightarrow a$, the point $c$ trapped between them is squeezed to $a$, so

$$ \displaystyle\lim_{x\rightarrow a}\frac{R_n(x)}{(x-a)^{n+1}} = \displaystyle\lim_{x\rightarrow a}\frac{f^{(n+1)}(c)}{(n+1)!} = \frac{f^{(n+1)}(a)}{(n+1)!} = C $$
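We can observe the order of the remainder numerically (an illustration we add here, not from the original notes). For the degree-2 Taylor polynomial of $e^x$ about $a=0$, the remainder is $O\left((x-a)^3\right)$, so halving $x$ should divide the error by about $2^3 = 8$:

T2 = @(x) 1 + x + x.^2/2;          % degree-2 Taylor polynomial of exp
for h = [0.1, 0.05, 0.025]
    fprintf('h = %5.3f   |R_2(h)| = %.3e\n', h, abs(exp(h) - T2(h)))
end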

3.3.5 Newton's Method

Using Taylor's theorem we may write a twice differentiable function $f$ on the interval $[x_0,x]$ as

$$ f(x) = f(x_0) + (x-x_0)f'(x_0) + \frac{(x-x_0)^2}{2!}f''(c) = f(x_0) + (x-x_0)f'(x_0) + O\left((x-x_0)^2\right) $$
for some $c\in[x_0,x]$. If we are looking for the point $x_*$ where $f(x_*) = 0$, then

$$ 0 = f(x_0) + (x_*-x_0)f'(x_0) + \frac{(x_*-x_0)^2}{2}f''(c) $$
If $x_*$ is close enough to $x_0$, then we may drop the second-order term and approximate $x_*$ by the value $x_1$ satisfying

$$ \begin{align*} 0 &= f(x_0) + (x_1-x_0)f'(x_0) \\ \\ (x_1-x_0)f'(x_0) &= -f(x_0) \\ \\ x_1 &= x_0 - \frac{f(x_0)}{f'(x_0)} \end{align*} $$
Repeating this process we define for $k=0,1,2,\dots$,

$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} $$
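A minimal sketch of this iteration in MATLAB follows (the full newton function, with input checking and the iterate printing seen in the transcripts below, is specified in Exercise 1; the defaults here are those from that specification):

function [x0, loopCount] = newton(fn, x0, fnDerivative, tolerance, max_loop_count)
    % Minimal Newton iteration sketch; see Exercise 1 for the full spec.
    if nargin < 4, tolerance = sqrt(eps); end      % default tolerance
    if nargin < 5, max_loop_count = 128; end       % default loop limit
    for loopCount = 1:max_loop_count
        step = fn(x0)/fnDerivative(x0);   % fails if f'(x0) is (near) zero
        x0 = x0 - step;
        if abs(step) < tolerance
            break
        end
    end
end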

Example 3

Consider the following polynomial $f$ and its first derivative (in the call below, f and g are handles to $f$ and $f'$, respectively).

$$ \begin{align*} f(x) &= x(x-1)(x-2)(x-3) = x^4 - 6x^3 + 11x^2 - 6x \\ f'(x) &= 4x^3 - 18x^2 + 22x - 6 \end{align*} $$

>> [x0, count] = newton(f, .5, g, 1e-8)

(  0.5000000000000000,  -0.9375000000000000)
(  1.4375000000000000,   0.5527496337890625)
( -0.3368436520376175,   3.5113530694563666)
( -0.1118403412187572,   0.8171829547818639)
( -0.0178165065157076,   0.1104247795413513)
( -0.0005564304363047,   0.0033419894147334)
( -0.0000005668148277,   0.0000034008925002)
( -0.0000000000005890,   0.0000000000035341)

x0 =

  -5.8901e-13


count =

     7

>>

Figure 4: Newton Algorithm Polynomial.

Notice that Newton's algorithm converges in 7 iterations instead of 28! However, we need our function to be continuously differentiable on an interval centered at our actual zero $x_*$, and this interval must include our initial estimate for $x_*$.

In this example we passed $0.5$ and the actual zero is at $x=0$. Hence we need our function $f$ to be continuously differentiable on an interval $[0-\epsilon,\, 0+\epsilon]$ with $0.5\in[0-\epsilon,\, 0+\epsilon]$. That also means $f'(x) = g(x)$ must be continuous on this same interval.

In addition, Newton's method requires that $f'(x)\neq 0$ on this interval. Because the formula for Newton's method divides by $f'(x_k)$, the algorithm will fail for functions that have a maximum or minimum in this interval. Recall from our discussion of floating point numbers that Newton's algorithm can fail even if $f'(x_k)$ merely gets too close to zero: dividing by a small enough number will result in an Inf.
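A defensive check (our sketch of one way to guard, not the course's code) stops before dividing by a derivative that is effectively zero; at $x_0 = \frac{\pi}{2}$ it triggers immediately:

x0 = pi/2;                   % cos(pi/2) is about 6.1e-17 in floating point
fp = cos(x0);                % derivative of sin at x0
if abs(fp) < sqrt(eps)       % guard before dividing by f'(x0)
    error('newton:zeroDerivative', ...
          'derivative too close to zero at x = %g', x0);
end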

Theorem

If $f\in C^2$, if $x_0$ is sufficiently close to a root $x_*$ of $f$, and if $f'(x_*)\neq 0$, then Newton's method converges to $x_*$. In addition,

$$ \begin{align*} 0 &= f(x_k) + (x_*-x_k)f'(x_k) + \frac{(x_*-x_k)^2}{2}f''(c) \\ \\ (x_*-x_k)f'(x_k) &= - f(x_k) - \frac{(x_*-x_k)^2}{2}f''(c) \\ \\ x_* &= x_k - \frac{f(x_k)}{f'(x_k)} - \frac{(x_*-x_k)^2}{2}\frac{f''(c)}{f'(x_k)} \\ \\ x_{k+1}-x_* &= \frac{f''(c)}{2f'(x_k)}\,(x_k-x_*)^2 \end{align*} $$
This means that for some constant $K$

$$ \left|x_{k+1}-x_*\right| \le K\left|x_k-x_*\right|^2 $$

Sufficiently close is taken to mean that $\left|\frac{f''(x)}{2f'(x)}\right| < K$ for some constant $K$ on the interval $\left[x_*-\left|x_*-x_0\right|,\; x_*+\left|x_*-x_0\right|\right]$, and that $\left|x_*-x_0\right| < \frac{1}{K}$, where $x_*$ is the actual zero of $f$ and $x_0$ is the initial approximation. On such an interval

$$ \left|x_{k+1}-x_*\right| = \left|\frac{f''(c)}{2f'(x_k)}\,(x_k-x_*)^2\right| = O\left((x_k-x_*)^2\right) $$ We call this a quadratic convergence rate. This means

If Newton's algorithm converges, it will converge an order of magnitude faster than the bisection algorithm.
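To see the quadratic rate concretely (an illustration we add here), apply the iteration to $f(x) = x^2 - 2$, whose positive zero is $\sqrt{2}$; the number of correct digits roughly doubles at each step:

x = 1;                              % initial estimate for sqrt(2)
for k = 1:5
    x = x - (x^2 - 2)/(2*x);        % Newton step for f(x) = x^2 - 2
    fprintf('k = %d   error = %.3e\n', k, abs(x - sqrt(2)))
end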

Example 4

Even though sine and cosine are differentiable at $\frac{\pi}{2}$, Newton's method fails because the value of $x_0$ is too close to $\frac{\pi}{2}$, and cosine is zero at $\frac{\pi}{2}$.

>> [x0, count] = newton(@sin, 1.57079632679, @cos, 1e-8)

(  1.5707963267900000,   1.0000000000000000)
(-204223803254.4025268554687500,   0.2189793465990881)
(-204223803254.6269531250000000,  -0.0036579552340805)
(-204223803254.6232910156250000,   0.0000041459832422)

...   % 1019 more times

(-204223803254.6232910156250000,   0.0000041459832422)
(-204223803254.6232910156250000,   0.0000041459832422)
(-204223803254.6232910156250000,   0.0000041459832422)
Warning: Warning: newton FindZero exceed Maximum Loop Count(1025)
> In newton>FindZero (line 135)
  In newton (line 62) 

x0 =

    -2.042238032546233e+11


count =

        1025
>>

Example 5

If we move our initial approximation a little farther away from $\frac{\pi}{2}$, then

>> [x0, count] = newton(@sin, 1.570796, @cos, 1e-8)

(  1.5707960000000001,   0.9999999999999466)
(-3060021.7361568440683186,   0.5105441470260828)
(-3060021.1423983611166477,  -0.0579085590973118)
(-3060021.2004042603075504,   0.0000649260504789)
(-3060021.2003393340855837,  -0.0000000001714423)

x0 =

    -3.060021200339334e+06


count =

     4

>> sin(x0)

ans =

    -1.714422740192194e-10

>>

Newton's method converges in 4 iterations through its loop only because sine is periodic; the iterates land on a zero of sine far from our initial estimate. If we use $x_0 = 1.5$ for our initial estimate, we get the behavior shown in Figure 5.

Figure 5: Newton Algorithm Sine.

Example 6

Only when Newton's Method is given an initial estimate close to our result does it perform well.

>> [x0, count] = newton(@sin, pi/4, @cos, 1e-8)

(  0.7853981633974483,   0.7071067811865475)
( -0.2146018366025516,  -0.2129584151592961)
(  0.0033562618583103,   0.0033562555572155)
( -0.0000000126022536,  -0.0000000126022536)
(  0.0000000000000000,   0.0000000000000000)

x0 =

     1.654361225106055e-24


count =

     4

>>

Figure 6: Newton Algorithm Sine.

3.3.6 Avoiding Derivatives

Some derivatives are computationally challenging to calculate symbolically or difficult to compute numerically. As we saw in the examples in the preceding section, sometimes the value of the derivative can be too close to zero for Newton's algorithm to succeed. We can, however, approximate the tangent line with a secant line instead. The algorithm is very similar to Newton's method, with the secant line replacing the tangent line in our loop.

The advantage here is that we avoid requiring our function to be continuously differentiable, and we never need a formula for, or an evaluation of, the first derivative.

The disadvantage is that we will need two points in the domain to compute the secant line.

Figure 7: Secant Algorithm.
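A minimal sketch of the secant update follows (the full secant function, with input checking, is specified in Exercise 2 below; the defaults here follow that specification):

function [x0, loopCount] = secant(fn, a, b, tolerance, max_loop_count)
    % Minimal secant iteration sketch; see Exercise 2 for the full spec.
    if nargin < 4, tolerance = sqrt(eps); end      % default tolerance
    if nargin < 5, max_loop_count = 128; end       % default loop limit
    for loopCount = 1:max_loop_count
        slope = (fn(b) - fn(a))/(b - a);  % secant slope; fails if zero
        c = b - fn(b)/slope;              % zero of the secant line
        a = b;                            % keep the two newest points
        b = c;
        if abs(fn(c)) < tolerance
            break
        end
    end
    x0 = c;
end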

Example 7

>> f = @(x) polyval([1 -6 11 -6 0], x)

f =

  function_handle with value:

    @(x)polyval([1,-6,11,-6,0],x)

>> [c, count] = secant(f, pi/4, pi/2, 1e-8)

(  -0.4533701315622922,   0.5499948216266440 )
(   0.5499948216266440,   0.2557478847883077 )
(   0.2557478847883077,  -0.4939393401566612 )
(  -0.4939393401566612,   0.0250961694670592 )
(   0.0250961694670592,   0.0014182340671302 )
(   0.0014182340671302,  -0.0000092600601211 )
(  -0.0000092600601211,   0.0000000032901806 )
(   0.0000000032901806,   0.0000000000000080 )

c =

       1       


count =

       7       
>>

Secant Algorithm Polynomial

Figure 8

Example 8

However, the secant algorithm has limitations similar to Newton's algorithm.

>> f = @(x) polyval([1 -6 11 -6 0], x)

f =

  function_handle with value:

    @(x)polyval([1,-6,11,-6,0],x)

>> [c, count] = secant(f, 1.25, 1.75, 1e-8)

(   0.4101562500000000,   0.4101562500000000 )
(   0.4101562500000000,                  Inf )
(                  Inf,                  NaN )
(                  NaN,                  NaN )

c =

     NaN


count =

       3       

>>

Secant Algorithm Fails

Figure 9

If the secant line has zero slope, the algorithm will fail just like Newton's algorithm.

Example 9

Unlike the bisection algorithm, one does not need to use initial estimates on opposite sides of the zero.

>> [c, count] = secant(f, 1.25, 1.5, 1e-8)

(   0.4101562500000000,   0.5625000000000000 )
(   0.5625000000000000,  -0.8416525156682191 )
(  -0.8416525156682191,   0.2393501404869194 )
(   0.2393501404869194,   0.0153596242428046 )
(   0.0153596242428046,  -0.0013818505202609 )
(  -0.0013818505202609,   0.0000054189644168 )
(   0.0000054189644168,   0.0000000018681909 )
(   0.0000000018681909,  -0.0000000000000036 )

c =

       1       


count =

       7       

>>

Secant Algorithm Succeeds

Figure 10

3.3.7 Exercises

Exercise 1

  1. Create the flowchart for Newton's method.

  2. Create an M-file newton.m with function
function [x0, loopCount] = newton(fn, x0, fnDerivative, tolerance, max_loop_count)

with at least 3 inputs and up to 5 inputs, and two outputs

> % Inputs:
> % fn               a handle to the target function for which Newton's algorithm 
> %                  will attempt to find and return the zero
> % x0               an initial approximation to the zero desired
> % fnDerivative     a handle to the derivative of the target function
> % tolerance        (optional) an acceptable error value for terminating the 
> %                  algorithm. The default is the square root of machine epsilon
> % max_loop_count   (optional) the maximum number of times to execute the
> %                  algorithm. The default value is 128
> %
> % Outputs:
> % x0               the value of the domain variable closest to the zero of the 
> %                  target function. abs(f(x0)) should be small unless the maximum 
> %                  loop count is exceeded or Newton's algorithm fails
> % loopCount        the number of iterations of Newton's algorithm before 
> %                  terminating the algorithm
  3. Remember to include your function trailer and header. Make sure you use comments, indentation, and whitespace.

  4. Check your first three inputs for valid values. The MATLAB builtin function isa will be helpful.

  5. Check your last two arguments and set their values to the specified defaults if necessary.

  6. Implement Newton's algorithm in a separate local function from the named procedure newton. Local functions that are not named procedures are also called sub-functions.

Exercise 2

  1. Create the flowchart for the Secant method.

  2. Create an M-file secant.m with function
function [x0, loopCount] = secant(fn, a, b, tolerance, max_loop_count)

with at least 3 inputs and up to 5 inputs, and two outputs

> % Inputs:
> % fn               a handle to the target function for which the secant algorithm 
> %                  will attempt to find and return the zero
> % a                first of two approximations to the zero of the target function 
> % b                second of two approximations to the zero of the target function
> % tolerance        (optional) an acceptable error value for terminating the 
> %                  algorithm. The default is the square root of machine epsilon
> % max_loop_count   (optional) the maximum number of times to execute the
> %                  algorithm. The default value is 128
> %
> % Outputs:
> % x0               the value of the domain variable closest to the zero of the 
> %                  target function. abs(f(x0)) should be small unless the maximum 
> %                  loop count is exceeded or the secant algorithm fails
> % loopCount        the number of iterations of the secant algorithm before 
> %                  terminating the algorithm
  3. Remember to include your function trailer and header. Make sure you use comments, indentation, and whitespace.

  4. Check your first three inputs for valid values. The MATLAB builtin function isa will be helpful.

  5. Check your last two arguments and set their values to the specified defaults if necessary.

  6. Implement the secant algorithm in a sub-function.



Your use of this self-initiated mediated course material is subject to our Creative Commons License.


Creative Commons Attribution-NonCommercial-ShareAlike 4.0

Attribution
You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Noncommercial
You may not use the material for commercial purposes.

Share Alike
You are free to share, copy and redistribute the material in any medium or format. If you adapt, remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.