Comparing Convergence Of False Position And Bisection Methods Engineering Essay


Explain, with an example, why the rate of convergence of the false position method is faster than that of the bisection method.

Introduction

False position method

In numerical analysis, the false position method or regula falsi method is a root-finding algorithm that combines features from the bisection method and the secant method.

The method:

Figure: The first two iterations of the false position method. The red curve shows the function f and the blue lines are the secants.

Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, which implies by the intermediate value theorem that the function f has a root in the interval [a0, b0], assuming continuity of the function f. The method proceeds by producing a sequence of shrinking intervals [ak, bk] that all contain a root of f.

At iteration number k, the number

ck = (ak f(bk) − bk f(ak)) / (f(bk) − f(ak))

is computed. By construction, ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)). If f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk; otherwise we set ak+1 = ak and bk+1 = ck. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last two computed points, while the false position method retains two points which certainly bracket a root. On the other hand, the only difference between the false position method and the bisection method is that the latter uses ck = (ak + bk) / 2.
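To make the update rule concrete, here is a minimal sketch of plain false position in C; the test function f(x) = x^2 − 3, the bracket [1, 2], the tolerance and the helper names are illustrative assumptions rather than part of the method description above.

#include <stdio.h>
#include <math.h>

/* Illustrative example function: f(x) = x^2 - 3, with a root at sqrt(3). */
static double f(double x) { return x*x - 3.0; }

/* Plain false position on a bracket [a, b] with f(a)*f(b) < 0. */
static double regula_falsi(double a, double b, double tol, int maxit)
{
    double fa = f(a), fb = f(b), c = a, fc;
    for (int k = 0; k < maxit; k++) {
        /* root of the secant line through (a, f(a)) and (b, f(b)) */
        c  = (a*fb - b*fa) / (fb - fa);
        fc = f(c);
        if (fabs(fc) < tol)
            break;
        if (fa*fc > 0) { a = c; fa = fc; }   /* same sign: replace a */
        else           { b = c; fb = fc; }   /* otherwise:  replace b */
    }
    return c;
}

int main(void)
{
    /* approximates sqrt(3) = 1.732... */
    printf("%.6f\n", regula_falsi(1.0, 2.0, 1e-6, 100));
    return 0;
}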

Bisection method

In mathematics, the bisection method is a root-finding algorithm which repeatedly bisects an interval then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow.

The method is applicable when we wish to solve the equation for the scalar variable x, where f is a continuous function.

The bisection method requires two initial points a and b such that f(a) and f(b) have opposite signs. This is called a bracket of a root, for by the intermediate value theorem the continuous function f must have at least one root in the interval (a, b). The method then divides the interval in two by computing its midpoint c = (a + b)/2. Unless c is itself a root (which is very unlikely, but possible), there are now two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. We select the subinterval that is a bracket and apply the same bisection step to it. In this way the interval that might contain a zero of f is reduced in width by 50% at each step, and we continue until the bracket is sufficiently small for our purposes. This is similar to binary search in computer science, where the range of possible solutions is halved at each iteration.


Explicitly, if f(a) f(c) < 0, then the method sets c as the new value for b, and if f(b) f(c) < 0, then the method sets c as the new a. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval. A practical implementation of this method must guard against the uncommon occurrence that the midpoint is indeed a solution.
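The step just described can be written as a minimal sketch in C (the test function, bracket, tolerance and names below are illustrative assumptions; a fuller interactive bisection program is listed later in the essay):

#include <stdio.h>

/* Illustrative example function: f(x) = x^2 - 3, with a root at sqrt(3). */
static double f(double x) { return x*x - 3.0; }

/* Bisection on a bracket [a, b] with f(a)*f(b) < 0. */
static double bisect(double a, double b, double tol, int maxit)
{
    double fa = f(a), c = a;
    for (int k = 0; k < maxit; k++) {
        c = (a + b) / 2.0;                  /* midpoint of the current bracket */
        double fc = f(c);
        if (fc == 0.0 || (b - a) / 2.0 < tol)
            break;                          /* c is a root, or bracket is small enough */
        if (fa*fc < 0) { b = c; }           /* opposite signs: root lies in [a, c] */
        else           { a = c; fa = fc; }  /* otherwise:      root lies in [c, b] */
    }
    return c;
}

int main(void)
{
    printf("%.6f\n", bisect(1.0, 2.0, 1e-6, 100));   /* approximates sqrt(3) */
    return 0;
}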

Advantages and drawbacks of the bisection method

Advantages of Bisection Method

The bisection method is always convergent. Since the method brackets the root, the method is guaranteed to converge.

As iterations are conducted, the width of the interval is halved, so the bound on the error in the approximate solution is guaranteed to shrink by a factor of two at every step; a small helper illustrating this bound is sketched below.
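Since the bracket width after n steps is exactly (b − a)/2^n, the number of steps needed to guarantee an absolute error below a tolerance ε can be computed in advance as the smallest n with n ≥ log2((b − a)/ε). The helper below is a small illustrative sketch of this bound (the function name is an assumption, not code from the essay):

#include <stdio.h>
#include <math.h>

/* Smallest n with (b - a) / 2^n <= eps, i.e. n >= log2((b - a) / eps). */
static int bisection_steps(double a, double b, double eps)
{
    return (int)ceil(log2((b - a) / eps));
}

int main(void)
{
    /* Bracketing a root of x^2 - 3 in [1, 2] to within 0.01 needs 7 halvings. */
    printf("%d\n", bisection_steps(1.0, 2.0, 0.01));
    return 0;
}

For the interval [1, 2] and ε = 0.01 this returns 7, which matches the seven iterations the bisection method needs in Example 1 further below.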

Drawbacks of Bisection Method

The convergence of the bisection method is slow, as it is based simply on halving the interval.

Even if one of the initial guesses is close to the root, the method cannot exploit this, and it may still take a large number of iterations to reach the root.

If a function just touches the x-axis, such as f(x) = x^2, which has a single root at x = 0 (Figure 3.8), the method is unable to find a lower guess a and an upper guess b such that

f(a) f(b) < 0,

so the root can never be bracketed.

For functions where there is a singularity at which the function reverses sign, the bisection method may converge on the singularity rather than on a root (Figure 3.9).

An example is f(x) = 1/x. Initial guesses taken on either side of the singularity at x = 0, for instance a = −2 and b = 3, satisfy f(a) f(b) < 0, yet the function has no root. Because the function is not continuous on the interval, the intermediate value theorem does not apply and the existence of a root is not guaranteed.

Figure 3.8. A function such as f(x) = x^2 has a single root, at x = 0, that cannot be bracketed.

Figure 3.9. A function such as f(x) = 1/x has no root but changes sign.

Explanation

Source code for the false position method

The following C code was written for clarity rather than efficiency. It is designed to solve the same problem as the corresponding Newton's method and secant method examples: find the positive number x where cos(x) = x^3. This problem is transformed into the root-finding problem f(x) = cos(x) − x^3 = 0.

#include <stdio.h>
#include <math.h>

/* f(x) = cos(x) - x^3; we seek the positive root of cos(x) = x^3 */
double f(double x)
{
    return cos(x) - x*x*x;
}

/* False position with the Illinois modification: s and t bracket the root,
   e is the relative tolerance and m the maximum number of iterations. */
double FalsiMethod(double s, double t, double e, int m)
{
    int n, side = 0;
    double r, fr, fs = f(s), ft = f(t);

    for (n = 1; n <= m; n++)
    {
        /* intersection of the secant through (s, fs) and (t, ft) with the x-axis */
        r = (fs*t - ft*s) / (fs - ft);
        if (fabs(t - s) < e*fabs(t + s))
            break;
        fr = f(r);

        if (fr * ft > 0)
        {
            /* fr and ft have the same sign: replace t */
            t = r; ft = fr;
            if (side == -1) fs /= 2;   /* Illinois step: same side retained twice */
            side = -1;
        }
        else if (fs * fr > 0)
        {
            /* fr and fs have the same sign: replace s */
            s = r; fs = fr;
            if (side == +1) ft /= 2;   /* Illinois step: same side retained twice */
            side = +1;
        }
        else
            break;   /* fr is (numerically) zero: r is the root */
    }
    return r;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614
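On a typical system with GCC the program above can be built and run with a command along the lines of gcc falsi.c -lm -o falsi && ./falsi (the file name is an assumption). Note that halving fs or ft when the same endpoint is kept twice in a row is the Illinois modification of false position; it prevents the iteration from stalling on one side of the bracket.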

Example 1

Consider finding the root of f(x) = x^2 − 3. Let εstep = 0.01, εabs = 0.01 and start with the interval [1, 2].

Table 1. False-position method applied to f(x) = x^2 − 3.

a        b     f(a)      f(b)   c        f(c)      Update   Step Size
1.0      2.0   -2.00     1.00   1.6667   -0.2221   a = c    0.6667
1.6667   2.0   -0.2221   1.00   1.7273   -0.0164   a = c    0.0606
1.7273   2.0   -0.0164   1.00   1.7317   -0.0012   a = c    0.0044

Thus, with the third iteration, we note that the last step, 1.7273 → 1.7317, is less than 0.01 and |f(1.7317)| < 0.01, and therefore we take 1.7317 as our approximation of the root.

Note that after only three iterations of the false-position method we have an acceptable answer (1.7317, where f(1.7317) = −0.0012), whereas the bisection method takes seven iterations to find a notably less accurate acceptable answer (1.7344, where f(1.7344) = 0.0081).
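The iteration counts in Table 1 can be reproduced with a short sketch that applies the false-position update together with the stopping rule used in this example, namely stopping once both the step size and |f(c)| fall below the tolerances; the variable names and output format are assumptions for illustration.

#include <stdio.h>
#include <math.h>

static double f(double x) { return x*x - 3.0; }   /* f(x) = x^2 - 3 */

int main(void)
{
    double a = 1.0, b = 2.0, fa = f(a), fb = f(b);
    double eps_step = 0.01, eps_abs = 0.01;
    double c_old = a;

    for (int k = 1; k <= 100; k++) {
        double c  = (a*fb - b*fa) / (fb - fa);     /* false-position point */
        double fc = f(c);
        printf("%d: a=%.4f b=%.4f c=%.4f f(c)=%.4f\n", k, a, b, c, fc);
        /* stop once the step size and |f(c)| are both below the tolerances */
        if (fabs(c - c_old) < eps_step && fabs(fc) < eps_abs)
            break;
        if (fa*fc > 0) { a = c; fa = fc; }          /* update a = c */
        else           { b = c; fb = fc; }          /* update b = c */
        c_old = c;
    }
    return 0;
}

Compiled and run, this stops after the third iteration with c = 1.7317, in agreement with the table.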

Example 2

Consider finding the root of f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)) on the interval [3, 4], this time with εstep = 0.001, εabs = 0.001.

Table 2. False-position method applied to f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)).

a     b        f(a)       f(b)          c        f(c)           Update   Step Size
3.0   4.0      0.047127   -0.038372     3.5513   -0.023411      b = c    0.4487
3.0   3.5513   0.047127   -0.023411     3.3683   -0.0079940     b = c    0.1830
3.0   3.3683   0.047127   -0.0079940    3.3149   -0.0021548     b = c    0.0534
3.0   3.3149   0.047127   -0.0021548    3.3010   -0.00052616    b = c    0.0139
3.0   3.3010   0.047127   -0.00052616   3.2978   -0.00014453    b = c    0.0032
3.0   3.2978   0.047127   -0.00014453   3.2969   -0.000036998   b = c    0.0009

Thus, after the sixth iteration, we note that the final step, 3.2978 → 3.2969 has a size less than 0.001 and |f(3.2969)| < 0.001 and therefore we chose b = 3.2969 to be our approximation of the root.

In this case, the solution we found was not as good as the solution found using the bisection method (f(3.2963) = 0.000034799); however, we needed only six iterations instead of eleven.

Source code for the bisection method

The following interactive C program applies the bisection method to f(x) = x^3 − 15 = 0; it keeps prompting for a second guess until the two guesses bracket a root, then bisects until the relative width of the bracket falls below epsilon.

#include <stdio.h>
#include <math.h>

#define epsilon 1e-6

/* Interactive bisection for f(x) = x^3 - 15 = 0 */
int main(void)
{
    double g1, g2, g, v, v1, v2, dx;
    int found, converged, i;

    found = 0;
    converged = 0;

    printf("enter the first guess\n");
    scanf("%lf", &g1);
    v1 = g1*g1*g1 - 15;
    printf("value 1 is %lf\n", v1);

    /* keep asking for a second guess until the two guesses bracket a root */
    while (found == 0)
    {
        printf("enter the second guess\n");
        scanf("%lf", &g2);
        v2 = g2*g2*g2 - 15;
        printf("value 2 is %lf\n", v2);
        if (v1*v2 > 0)
            found = 0;      /* same sign: not a bracket, ask again */
        else
            found = 1;
    }
    printf("right guess\n");

    i = 1;
    while (converged == 0)
    {
        printf("\niteration = %d\n", i);
        g = (g1 + g2) / 2;                  /* midpoint of the current bracket */
        printf("new guess is %lf\n", g);
        v = g*g*g - 15;
        printf("new value is %lf\n", v);
        if (v*v1 > 0)
        {
            g1 = g;                         /* root lies in [g, g2] */
            printf("the next guess is %lf\n", g);
            dx = (g1 - g2) / g1;
        }
        else
        {
            g2 = g;                         /* root lies in [g1, g] */
            printf("the next guess is %lf\n", g);
            dx = (g1 - g2) / g1;
        }
        if (fabs(dx) < epsilon)
            converged = 1;                  /* relative bracket width small enough */
        i = i + 1;
    }
    printf("\nthe calculated value is %lf\n", v);
    return 0;
}
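As with the false-position program, this can be compiled with something like gcc bisection.c -lm -o bisection (the file name is an assumption). For the hard-coded function f(x) = x^3 − 15, guesses such as 2 and 3 form a valid bracket, and the program converges to the root near 2.4662.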

Example 1

Consider finding the root of f(x) = x^2 − 3. Let εstep = 0.01, εabs = 0.01 and start with the interval [1, 2].

Table 1. Bisection method applied to f(x) = x^2 − 3.

a        b        f(a)      f(b)     c = (a + b)/2   f(c)      Update   new b − a
1.0      2.0      -2.0      1.0      1.5             -0.75     a = c    0.5
1.5      2.0      -0.75     1.0      1.75            0.0625    b = c    0.25
1.5      1.75     -0.75     0.0625   1.625           -0.3594   a = c    0.125
1.625    1.75     -0.3594   0.0625   1.6875          -0.1523   a = c    0.0625
1.6875   1.75     -0.1523   0.0625   1.7188          -0.0457   a = c    0.0313
1.7188   1.75     -0.0457   0.0625   1.7344          0.0081    b = c    0.0156
1.7188   1.7344   -0.0457   0.0081   1.7266          -0.0189   a = c    0.0078

Thus, with the seventh iteration, we note that the final interval, [1.7266, 1.7344], has a width less than 0.01 and |f(1.7344)| < 0.01, and therefore we chose b = 1.7344 to be our approximation of the root.

Example 2

Consider finding the root of f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)) on the interval [3, 4], this time with εstep = 0.001, εabs = 0.001.

Table 2. Bisection method applied to f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)).

a        b        f(a)          f(b)           c = (a + b)/2   f(c)           Update   new b − a
3.0      4.0      0.047127      -0.038372      3.5             -0.019757      b = c    0.5
3.0      3.5      0.047127      -0.019757      3.25            0.0058479      a = c    0.25
3.25     3.5      0.0058479     -0.019757      3.375           -0.0086808     b = c    0.125
3.25     3.375    0.0058479     -0.0086808     3.3125          -0.0018773     b = c    0.0625
3.25     3.3125   0.0058479     -0.0018773     3.2812          0.0018739      a = c    0.0313
3.2812   3.3125   0.0018739     -0.0018773     3.2968          -0.000024791   b = c    0.0156
3.2812   3.2968   0.0018739     -0.000024791   3.289           0.00091736     a = c    0.0078
3.289    3.2968   0.00091736    -0.000024791   3.2929          0.00044352     a = c    0.0039
3.2929   3.2968   0.00044352    -0.000024791   3.2948          0.00021466     a = c    0.002
3.2948   3.2968   0.00021466    -0.000024791   3.2958          0.000094077    a = c    0.001
3.2958   3.2968   0.000094077   -0.000024791   3.2963          0.000034799    a = c    0.0005

Thus, after the 11th iteration, we note that the final interval, [3.2958, 3.2968] has a width less than 0.001 and |f(3.2968)| < 0.001 and therefore we chose b = 3.2968 to be our approximation of the root.

Convergence Rate

Why don't we always use the false position method?

There are functions for which it may converge very, very slowly, because one endpoint of the bracket is retained for many iterations in a row.

Example: a classic case is f(x) = x^10 − 1 on the interval [0, 1.3]. The right endpoint stays fixed at 1.3 while the left endpoint creeps toward the root at x = 1, so the bracket width barely shrinks and plain false position needs far more iterations than bisection to reach the same accuracy.

What other methods can we use? Modified false position schemes such as the Illinois algorithm (used in the C code above), the secant method, and Newton's method are common alternatives when faster convergence is needed.

Comparison of rate of convergence for bisection and false-position method

For the examples worked above, false position reached the stopping tolerance in fewer iterations than bisection: 3 iterations versus 7 for f(x) = x^2 − 3 on [1, 2], and 6 iterations versus 11 for f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)) on [3, 4]. Because false position uses the function values themselves, not just their signs, to place each new estimate, it generally converges faster than the bisection method, although, as the example above shows, this is not guaranteed for every function.
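As a rough way to reproduce this comparison, the following sketch runs both bracketing methods on the same function and counts the iterations each needs before its bracket (bisection) or its step (false position) falls below a tolerance. The test function, tolerance and helper names are illustrative assumptions rather than code from the essay.

#include <stdio.h>
#include <math.h>

static double f(double x) { return x*x - 3.0; }   /* example: root at sqrt(3) */

/* Iterations until the bisection bracket is narrower than tol. */
static int bisection_count(double a, double b, double tol)
{
    double fa = f(a);
    int n = 0;
    while (b - a >= tol) {
        double c = (a + b) / 2.0, fc = f(c);
        if (fa*fc < 0) { b = c; } else { a = c; fa = fc; }
        n++;
    }
    return n;
}

/* Iterations until the false-position step is smaller than tol. */
static int falsi_count(double a, double b, double tol)
{
    double fa = f(a), fb = f(b), c_old = a;
    for (int n = 1; n <= 1000; n++) {
        double c = (a*fb - b*fa) / (fb - fa), fc = f(c);
        if (fabs(c - c_old) < tol)
            return n;
        if (fa*fc > 0) { a = c; fa = fc; } else { b = c; fb = fc; }
        c_old = c;
    }
    return -1;   /* did not converge within the iteration cap */
}

int main(void)
{
    printf("bisection:      %d iterations\n", bisection_count(1.0, 2.0, 0.01));
    printf("false position: %d iterations\n", falsi_count(1.0, 2.0, 0.01));
    return 0;
}

For f(x) = x^2 − 3 on [1, 2] with a tolerance of 0.01 this prints 7 iterations for bisection and 3 for false position, matching the tables above, while for a slowly converging case such as f(x) = x^10 − 1 on [0, 1.3] the false-position count is several times larger than the bisection count.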

 
