Numerically solving a one-dimensional function (finding its zeros)

Solving a function, also known as root finding, is where we are given a function f(x) and we wish to find the value(s) x such that f(x) = 0.

For textbook problems we would ideally solve a function analytically. For example, a polynomial equation such as f(x) = 3x^3 - 2x^2 - x + 1/2 has three roots and can be solved exactly, more or less easily, without numerical methods. However, there are many instances where analytical solutions do not exist, but we are still interested in finding numerical solutions to an equation. Here are three examples:

  1. What is the unique value of x for which x = \cos(x)? (Obviously we can solve any equation f(x) = g(x) by rewriting it as f(x) - g(x) = 0.)
  2. In statistics, we cannot find the maximum-likelihood parameter estimates for a gamma distribution with an analytical method. The maximum-likelihood estimates have desirable statistical properties compared to the method-of-moments estimates (which can be solved analytically). However, we can solve the relevant equation to get the maximum-likelihood estimates using numerical methods.
  3. The orbit of the logistic map f(x) = \lambda x(1-x) for \lambda = 4 is chaotic. An orbit is just the sequence of values x_0, f(x_0), f(f(x_0)) = f^2(x_0), \ldots. Hidden amongst the chaotic behaviour of this function are periodic points of the function (see Devaney, 1990). These periodic points are repelling, which means you cannot find them just by looking at the orbit. Solving the equation f^3(x) - x = 0 will give you numerical approximations to the fixed points and the period-3 points. Try it (a sketch follows this list)! (Of course, since these periodic points are repelling, the orbit starting from one of them will eventually decay into chaos again.)
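
Here is a small sketch of example 3, assuming NumPy and SciPy are available (scipy.optimize.brentq implements Brent’s method, which is discussed towards the end of this post): we scan [0, 1] for sign changes of g(x) = f^3(x) - x and refine each bracketed root.

import numpy as np
from scipy.optimize import brentq

def f(x):
    return 4.0 * x * (1.0 - x)   # logistic map with lambda = 4

def g(x):
    return f(f(f(x))) - x        # zero wherever x has period dividing 3

xs = np.linspace(0.0, 1.0, 2001)
roots = [brentq(g, lo, hi)
         for lo, hi in zip(xs[:-1], xs[1:])
         if g(lo) * g(hi) < 0]   # strict sign change brackets a root
print(roots)  # fixed points and period-3 points of f; a root that lands
              # exactly on a grid point (x = 0 does) is skipped by the
              # strict sign-change test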

Let’s consider the most obvious way to solve a function f(x) numerically, which is known as the bisection method. Suppose we start with two numbers a and b that “bracket” the zero of the function, e.g. f(a) < 0 and f(b) > 0 with a < b. (We always suppose that f is continuous, or else there may be no value x between a and b such that f(x) = 0.) The bisection method is basically the way to play the “I’m thinking of a number between 1 and 100” game. After an initial guess at the number, we receive the information that the number is either higher or lower than our guess. The next guess is based on this information, and the best strategy is to halve (bisect) the remaining interval, e.g. “50?” “Higher.” “75?” “Lower.” “63?” “Higher”, and so on. Halving the interval gives the best average-case and worst-case performance for the method. So:

def bisect(f, a, b):
    TOL = 1e-6  # Tolerance of the method: stop when the width of the
                # bracketing interval is less than TOL.
    fa = f(a)
    fb = f(b)
    if fa * fb > 0:
        raise ValueError('Need a, b to bracket the zero of f')
    diff = b - a
    while abs(diff) > TOL:
        mid = a + diff/2
        fmid = f(mid)
        if (fa < 0 and fmid < 0) or (fa > 0 and fmid > 0):
            # f(mid) has the same sign as f(a): the zero lies in [mid, b].
            a = mid
            fa = fmid
        else:
            # Otherwise the zero lies in [a, mid].
            b = mid
            fb = fmid
        diff = b - a
    return a + diff/2
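
As a quick check, we can apply bisect to example 1 above: rewriting x = cos(x) as the zero of g(x) = x - cos(x) gives a function that is negative at 0 and positive at 1, so (0, 1) is a valid bracket.

import math

root = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
print(root)  # approximately 0.739085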

Just like the “guess the number I’m thinking of” game, the only information that we use to refine our estimate using the bisection method is whether the latest guess is higher or lower than zero. However, a little thought suggests that in evaluating the function f(x) we actually have extra information, namely the values of f(a) and f(b). If f(a) is closer to zero than f(b), shouldn’t we use this information to make our next guess closer to a? The algorithm for this is similar to the bisection method, but the next point is taken as the zero of the linear function passing through (a, f(a)) and (b, f(b)). This method is known as the secant method, and in general will provide faster convergence. (The secant in this instance can be thought of as a linear approximation to the function through two points.)
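
Here is a minimal sketch of the secant method; the stopping rule and the iteration cap are my own choices, mirroring the tolerance used in bisect above.

def secant(f, x0, x1, TOL=1e-6, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        # Next estimate: the zero of the line through (x0, f0) and (x1, f1).
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < TOL:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError('Secant method did not converge')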

A very similar method is called the false position or “regula falsi” method. Here, we start with a and b bracketing the zero (the secant method doesn’t require this – just two initial points). The next point is estimated in the same way as the secant method, but we choose the points to keep so that the zero is always bracketed (similar to the bisection method above). This ensures convergence. Can you think of an example where the false position method will converge, but the secant method will not?
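
A minimal sketch of false position, using the same secant-style update as above but keeping the zero bracketed at every step (again, the stopping rule is my own choice):

def false_position(f, a, b, TOL=1e-6, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError('Need a, b to bracket the zero of f')
    x_old = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # secant-style estimate
        fx = f(x)
        if fx == 0 or abs(x - x_old) < TOL:
            return x
        if fa * fx < 0:      # the zero lies in [a, x]: move b
            b, fb = x, fx
        else:                # the zero lies in [x, b]: move a
            a, fa = x, fx
        x_old = x
    raise RuntimeError('False position did not converge')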

What other information can we use to get a better estimate of the answer? Suppose that we are easily able to compute the derivative of the function we are interested in. This means we can replace the secant with the tangent at the point, i.e. starting with x_1 as the initial point:

x_{n+1} = x_n - \frac{f(x_n)}{f^\prime(x_n)}

This is called Newton’s method or, more precisely, Newton-Raphson iteration. Newton’s method is very fast (specifically, it converges quadratically, meaning that at each step the number of correct digits roughly doubles!) and has many applications. However, it is not as robust as other methods (especially the bisection method), meaning that in some cases Newton’s method does not converge. Another method, Halley’s method (named, like the comet, after Edmond Halley), is similar but uses the second derivative of the function as well.
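
A minimal sketch of Newton-Raphson iteration, assuming the caller supplies the derivative fprime, tried again on x = cos(x):

import math

def newton(f, fprime, x, TOL=1e-6, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < TOL:
            return x
    raise RuntimeError("Newton's method did not converge")

root = newton(lambda x: x - math.cos(x),   # f(x) = x - cos(x)
              lambda x: 1 + math.sin(x),   # f'(x) = 1 + sin(x)
              1.0)                         # starting point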

In general, the more information that is used, the quicker the method; however, with extra speed comes a loss of robustness, i.e. the method will not always converge. Provided that the function is continuous and that we have an initial bracket of the zero we’re interested in, the bisection method will always work, albeit slowly. In contrast, Newton’s method is blazingly fast, but not always reliable.

The best overall method for finding a root of a function is called Brent’s method. This combines bisection with a faster interpolation method, choosing the next iterate according to whether it is safe to take the faster step. Specifically, Brent’s method uses inverse quadratic interpolation rather than the secant (linear interpolation). Rather than describe Brent’s method in detail, I’ll describe inverse quadratic interpolation. Given three points (a, f(a)), (b, f(b)) and (c, f(c)), there is a unique quadratic that interpolates them. Working out the coefficients of the quadratic directly is messy, but there is a striking formula for the interpolating quadratic due to Lagrange:

y = \frac{(x-b)(x-c)}{(a-b)(a-c)}f(a) + \frac{(x-a)(x-c)}{(b-a)(b-c)}f(b) + \frac{(x-a)(x-b)}{(c-a)(c-b)}f(c)

Here we need the points a, b and c to be distinct. Inverse quadratic interpolation is precisely that: we find the unique inverse quadratic (i.e. x = py^2 + qy + r) that interpolates our points:

x = \frac{(y-f(b))(y-f(c))}{(f(a)-f(b))(f(a)-f(c))}a + \frac{(y-f(a))(y-f(c))}{(f(b)-f(a))(f(b)-f(c))}b + \frac{(y-f(a))(y-f(b))}{(f(c)-f(a))(f(c)-f(b))}c

Here, of course, the method breaks down when any two of the function values f(a), f(b) and f(c) are equal, and this is a major drawback of inverse quadratic interpolation. Brent’s method, however, checks for this before attempting the interpolation.

The next point in the iteration of inverse quadratic interpolation is the point at which the inverse quadratic crosses the x-axis, which we find by setting y = 0 in the above equation. Incidentally, there is no reason why we cannot interpolate our three points with a quadratic in x and use that to find our next point. This is known as Muller’s method. The main reason inverse quadratic interpolation is used rather than quadratic interpolation is that the next point can be read off directly from the formula above, whereas solving for x when y = 0 in the quadratic in x requires the quadratic formula, and in particular a square root.
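
Setting y = 0 in the formula above gives the next iterate directly; here is a small sketch of a single inverse quadratic interpolation step (iqi_step is my own name, not a library routine):

def iqi_step(f, a, b, c):
    fa, fb, fc = f(a), f(b), f(c)
    # Requires fa, fb and fc to be pairwise distinct, as noted above.
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))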

| Method | Derivative required | Bracketing required | Notes | Convergence | Robust |
|---|---|---|---|---|---|
| Bisection | No | Yes | Slow but robust | Linear | Yes |
| Regula falsi | No | Yes | Keeps the zero bracketed | Linear | Yes |
| Secant | No | No | Faster than bisection and regula falsi, but not robust | Superlinear | No |
| Newton’s | Yes | No | Fast | Quadratic | No |
| Muller’s (quadratic interpolation) | No | No | Not generally used by itself; requires a square root calculation | Superlinear | No |
| Inverse quadratic interpolation | No | No | Not generally used by itself | Superlinear | No |
| Brent’s | No | Yes | Best all-purpose routine | Superlinear | Yes |

So there you have a brief tour of root-finding algorithms. One final remark is that many of the methods require bracketing of the root before commencement. This is usually a simple operation (start with an initial interval, and expand and/or contract the interval until the function is positive at one endpoint and negative at the other). However, there are some subtleties involved in bracketing the root of an arbitrary, unknown function (for example, a double root, where the function touches zero without changing sign, will never be bracketed). If you can, it may be worthwhile plotting the function beforehand to get an idea of what it looks like.
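
For example, a simple expand-only bracketing routine along these lines might look like the following sketch (the expansion factor of 1.6 and the function name bracket are arbitrary choices of mine):

def bracket(f, a, b, factor=1.6, max_iter=50):
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        if fa * fb < 0:
            return a, b              # the zero is bracketed
        if abs(fa) < abs(fb):        # a looks closer to the zero: push a outwards
            a += factor * (a - b)
            fa = f(a)
        else:                        # otherwise push b outwards
            b += factor * (b - a)
            fb = f(b)
    raise RuntimeError('Failed to bracket a zero of f')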

Further reading

Press, Teukolsky, Vetterling, and Flannery. Numerical Recipes in C. A classic. I have the second edition, but the latest version is the third edition.

Burden and Faires. Numerical Analysis.

Kincaid and Cheney. Numerical Analysis. This book and the one above are good, classic, text-books in many aspects of numerical algorithms.

Devaney, R.L. (1990) Chaos, Fractals, and Dynamics: Computer Experiments in Mathematics. Addison-Wesley, Menlo Park, California. This fascinating and approachable book is not about numerical solutions, but rather provides a gentle introduction to the dynamics of the logistic map (example application #3 above).
