Solving equations is one of the most important things we do in mathematics, yet we are surprisingly limited in what we can solve analytically. For instance, equations as simple as ${x}^{5}+x+1=0$ or $\mathrm{cos}x=x$ cannot be solved by algebraic methods in terms of familiar functions. Fortunately, there are methods that can give us approximate solutions to equations like these. These methods can usually give an approximation correct to as many decimal places as we like. In Section 1.6 we learned about the Bisection Method. This section focuses on another technique (which generally works faster), called Newton’s Method.
Newton’s Method is built around tangent lines. The main idea is that if $x$ is sufficiently close to a root of $f(x)$, then the tangent line to the graph at $(x,f(x))$ will cross the $x$-axis at a point closer to the root than $x$.
We start Newton’s Method with an initial guess about roughly where the root is. Call this ${x}_{0}$. (See Figure 4.4.1(a).) Draw the tangent line to the graph at $({x}_{0},f({x}_{0}))$ and see where it meets the $x$-axis. Call this point ${x}_{1}$. Then repeat the process — draw the tangent line to the graph at $({x}_{1},f({x}_{1}))$ and see where it meets the $x$-axis. (See Figure 4.4.1(b).) Call this point ${x}_{2}$. Repeat the process again to get ${x}_{3}$, ${x}_{4}$, etc. This sequence of points will often converge rather quickly to a root of $f$.
We can use this geometric process to create an algebraic process. Let’s look at how we found ${x}_{1}$. We started with the tangent line to the graph at $({x}_{0},f({x}_{0}))$. The slope of this tangent line is ${f}^{\prime}({x}_{0})$ and the equation of the line is
$$y={f}^{\prime}({x}_{0})(x-{x}_{0})+f({x}_{0}).$$
This line crosses the $x$-axis when $y=0$, and the $x$-value where it crosses is what we called ${x}_{1}$. So let $y=0$ and replace $x$ with ${x}_{1}$, giving the equation:
$$0={f}^{\prime}({x}_{0})({x}_{1}-{x}_{0})+f({x}_{0}).$$
Now solve for ${x}_{1}$:
$${x}_{1}={x}_{0}-\frac{f({x}_{0})}{{f}^{\prime}({x}_{0})}.$$
Since we repeat the same geometric process to find ${x}_{2}$ from ${x}_{1}$, we have
$${x}_{2}={x}_{1}-\frac{f({x}_{1})}{{f}^{\prime}({x}_{1})}.$$
In general, given an approximation ${x}_{n}$, we can find the next approximation, ${x}_{n+1}$, as follows:
$${x}_{n+1}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})}.$$
We summarize this process as follows.
Let $f$ be a differentiable function on an interval $I$ with a root in $I$. To approximate the value of the root, accurate to $d$ decimal places:
Choose a value ${x}_{0}$ as an initial approximation of the root. (This is often done by looking at a graph of $f$.)
Create successive approximations iteratively; given an approximation ${x}_{n}$, compute the next approximation ${x}_{n+1}$ as
$${x}_{n+1}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})}.$$
Stop the iterations when successive approximations do not differ in the first $d$ places after the decimal point.
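The steps above can be sketched directly in code. The following is a minimal Python version; the function name `newton`, the `max_iter` safeguard, and the stopping test `abs(x_next - x) < 10 ** (-d)` (one reasonable reading of "do not differ in the first $d$ places after the decimal point") are our own choices, not part of the text's algorithm.

```python
def newton(f, fprime, x0, d, max_iter=100):
    """Approximate a root of f, starting from x0, to about d decimal places."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / fprime(x)
        # Stop when successive approximations agree to d decimal places.
        if abs(x_next - x) < 10 ** (-d):
            return x_next
        x = x_next
    raise RuntimeError("Newton's Method did not converge")
```

For instance, `newton(lambda x: x**3 - x**2 - 1, lambda x: 3*x**2 - 2*x, 1.0, 3)` reproduces the computation in the next example.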
Watch the video:
Newton’s Method from https://youtu.be/1uN8cBGVpfs
Let’s practice Newton’s Method with a concrete example.
Approximate the real root of ${x}^{3}-{x}^{2}-1=0$, accurate to the first 3 places after the decimal, using Newton’s Method and an initial approximation of ${x}_{0}=1$.
Solution. To begin, we compute ${f}^{\prime}(x)=3{x}^{2}-2x$. Then we apply the Newton's Method algorithm, outlined in Key Idea 4.4.1.
$$\begin{aligned}
{x}_{1}&=1-\frac{f(1)}{{f}^{\prime}(1)}=1-\frac{{1}^{3}-{1}^{2}-1}{3\cdot {1}^{2}-2\cdot 1}=2,\\
{x}_{2}&=2-\frac{f(2)}{{f}^{\prime}(2)}=2-\frac{{2}^{3}-{2}^{2}-1}{3\cdot {2}^{2}-2\cdot 2}=1.625,\\
{x}_{3}&=1.625-\frac{f(1.625)}{{f}^{\prime}(1.625)}=1.625-\frac{{1.625}^{3}-{1.625}^{2}-1}{3\cdot {1.625}^{2}-2\cdot 1.625}\approx 1.48579,\\
{x}_{4}&=1.48579-\frac{f(1.48579)}{{f}^{\prime}(1.48579)}\approx 1.46596,\\
{x}_{5}&=1.46596-\frac{f(1.46596)}{{f}^{\prime}(1.46596)}\approx 1.46557.
\end{aligned}$$
We performed 5 iterations of Newton's Method to find a root accurate to the first 3 places after the decimal; our final approximation is $1.465$. The exact value of the root, to six decimal places, is $1.465571$; it turns out that our ${x}_{5}$ is accurate to more than just 3 decimal places.
A graph of $f(x)$ is given in Figure 4.4.2. We can see from the graph that our initial approximation of ${x}_{0}=1$ was not particularly accurate; a closer guess would have been ${x}_{0}=1.5$. Our choice was based on ease of initial calculation, and shows that Newton’s Method can be robust enough that we do not have to make a very accurate initial approximation.
We can automate this process on a calculator that has an Ans
key that returns the result of the previous calculation. Start by pressing 1
and then Enter. (We have just entered our initial guess, ${x}_{0}=1$.) Now compute
$$\text{Ans}-\frac{f(\text{Ans})}{{f}^{\prime}(\text{Ans})}$$
by entering the following and then repeatedly pressing the Enter key:
Ans-(Ans^3-Ans^2-1)/(3*Ans^2-2*Ans)
Each time we press the Enter key, we are finding the successive approximations, ${x}_{1}$, ${x}_{2}$, …, and each one is getting closer to the root. In fact, once we get past around ${x}_{7}$ or so, the approximations don’t appear to be changing. They actually are changing, but the change is far enough to the right of the decimal point that it doesn’t show up on the calculator’s display. When this happens, we can be pretty confident that we have found an accurate approximation.
We can use a similar approach in most spreadsheet programs, which intelligently copy formulas. Start by entering 1
in cell A1. Then in cell A2, enter:
A1-(A1^3-A1^2-1)/(3*A1^2-2*A1)
Copy this cell, and paste it into A3. The spreadsheet will automatically change A1 to A2, giving you the next approximation. Continue pasting this into A4, A5, and so on. Each time we paste the formula, we are finding the successive approximations, and each one is getting closer to the root.
Using a calculator or spreadsheet in this manner makes the calculations simple; many iterations can be computed very quickly.
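The same Ans-key loop can also be written in a few lines of Python; this is our own sketch, and the choice of 7 iterations is arbitrary (it is more than enough for this example).

```python
x = 1.0  # initial guess x0, as in the example
for n in range(1, 8):
    # Mirrors Ans - (Ans^3 - Ans^2 - 1)/(3*Ans^2 - 2*Ans)
    x = x - (x**3 - x**2 - 1) / (3 * x**2 - 2 * x)
    print(f"x{n} = {x:.10f}")
```

As on the calculator, the printed values stop changing once the iteration has converged to the display precision.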
Use Newton’s Method to approximate a solution to $\mathrm{cos}x=x$, accurate to 5 places after the decimal.
Solution. Newton's Method provides a way of solving $f(x)=0$; it is not (directly) a method for solving equations like $f(x)=g(x)$. However, this is not a problem: we can rewrite the latter equation as $f(x)-g(x)=0$ and then use Newton's Method.
So we rewrite $\mathrm{cos}x=x$ as $\mathrm{cos}x-x=0$. Written this way, we are finding a root of $f(x)=\mathrm{cos}x-x$. We compute ${f}^{\prime}(x)=-\mathrm{sin}x-1$. Next we need a starting value, ${x}_{0}$. Consider Figure 4.4.3, where $f(x)=\mathrm{cos}x-x$ is graphed. It seems that ${x}_{0}=0.75$ is pretty close to the root, so we will use that as our ${x}_{0}$. (The figure also shows the graphs of $y=\mathrm{cos}x$ and $y=x$, drawn with dashed lines. Note how they intersect at the same $x$ value as when $f(x)=0$.)
We now compute ${x}_{1}$, ${x}_{2}$, etc. The formula for ${x}_{1}$ is
$${x}_{1}=0.75-\frac{\mathrm{cos}(0.75)-0.75}{-\mathrm{sin}(0.75)-1}\approx 0.7391111388.$$
Apply Newton’s Method again to find ${x}_{2}$:
$${x}_{2}=0.7391111388-\frac{\mathrm{cos}(0.7391111388)-0.7391111388}{-\mathrm{sin}(0.7391111388)-1}\approx 0.7390851334.$$
We can continue this way, but it is really best to automate this process. On a calculator with an Ans key, we would start by pressing 0.75, then Enter, inputting our initial approximation. We then enter:
Ans - (cos(Ans)-Ans)/(-sin(Ans)-1).
(In a spreadsheet, we would enter A1-(cos(A1)-A1)/(-sin(A1)-1) in A2.)
Repeatedly pressing the Enter key gives successive approximations. We quickly find:
$$\begin{aligned}
{x}_{3}&=0.7390851332\\
{x}_{4}&=0.7390851332.
\end{aligned}$$
Our approximations ${x}_{2}$ and ${x}_{3}$ did not differ in the first 5 places after the decimal, so we could have stopped there. However, using the calculator in the manner described is easy, so finding ${x}_{4}$ was not hard. It is interesting to see that we found an approximation, accurate to as many decimal places as our calculator displays, in just 4 iterations.
If you know how to program, you can translate the following pseudocode into your favorite language to perform the computation in this problem.
x = .75
while true
    oldx = x
    x = x - (cos(x)-x)/(-sin(x)-1)
    print x
    if abs(x-oldx) < .0000000001
        break
This code calculates ${x}_{1}$, ${x}_{2}$, etc., storing each result in the variable x. The previous approximation is stored in the variable oldx. We continue looping until the difference between two successive approximations, abs(x-oldx), is less than some small tolerance, in this case, .0000000001.
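In Python, for example, the pseudocode translates almost line for line:

```python
from math import cos, sin

x = 0.75  # initial guess x0
while True:
    oldx = x
    # Newton update for f(x) = cos(x) - x, f'(x) = -sin(x) - 1
    x = x - (cos(x) - x) / (-sin(x) - 1)
    print(x)
    if abs(x - oldx) < 1e-10:
        break
```

Running this prints the successive approximations and stops after a handful of iterations, when two consecutive values agree to within the tolerance.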
What should one use for the initial guess, ${x}_{0}$? Generally, the closer to the actual root the initial guess is, the better. However, some initial guesses should be avoided. For instance, consider Example 4.4.1 where we sought the root to $f(x)={x}^{3}-{x}^{2}-1$. Choosing ${x}_{0}=0$ would have been a particularly poor choice. Consider Figure 4.4.4, where $f(x)$ is graphed along with its tangent line at $x=0$. Since ${f}^{\prime}(0)=0$, the tangent line is horizontal and does not intersect the $x$-axis. Graphically, we see that Newton’s Method fails.
We can also see analytically that it fails. Since
$${x}_{1}=0-\frac{f(0)}{{f}^{\prime}(0)}$$
and ${f}^{\prime}(0)=0$, we see that ${x}_{1}$ is not well defined.
This problem can also occur if, for instance, it turns out that ${f}^{\prime}({x}_{5})=0$. Adjusting the initial approximation ${x}_{0}$ by a very small amount will likely fix the problem.
It is also possible for Newton’s Method to not converge while each successive approximation is well defined. Consider $f(x)={x}^{1/3}$, as shown in Figure 4.4.5. It is clear that the root is $x=0$, but let’s approximate this with ${x}_{0}=0.1$. Figure 4.4.5(a) shows graphically the calculation of ${x}_{1}$; notice how it is farther from the root than ${x}_{0}$. Figures 4.4.5(b) and (c) show the calculation of ${x}_{2}$ and ${x}_{3}$, which are even farther away; our successive approximations are getting worse. (It turns out that in this particular example, each successive approximation is twice as far from the true answer as the previous approximation.)
There is no “fix” to this problem; Newton’s Method simply will not work and another method must be used.
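We can watch this divergence numerically. The short Python sketch below is our own illustration; the helper `cbrt` is needed because Python's `**` operator does not take real cube roots of negative numbers. (A little algebra shows the Newton update for $f(x)=x^{1/3}$ simplifies to ${x}_{n+1}=-2{x}_{n}$, which is why each approximation is twice as far from the root.)

```python
from math import copysign

def cbrt(x):
    # Real cube root, valid for negative x as well.
    return copysign(abs(x) ** (1 / 3), x)

x = 0.1  # initial guess; the true root is x = 0
for n in range(1, 6):
    fx = cbrt(x)              # f(x) = x^(1/3)
    fpx = cbrt(x) ** -2 / 3   # f'(x) = (1/3) x^(-2/3)
    x = x - fx / fpx
    print(f"x{n} = {x}")
```

The printed values bounce between positive and negative while doubling in magnitude: roughly $-0.2$, $0.4$, $-0.8$, $1.6$, $-3.2$.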
While Newton’s Method does not always work, it does work “most of the time,” and it is generally very fast. Once the approximations get close to the root, Newton’s Method can as much as double the number of correct decimal places with each successive approximation. A course in Numerical Analysis will introduce the reader to more iterative root finding methods, as well as give greater detail about the strengths and weaknesses of Newton’s Method.
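The doubling of correct decimal places is easy to observe with a function whose root we know exactly. As an illustrative example of our own, take $f(x)={x}^{2}-2$, whose positive root is $\sqrt{2}$:

```python
from math import sqrt

x = 1.0  # initial guess
for n in range(1, 6):
    # Newton update for f(x) = x^2 - 2, f'(x) = 2x
    x = x - (x * x - 2) / (2 * x)
    print(f"x{n} = {x:.15f}, error = {abs(x - sqrt(2)):.1e}")
```

The printed errors shrink from about $10^{-1}$ to $10^{-3}$ to $10^{-6}$ to $10^{-12}$: the number of correct digits roughly doubles at each step.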
We first learned of the derivative in the context of instantaneous rates of change and slopes of tangent lines. We furthered our understanding of the power of the derivative by studying how it relates to the graph of a function (leading to ideas of increasing/decreasing and concavity). This chapter has put the derivative to yet more uses:
Related Rates (furthering our use of the derivative to find instantaneous rates of change)
Optimization (applied extreme values),
Differentials (useful for various approximations and for something called integration), and
Equation solving (Newton's Method).
In the next chapters, we will consider the “reverse” problem to computing the derivative: given a function $f$, can we find a function whose derivative is $f$? Being able to do so opens up an incredible world of mathematics and applications.
T/F: Given a function $f\left(x\right)$, Newton’s Method produces an exact solution to $f\left(x\right)=0$.
T/F: In order to get a solution to $f\left(x\right)=0$ accurate to $d$ places after the decimal, at least $d+1$ iterations of Newton’s Method must be used.
In Exercises 3–8, the roots of $f\left(x\right)$ are known or are easily found. Use 5 iterations of Newton's Method with the given initial approximation to approximate the root. Compare it to the known value of the root.
$f\left(x\right)=\mathrm{cos}x$, ${x}_{0}=1.5$
$f\left(x\right)=\mathrm{sin}x$, ${x}_{0}=1$
$f\left(x\right)={x}^{2}+x-2$, ${x}_{0}=0$
$f\left(x\right)={x}^{2}-2$, ${x}_{0}=1.5$
$f\left(x\right)=\mathrm{ln}x$, ${x}_{0}=2$
$f\left(x\right)={x}^{3}-{x}^{2}+x-1$, ${x}_{0}=1$
In Exercises 9–12, use Newton's Method to approximate all roots of the given functions accurate to 3 places after the decimal. If an interval is given, find only the roots that lie in that interval. Use technology to obtain good initial approximations.
$f\left(x\right)={x}^{3}+5{x}^{2}-x-1$
$f\left(x\right)={x}^{4}+2{x}^{3}-7{x}^{2}-x+5$
$f\left(x\right)={x}^{17}-2{x}^{13}-10{x}^{8}+10$ on $(-2,2)$
$f\left(x\right)={x}^{2}\mathrm{cos}x+\left(x-1\right)\mathrm{sin}x$ on $(-3,3)$
In Exercises 13–16, use Newton's Method to approximate when the given functions are equal, accurate to 3 places after the decimal. Use technology to obtain good initial approximations.
$f\left(x\right)={x}^{2}$, $g\left(x\right)=\mathrm{cos}x$
$f\left(x\right)={x}^{2}-1$, $g\left(x\right)=\mathrm{sin}x$
$f\left(x\right)={e}^{{x}^{2}}$, $g\left(x\right)=\mathrm{cos}x+1$
$f\left(x\right)=x$, $g\left(x\right)=\mathrm{tan}x$ on $[-6,6]$
Why does Newton’s Method fail in finding a root of $f\left(x\right)={x}^{3}-3{x}^{2}+x+3$ when ${x}_{0}=1$?
Why does Newton’s Method fail in finding a root of $f\left(x\right)=-17{x}^{4}+130{x}^{3}-301{x}^{2}+156x+156$ when ${x}_{0}=1$?
In Exercises 19–22, use Newton's Method to approximate the given value.
$\sqrt{16.5}$.
$\sqrt{24}$.
$\sqrt[3]{63}$.
$\sqrt[3]{8.5}$.
If we need to calculate ${c}^{-1/2}$ quickly (for example, in doing computer graphics), one possible approach is to use Newton’s Method. Show that ${c}^{-1/2}$ is a root of $f\left(x\right)={x}^{-2}-c$. According to Newton’s Method, what is ${x}_{n+1}$ in terms of ${x}_{n}$ and $c$ for this $f$? (You can read the Wikipedia article on Fast Inverse Square Root for even more details.)