Lagrange Error Bound

When you are making plans for something, you might try to think of all the ways your plan could go wrong so you can prepare for them. For example, before going on a car trip you might get the oil changed, have the tires checked, and make sure your insurance is up to date.

The same process happens with Taylor polynomials. What is the worst case for how far the Taylor polynomial is from the actual function value? The Lagrange error bound is the worst case scenario. Once you have a handle on that you have a guaranteed way of checking to make sure that your Taylor series converges!

Definition of the Lagrange Error Bound

Let's do a little review first. You will need the definition of the Taylor polynomial.

Let \(f\) be a function with at least \(n\) derivatives at \(x=a\). Then, the \(n^{th}\) order Taylor polynomial centered at \(x=a\) is given by

\[\begin{align} T_n(x)&=f(a)+\frac{f'(a)(x-a)}{1!}+\frac{f''(a)(x-a)^2}{2!}+\dots\\ & \quad +\frac{f^{(n)}(a)(x-a)^n}{n!}. \end{align}\]

Once you know how to define a Taylor polynomial, you can define the Taylor series.

Let \( f \) be a function that has derivatives of all orders at \( x=a \). The Taylor Series for \( f \) at \( x=a \) is

\[ T(x) = \sum_{n=0}^{\infty}\dfrac{f^{(n)}(a)}{n!}(x-a)^n , \]

where \( f^{(n)} \) indicates the \( n^{\text{th}}\) derivative of \( f \), and \( f^{(0)}\) is the original function \( f\).
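If you want to experiment with these definitions, here is a minimal Python sketch (assuming the sympy library, and using \(\sin x\) purely as an illustration) that builds the \(n^{\text{th}}\) order Taylor polynomial directly from the formula above.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)   # illustrative choice of function, not part of the definition
a, n = 0, 5     # center and order of the Taylor polynomial

# T_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k
T_n = sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
          for k in range(n + 1))

print(sp.expand(T_n))  # the degree-5 Maclaurin polynomial of sin x
```

The printed polynomial matches the Maclaurin polynomial of \(\sin x\) that shows up again later in this article.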

The big problem is that you need a way to know whether the Taylor series converges. You can find the actual error between the function and the Taylor polynomial; however, in many cases that can be quite challenging! What you need is a way to figure out just how bad the error can get. That is where the Lagrange error comes in!

Let \( f \) be a function that has derivatives of all orders in an open interval \(I\) containing \( x=a \). Then the Lagrange form of the remainder for the Taylor polynomial, also known as the Lagrange error, for \(f\) centered at \(a\) is

\[ R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \]

where \(c\) is between \(x\) and \(a\).
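For example, if \(f(x) = e^x\) and \(a = 0\), then every derivative of \(f\) is again \(e^x\), so the Lagrange error is

\[ R_n(x) = \frac{e^{c}}{(n+1)!}x^{n+1} \]

for some \(c\) between \(0\) and \(x\).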

Let's take a look at what the Lagrange error can do for you.

Formula for the Lagrange Error Bound

Once you know what the Lagrange error is you can start to see how helpful it can be. That starts with looking at the Taylor's Theorem with Remainder.

Taylor's Theorem with Remainder

Let \( f \) be a function that has derivatives of all orders in an open interval \(I\) containing \( x=a \). Then for each positive integer \(n\) and for each \(x\) in \(I\),

\[f(x) = T_n(x) + R_n(x)\]

where \(R_n(x)\) is the Lagrange error for some \(c\) between \(x\) and \(a\).

If you look closely, you will notice that the definition of the Lagrange error says that \(c\) is between \(x\) and \(a\), but Taylor's Theorem with Remainder gives you something more. It says that for some value of \(c\) between \(x\) and \(a\), the function is actually equal to the sum of the Taylor polynomial and the Lagrange error!

So if you want to know how far apart a function and its Taylor polynomial are, all you need to do is look at the Lagrange error.

The Lagrange error bound is the largest value the Lagrange error takes on given the function \(f\) and the interval \(I\).

That means the formula for the Lagrange error bound for a given function \(f\), interval \(I\), and point \(a\) in the interval is

\[ \max\limits_{x\in I}|R_n(x)| = \max\limits_{x\in I}\left| \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \right|, \]

where \(c\) is between \(x\) and \(a\), and you know by the way it is defined that

\[|R_n(x)| \le \max\limits_{x\in I} \left| \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \right| .\]

Now you have a way to tell if the Taylor series converges!

If \(R_n(x) \to 0\) as \(n \to \infty\) for all \(x\) in \(I\), then the Taylor series generated by \(f\) at \(x=a\) converges to \(f\) on \(I\), and this is written as

\[f(x) = \sum_{n=0}^{\infty}\dfrac{f^{(n)}(a)}{n!}(x-a)^n .\]

Notice that in the definition of the Taylor series, you weren't writing \(f(x) = \text{series}\) because you didn't know if the series actually converged. By looking at the Lagrange error you can tell if the series really does converge. Before going any further let's look at some examples.

Lagrange Error Bound Example

There are some properties the function and interval can have that will make finding the Lagrange error bound even simpler than defined above:

  • if the interval is centered at \(x=a\), so it can be written as \(I=(a-R,a+R)\) for some \(R>0\), then \(|(x-a)^{n+1} |\le R^{n+1}\) for every \(x\) in \(I\); and

  • if \(|f^{(n+1)}(x)| \le M\) on \(I\) for some \(M>0\) (in other words, the derivatives are bounded), then \(|f^{(n+1)}(c) | \le M\) on \(I\);

then you can conclude that

\[|R_n(x) | \le M\frac{R^{n+1}}{(n+1)!}.\]
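As a quick sanity check, here is a small Python sketch of this bound; the helper name `lagrange_bound` and the sample numbers are just illustrative choices.

```python
from math import factorial

def lagrange_bound(M, R, n):
    """Worst-case error M * R^(n+1) / (n+1)! for an n-th order Taylor polynomial."""
    return M * R ** (n + 1) / factorial(n + 1)

# Illustrative numbers: derivatives bounded by M = 2 on an interval of radius R = 0.5
for n in range(1, 5):
    print(n, lagrange_bound(2, 0.5, n))
```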

Let's look at an example applying this conclusion.

What is the maximum error when finding a Maclaurin polynomial for \(\sin x\) on the interval \( \left[ -\dfrac{\pi}{2}, \dfrac{\pi}{2} \right]\)? What can you conclude about the Maclaurin series for \(\sin x\)?

Solution:

First, remember that a Maclaurin polynomial is just a Taylor polynomial centered at \(x=0\). Looking at some of the derivatives of \(f(x)=\sin x\) along with their function values at \(x=0\) you get:

\[ \begin{array}{ccc} &f(x) = \sin x & \quad \quad & f(0) = 0\\ &f'(x) = \cos x & \quad \quad & f'(0)= 1 \\ &f''(x) = -\sin x & \quad \quad & f''(0)=0 \\ &f'''(x) = -\cos x & \quad \quad & f'''(0)= -1 \\ &f^{(4)}(x) = \sin x & \quad \quad & f^{(4)}(0) = 0. \end{array} \]

As you can see it cycles back around to the start of the list when you get to the \(4^{\text{th}}\) derivative. So the Maclaurin polynomial of order \(n\) for \(\sin x\) is

\[\begin{align} T_n(x) &= 0 + \frac{1}{1!}x + 0 + \frac{-1}{3!}x^3 + 0 + \dots \\ & \quad + \begin{cases} 0 & \text{ if } n \text{ is even} \\ \dfrac{f^{(n)}(0)}{n!}x^n & \text{ if } n \text{ is odd} \end{cases} \end{align}\]

and the Lagrange error will have a different formula depending on whether \(n\) is odd or even as well.

However, you want to find the maximum error, and that certainly isn't going to happen when the error term is zero! This polynomial is centered at \(x=0\), and the interval is

\[\left[ -\dfrac{\pi}{2}, \dfrac{\pi}{2} \right].\]

That means \(R = \frac{\pi}{2}\). Because all of the derivatives involve sine and cosine, you also know that

\[|f^{(n+1)}(c) | \le 1\]

for any \(c\) in the interval \(I\). Therefore

\[\begin{align} |R_n(x) | &\le M\frac{R^{n+1}}{(n+1)!} \\ &= 1\cdot \dfrac{\left(\dfrac{\pi}{2}\right)^{n+1} }{(n+1)!} \\ &= \left(\dfrac{\pi}{2}\right)^{n+1} \frac{1}{(n+1)!}, \end{align}\]

and that is the maximum error.

You would like to draw a conclusion about the Maclaurin series for \(\sin x\). To do that you need to look at

\[\lim\limits_{n\to \infty} |R_n(x) | = \lim\limits_{n\to \infty} \left(\dfrac{\pi}{2}\right)^{n+1} \frac{1}{(n+1)!} .\]

Since the factorial \((n+1)!\) in the denominator grows much faster than the exponential \(\left(\dfrac{\pi}{2}\right)^{n+1}\) in the numerator, this sequence converges to \(0\) as \(n \to \infty\), so you can conclude that the Maclaurin series does converge. In fact the Maclaurin series is equal to the function on the entire interval \( \left[ -\dfrac{\pi}{2}, \dfrac{\pi}{2} \right]\).
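If you would rather see the numbers, the following sketch (plain Python, nothing beyond the standard library) evaluates the bound \(\left(\dfrac{\pi}{2}\right)^{n+1}\dfrac{1}{(n+1)!}\) for a few values of \(n\) and shows it shrinking toward \(0\).

```python
from math import pi, factorial

# The bound (pi/2)^(n+1) / (n+1)! for the Maclaurin polynomials of sin x
for n in (1, 3, 5, 10, 20):
    bound = (pi / 2) ** (n + 1) / factorial(n + 1)
    print(f"n = {n:2d}: bound = {bound:.3e}")
```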

For a reminder on sequences and their convergence, see Sequences and Limit of a Sequence.

Let's look at the idea from a slightly different angle.

When you are estimating

\[\sin \left(\dfrac{\pi}{16}\right)\]

using the Maclaurin polynomial, what is the smallest degree of the polynomial that guarantees the error will be less than \(\dfrac{1}{100}\)?

Solution:

From the previous example you know that the error on the interval \( \left[ -\dfrac{\pi}{2}, \dfrac{\pi}{2} \right]\) has the property that

\[|R_n(x) | \le \left(\dfrac{\pi}{2}\right)^{n+1} \frac{1}{(n+1)!} \]

You want that error to be less than \(\dfrac{1}{100}\), or in other words that

\[ \left(\dfrac{\pi}{2}\right)^{n+1} \frac{1}{(n+1)!} < \frac{1}{100}.\]

Unfortunately solving for \(n\) is quite challenging! So the only thing you can do is try out values of \(n\) and see which one makes the Lagrange error bound sufficiently small.
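With a computer or calculator handy, "trying out values of \(n\)" is a short loop; the sketch below is one way to do it, reusing the \(\dfrac{\pi}{2}\) bound from the previous example.

```python
from math import pi, factorial

target = 1 / 100
n = 0
while (pi / 2) ** (n + 1) / factorial(n + 1) >= target:
    n += 1
print(n)  # prints 6: the first n where the pi/2-interval bound drops below 1/100
```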

But what if you don't have a calculator handy? The problem is really that the interval is too big, which forces you to use \(R = \dfrac{\pi}{2} >1\) in the bound. Can you change the interval so that \(\dfrac{\pi}{16} \) is still inside it, but the bound is smaller? Sure thing!

The maximum error when finding a Maclaurin polynomial for \(\sin x\) on the interval \( \left[ -\dfrac{\pi}{4}, \dfrac{\pi}{4} \right]\) has the property that

\[|R_n(x) | \le \left(\dfrac{\pi}{4}\right)^{n+1} \frac{1}{(n+1)!} ,\]

where you have used the same technique as in the previous example. Then

\[ \dfrac{\pi}{16} \in \left[ -\dfrac{\pi}{4}, \dfrac{\pi}{4} \right] \]

and

\[ \dfrac{\pi}{4} < 1, \]

so

\[\begin{align} |R_n(x) | &\le \left(\dfrac{\pi}{4}\right)^{n+1} \frac{1}{(n+1)!} \\ &< \frac{1}{(n+1)!}. \end{align}\]

Now you need to make sure that the error is small enough, which means you need that

\[ \frac{1}{(n+1)!} < \frac{1}{100},\]

which is much easier to calculate. In fact if you take \(n=4\) you get that

\[ \frac{1}{(4+1)!} = \frac{1}{5!} = \frac{1}{120} < \frac{1}{100}.\]

That might make you think that you need a \(4^{\text{th}}\) degree Maclaurin polynomial, but you already know that the even-degree terms of the Maclaurin polynomial are zero, so the polynomial is exactly the same for \(n=3\) and \(n=4\). Should you pick \(n=3\) or \(n=5\) to make sure the error is small enough? If you want an absolute guarantee that the error is going to be small enough, use \(n=5\).
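If you want to double-check the claim that the degree \(3\) and degree \(4\) Maclaurin polynomials of \(\sin x\) are identical, here is a small sympy sketch (the helper name `maclaurin` is just an illustrative choice):

```python
import sympy as sp

x = sp.symbols('x')

def maclaurin(f, n):
    """n-th order Maclaurin polynomial of f, built term by term."""
    return sum(sp.diff(f, x, k).subs(x, 0) / sp.factorial(k) * x**k
               for k in range(n + 1))

T3 = maclaurin(sp.sin(x), 3)
T4 = maclaurin(sp.sin(x), 4)
print(sp.simplify(T3 - T4) == 0)  # True: the degree-4 term of sin's series is zero
```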

If you check the actual errors,

\[ \begin{align} \left|T_3\left(\dfrac{\pi}{16}\right) - \sin \left(\dfrac{\pi}{16}\right) \right|&= \left| \frac{1}{1!}\left(\dfrac{\pi}{16}\right) + \frac{-1}{3!}\left(\dfrac{\pi}{16}\right) ^3 - \sin \left(\dfrac{\pi}{16}\right) \right| \\ &= \left|\dfrac{\pi}{16} - \dfrac{1}{6}\left(\dfrac{\pi}{16}\right) ^3 - \sin \left(\dfrac{\pi}{16}\right) \right| \\ &\approx 0.0000024, \end{align}\]

which is quite a bit smaller than you needed!

Would it have been small enough if you had taken \(n=1\)? In that case

\[ \begin{align} \left|T_1\left(\dfrac{\pi}{16}\right) - \sin \left(\dfrac{\pi}{16}\right) \right|&= \left| \frac{1}{1!}\left(\dfrac{\pi}{16}\right) - \sin \left(\dfrac{\pi}{16}\right) \right| \\ & \approx 0.00126, \end{align}\]

so even that is smaller than the error you were given. The problem of course is doing the approximation without using a calculator!
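Both of these error calculations are easy to reproduce; a minimal check in plain Python (values rounded) looks like this:

```python
from math import sin, pi

x = pi / 16
T1 = x             # first-order Maclaurin polynomial of sin, evaluated at x
T3 = x - x**3 / 6  # third-order Maclaurin polynomial of sin, evaluated at x

print(abs(T1 - sin(x)))  # about 0.00126
print(abs(T3 - sin(x)))  # about 0.0000024
```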

You might have noticed that the Maclaurin series in the example involving the sine function is an alternating series. So how does the alternating series error bound compare to the Lagrange error bound?

Alternating Series Error Bound vs Lagrange Error Bound

Be wary, the Lagrange error bound and the alternating series error bound are not the same thing!

Given a series

\[ f(x) = \sum\limits_{n=1}^\infty a_nx^n\]

where the signs of \(a_n\) alternate and the terms \(|a_n x^n|\) decrease to \(0\), the error made by stopping after the \(x^n\) term is at most the size of the first omitted term:

\[ \text{alternating series error bound} = \left| a_{n+1}x^{n+1}\right|.\]

Notice that the alternating series error bound does not involve any derivatives. Even when you are looking at a Maclaurin series, the alternating series error bound and the Lagrange error bound might very well give you different bounds, because one involves only powers of \(x\) while the other involves derivatives of the function as well as powers of \(x\).
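To see the difference concretely, here is a small comparison sketch at \(x = \dfrac{\pi}{16}\) for the degree \(3\) Maclaurin polynomial of \(\sin x\) (the choice of point and degree is just illustrative): the Lagrange bound uses \(M=1\) and \(n=3\), while the alternating series bound uses the first omitted nonzero term, the \(x^5\) term.

```python
from math import sin, pi, factorial

x = pi / 16

# Lagrange bound with n = 3: every derivative of sin is bounded by M = 1,
# so |R_3(x)| <= 1 * x^4 / 4!
lagrange = x**4 / factorial(4)

# Alternating series bound: the first omitted nonzero term of sin's series, x^5 / 5!
alternating = x**5 / factorial(5)

# Actual error of the degree-3 Maclaurin polynomial at x
actual = abs((x - x**3 / 6) - sin(x))

print(lagrange, alternating, actual)
```

With these numbers the Lagrange bound comes out around \(6.2\times 10^{-5}\) while the alternating series bound is about \(2.4\times 10^{-6}\), so the two bounds really can disagree, exactly as the paragraph above warns.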

Lagrange Error Bound Proof

The proof of the Lagrange form of the remainder involves repeatedly applying the Mean Value Theorem (or, in some treatments, repeated integration by parts) and comparing the result to the Taylor polynomial. Needless to say, that can get technical and complicated quite quickly, so the proof is not included here.

Lagrange Error Bound - Key takeaways

  • Let \( f \) be a function that has derivatives of all orders in an open interval \(I\) containing \( x=a \). Then the Lagrange form of the remainder for the Taylor polynomial, also known as the Lagrange error, for \(f\) centered at \(a\) is

    \[ R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \]

    where \(c\) is between \(x\) and \(a\).

  • The Lagrange error bound is the largest value the Lagrange error takes on given the function \(f\) and the interval \(I\).

  • If \(R_n(x) \to 0\) as \(n \to \infty\) for all \(x\) in \(I\), then the Taylor series generated by \(f\) at \(x=a\) converges to \(f\) on \(I\), and this is written as

    \[f(x) = \sum_{n=0}^{\infty}\dfrac{f^{(n)}(a)}{n!}(x-a)^n .\]

  • If the interval is centered at \(x=a\), so it can be written as \(I=(a-R,a+R)\) for some \(R>0\), then \(|(x-a)^{n+1} |\le R^{n+1}\); and if \(|f^{(n+1)}(x)| \le M\) on \(I\) for some \(M>0\), then \(|f^{(n+1)}(c) | \le M\) on \(I\). In that case

    \[|R_n(x) | \le M\frac{R^{n+1}}{(n+1)!}.\]

Frequently Asked Questions about Lagrange Error Bound

What is the Lagrange error bound?

The Lagrange error bound is an upper bound for how far away the Taylor polynomial approximation is from the actual function at a given point.

How do you find the Lagrange error bound?

By using the Lagrange form of the remainder for a Taylor polynomial. It involves taking one more derivative than is used in the Taylor polynomial.

Why is the Lagrange error bound useful?

The Lagrange error bound acts as a worst case scenario for how far the Taylor polynomial is from the actual function at a point. That is why, if the Lagrange error bound goes to 0 as you take the limit, you know the Taylor series converges.

When can you use the Lagrange error bound?

The function needs to have derivatives of all orders in an open interval around the point you care about. Then you can calculate the Lagrange error bound and use it to see if the Taylor series converges.

What is \(n\) in the Lagrange error bound?

It is the order of the associated Taylor polynomial.

