
Section 7.1 The Integral Form of the Remainder

Now that we have a rigorous definition of the convergence of a sequence, let’s apply this to Taylor series. Recall that the Taylor series of a function \(f(x)\) expanded about the point \(a\) is given by
\begin{equation*} \sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n=f(a)+\frac{f^{\,\prime}(a)}{1!}(x-a)+\frac{f^{\,\prime\prime}(a)}{2!}(x-a)^2+\cdots \end{equation*}
When we say that \(f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n\) for a particular value of \(x\text{,}\) what we mean is that the sequence of partial sums
\begin{alignat*}{1} \amp\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)_{n=0}^\infty\\ \amp= \left(f(a), f(a)+\frac{f^{\prime}(a)}{1!}(x-a),f(a) +\frac{f^{\prime}(a)}{1!}(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2,\ldots\right) \end{alignat*}
converges to the number \(f(x)\text{.}\) Note that the index in the summation was changed to \(j\) to allow \(n\) to represent the index of the sequence of partial sums. As intimidating as this may look, bear in mind that for a fixed real number \(x\) this is still a sequence of real numbers, so saying \(f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n\) means that \(\lim_{n\rightarrow\infty}\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)=f(x)\text{,}\) and in the previous chapter we developed some tools to examine this phenomenon. In particular, we know that \(\lim_{n\rightarrow\infty}\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)=f(x)\) is equivalent to
\begin{equation*} \lim_{n\rightarrow\infty}\Biggl[f(x)-\left(\sum_{j=0}^n \frac{f^{(j)}(a)}{j!}(x-a)^j\right)\Biggr]=0\text{.} \end{equation*}
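To make this concrete, here is a small numerical illustration (not part of the argument; the helper name `partial_sum_exp` is ours) using the Maclaurin series of \(e^x\text{,}\) whose convergence is established later in this section. For a fixed \(x\text{,}\) the differences between \(e^x\) and the partial sums form a sequence of real numbers shrinking toward zero:

```python
import math

# Partial sums of the Maclaurin series of e^x: sum_{j=0}^{n} x^j / j!
def partial_sum_exp(x, n):
    return sum(x**j / math.factorial(j) for j in range(n + 1))

# For a fixed x, the differences e^x - S_n form a sequence of real
# numbers tending to 0, which is what convergence of the series means.
x = 2.0
diffs = [abs(math.exp(x) - partial_sum_exp(x, n)) for n in (0, 5, 10, 15, 20)]
assert all(d1 > d2 for d1, d2 in zip(diffs, diffs[1:]))
```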
We have seen an example of this already. Problem 6.4.4 of the last chapter basically had you show that the geometric series \(1+x+x^2+x^3+\cdots\) converges to \(\frac{1}{1-x}\) for \(|x|\lt 1\) by showing that \(\lim_{n\rightarrow\infty}\Biggl[\frac{1}{1-x}-\left(\sum_{j=0}^nx^j\right)\Biggr]=0\text{.}\)
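For the geometric series this difference even has a closed form: since \(\sum_{j=0}^n x^j=\frac{1-x^{n+1}}{1-x}\text{,}\) the difference is \(\frac{x^{n+1}}{1-x}\text{,}\) which tends to zero when \(|x|\lt 1\text{.}\) A quick Python check (an illustration only; the helper name is ours):

```python
# Remainder of the geometric series: 1/(1-x) minus the nth partial sum.
def geom_remainder(x, n):
    return 1 / (1 - x) - sum(x**j for j in range(n + 1))

x = 0.5
for n in range(10):
    # agrees with the closed form x^{n+1}/(1-x), which -> 0 when |x| < 1
    assert abs(geom_remainder(x, n) - x**(n + 1) / (1 - x)) < 1e-12
```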
There is generally not a readily recognizable closed form for the partial sum for a Taylor series. The geometric series is a special case. Fortunately, for the issue at hand (convergence of a Taylor series), we don’t need to analyze the series itself. What we need to show is that the difference between the function and the \(n\)th partial sum converges to zero. This difference is called the remainder (of the Taylor series). (Why?)
While it is true that the remainder is simply
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)\text{,} \end{equation*}
this form is not easy to work with. Fortunately, a number of alternate versions of this remainder are available. We will explore these in this chapter.
Recall the result of Theorem 4.1.9 from Chapter 4:
\begin{align*} f(x)=f(a)+\frac{f^{\,\prime}(a)}{1!}(x-a)+\frac{f^{\,\prime\prime}(a)}{2!}(x-a)^2\amp +\cdots+\frac{f^{(n)}(a)}{n!}(x-a)^n\\ \amp +\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\text{.} \end{align*}
We can use this by rewriting it as
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)=\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\text{.} \end{equation*}
The expression \(\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\) is called the integral form of the remainder for the Taylor series of \(f(x)\text{,}\) and the Taylor series will converge to \(f(x)\) exactly when the sequence \(\left(\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\right)_{n=0}^\infty\) converges to zero. It turns out that this form of the remainder is often easier to handle than the original \(f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)\text{,}\) and we can use it to obtain some general results.
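As a numerical sanity check of this identity (an illustration, not a proof; the helper names are ours), the sketch below approximates the integral form of the remainder for \(f(x)=e^x\) at \(a=0\text{,}\) where every derivative \(f^{(n+1)}(t)\) is \(e^t\text{,}\) using a midpoint rule, and compares it with the direct difference between \(f(x)\) and the \(n\)th partial sum:

```python
import math

# Compare the integral form of the remainder with the direct difference
# f(x) - (nth Taylor polynomial), for f = exp expanded at a = 0.
def taylor_poly_exp(x, n):
    return sum(x**j / math.factorial(j) for j in range(n + 1))

def integral_remainder_exp(x, n, steps=100_000):
    # (1/n!) * integral_{t=0}^{x} e^t (x - t)^n dt, via the midpoint rule
    h = x / steps
    total = sum(math.exp((k + 0.5) * h) * (x - (k + 0.5) * h) ** n
                for k in range(steps))
    return total * h / math.factorial(n)

x, n = 1.5, 4
direct = math.exp(x) - taylor_poly_exp(x, n)
assert abs(direct - integral_remainder_exp(x, n)) < 1e-6
```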
In order to prove Theorem 7.1.1, it might help to first prove the following.

Problem 7.1.3.

Prove Lemma 7.1.2.
Hint.
\(-|f(t)|\leq f(t)\leq|f(t)|\text{.}\)

Problem 7.1.4.

Prove Theorem 7.1.1.

Hint.
You might want to use Problem 6.4.8 of Chapter 6. Also there are two cases to consider: \(a\lt x\) and \(x\lt a\) (the case \(x=a\) is trivial). You will find that this is true in general. This is why we will often indicate that \(t\) is between \(a\) and \(x\) as in the theorem. In the case \(x\lt a\text{,}\) notice that
\begin{align*} \left|\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\right|\amp =\left|(-1)^{n+1}\int_{t=x}^af^{(n+1)}(t)(t-x)^n\dx{ t}\right|\\ \amp =\left|\int_{t=x}^af^{(n+1)}(t)(t-x)^n\dx{ t}\right|. \end{align*}

Problem 7.1.5.

Use Theorem 7.1.1 to prove that for any real number \(x\text{:}\)

(a)

\(\displaystyle\sin x=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}\)

(b)

\(\displaystyle\cos x= \sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(2n)!}\)

(c)

\(\displaystyle e^x=\sum_{n=0}^\infty\frac{x^n}{n!}\)
Problem 7.1.5.c shows that the Taylor series of \(e^x\) expanded at zero converges to \(e^x\) for any real number \(x\text{.}\) Theorem 7.1.1 can be used in a similar fashion to show that
\begin{equation*} e^x=\sum_{n=0}^\infty\frac{e^a(x-a)^n}{n!} \end{equation*}
for any real numbers \(a\) and \(x\text{.}\)
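A numerical check of this expansion about a general point \(a\) (an illustration only; the helper name is ours, and the series is truncated at a fixed number of terms):

```python
import math

# Check e^x = sum_{n>=0} e^a (x-a)^n / n! numerically for a few pairs (a, x),
# truncating the series at `terms` terms.
def exp_series_about(a, x, terms=40):
    return sum(math.exp(a) * (x - a) ** n / math.factorial(n)
               for n in range(terms))

for a, x in [(1.0, 3.0), (-2.0, 0.5), (0.0, -1.0)]:
    assert abs(exp_series_about(a, x) - math.exp(x)) < 1e-9
```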
Recall that in Chapter 3 we showed that if we define the function \(E(x)\) by the power series \(\sum_{n=0}^\infty\frac{x^n}{n!}\text{,}\) then \(E(x+y)=E(x)E(y)\text{.}\) This, of course, is just the familiar addition property of integer exponents extended to any real number. In Chapter 3 we had to assume that defining \(E(x)\) as a series was meaningful, because we did not address the convergence of the series in that chapter. Now that we know the series converges for any real number, we see that the definition
\begin{equation*} f(x) = e^x = \sum_{n=0}^\infty\frac{x^n}{n!} \end{equation*}
is in fact valid.
Assuming that we can differentiate this series term-by-term it is straightforward to show that \(f^\prime(x) = f(x)\text{.}\)
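Explicitly, assuming the series may be differentiated term by term, the constant term vanishes and reindexing with \(m=n-1\) gives
\begin{equation*} f^\prime(x)=\sum_{n=1}^\infty\frac{nx^{n-1}}{n!}=\sum_{n=1}^\infty\frac{x^{n-1}}{(n-1)!}=\sum_{m=0}^\infty\frac{x^m}{m!}=f(x)\text{.} \end{equation*}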
Along with Taylor’s formula this can then be used to show that \(e^{a+b}=e^ae^b\) more elegantly than the rather cumbersome proof in equation (14), as the following problem shows.

Problem 7.1.6.

Recall that if \(f(x)=e^x\) then \(f^\prime(x) = e^x\text{.}\) Use this along with the Taylor series expansion of \(e^x\) about \(a\) to show that
\begin{equation*} e^{a+b}=e^ae^b. \end{equation*}
Theorem 7.1.1 is a nice “first step” toward a rigorous theory of the convergence of Taylor series, but it is not applicable in all cases. For example, consider the function \(f(x)=\sqrt{1+x}\text{.}\) As we saw in Chapter 3, Problem 3.2.12, this function’s Maclaurin series (the binomial series for \(\left(1+x\right)^{1/2}\)) appears to be converging to the function for \(x\in(-1,1)\text{.}\) While this is, in fact, true, the above proposition does not apply. If we consider the derivatives of \(f(t)=(1+t)^{1/2}\text{,}\) we obtain:
\begin{align*} f^\prime(t)\amp =\frac{1}{2}(1+t)^{\frac{1}{2}-1}\\ f^{\prime\prime}(t)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)(1+t)^{\frac{1}{2}-2}\\ f^{\prime\prime\prime}(t)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2 \right)(1+t)^{\frac{1}{2}-3}\\ \amp \vdots\\ f^{(n+1)}(t)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right) \cdots\left(\frac{1}{2}-n\right)(1+t)^{\frac{1}{2}-(n+1)}\text{.} \end{align*}
Notice that
\begin{equation*} \left|f^{(n+1)}(0)\right|=\frac{1}{2}\left(1-\frac{1}{2}\right)\left(2-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)\text{.} \end{equation*}
Since this sequence grows without bound as \(n\rightarrow\infty\text{,}\) there is no chance for us to find a number \(B\) to act as a bound for all of the derivatives of \(f\) on any interval containing 0 and \(x\text{,}\) and so the hypothesis of Theorem 7.1.1 will never be satisfied. We need a more delicate argument to prove that
\begin{equation*} \sqrt{1+x}=1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3+\cdots \end{equation*}
is valid for \(x\in(-1,1)\text{.}\) To accomplish this task, we will need to express the remainder of the Taylor series differently. Fortunately, there are at least two such alternate forms.
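The unbounded growth of \(\left|f^{(n+1)}(0)\right|\) noted above is easy to observe numerically; here is a minimal Python sketch (the helper name is ours, purely for illustration):

```python
# |f^{(n+1)}(0)| = (1/2)(1 - 1/2)(2 - 1/2)...(n - 1/2) for f(t) = (1+t)^(1/2).
def deriv_at_zero_abs(n):
    prod = 0.5
    for k in range(1, n + 1):
        prod *= k - 0.5
    return prod

# Once k >= 2 each new factor k - 1/2 exceeds 1, so the values blow up;
# no single bound B can dominate every derivative of f near 0.
vals = [deriv_at_zero_abs(n) for n in range(1, 12)]
assert all(a < b for a, b in zip(vals, vals[1:]))
```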