Chapter 7 A “Tayl” of Three Remainders

Section 7.1 The Integral Form of the Remainder

Now that we have a rigorous definition of the convergence of a sequence, let’s apply this to Taylor series. Recall that the Taylor series of a function \(f(x)\) expanded about the point \(a\) is given by
\begin{equation*} \sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n=f(a)+\frac{f^{\,\prime}(a)}{1!}(x-a)+\frac{f^{\,\prime\prime}(a)}{2!}(x-a)^2+\cdots \end{equation*}
When we say that \(f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n\) for a particular value of \(x\text{,}\) what we mean is that the sequence of partial sums
\begin{align*} \amp{} \left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)_{n=0}^\infty\\ \amp{}\ \ \ \ \ \ \ \ \ \ \ \ = \left(f(a), f(a)+\frac{f^{\prime}(a)}{1!}(x-a),\right. \\ \amp{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.{}f(a) +\frac{f^{\prime}(a)}{1!}(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2,\ldots\right) \end{align*}
converges to the number \(f(x)\text{.}\) Note that the index in the summation was changed to \(j\) to allow \(n\) to represent the index of the sequence of partial sums. As intimidating as this may look, bear in mind that for a fixed real number \(x\) this is still a sequence of real numbers, so saying \(f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n\) means that \(\limitt{n}{\infty}{\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)}=f(x)\text{,}\) and in the previous chapter we developed some tools to examine this phenomenon. In particular, we know that \(\limitt{n}{\infty}{\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)}=f(x)\) is equivalent to
\begin{equation*} \limit{n}{\infty}{\Biggl[f(x)-\left(\sum_{j=0}^n \frac{f^{(j)}(a)}{j!}(x-a)^j\right)\Biggr]}=0\text{.} \end{equation*}
We have seen an example of this already. In Problem 6.1.13 you had to show that the geometric series, \(1+x+x^2+x^3+\cdots\text{,}\) converges to \(\frac{1}{1-x}\) for \(|x|\lt 1\) by showing that
\begin{equation*} \limit{n}{\infty}{\Biggl[\frac{1}{1-x}-\left(\sum_{j=0}^nx^j\right)\Biggr]=0}\text{.} \end{equation*}
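This limit can also be watched numerically. The following sketch (our illustration, not part of the text's argument) computes the difference \(\frac{1}{1-x}-\sum_{j=0}^n x^j\) at a sample point and shows it shrinking toward zero:

```python
# Numeric sketch: for |x| < 1 the difference between 1/(1-x) and the
# n-th partial sum of the geometric series shrinks to 0 as n grows.
def geometric_remainder(x, n):
    """Return 1/(1-x) minus the n-th partial sum of the geometric series."""
    partial = sum(x**j for j in range(n + 1))
    return 1 / (1 - x) - partial

x = 0.7
rems = [abs(geometric_remainder(x, n)) for n in (5, 10, 20, 40)]
print(rems)  # each remainder is smaller than the previous one
```

Of course, a few sample values prove nothing; the point of this chapter is to control the remainder for every \(n\) at once.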
For the issue at hand (convergence of a Taylor series), we don’t need to analyze the series itself. What we need to show is that the difference between the function and the \(n\)th partial sum converges to zero. This difference is called the remainder (of the Taylor series). (Why?)
While it is true that the remainder is simply
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)\text{,} \end{equation*}
it is not easy to work with it in this form. Fortunately, a number of alternate forms of this remainder are available. We will explore some of those in this chapter.
Recall from Theorem 4.1.12 of Chapter 4 that
\begin{align*} f(x)=f(a)+\frac{f^{\,\prime}(a)}{1!}(x-a)+\frac{f^{\,\prime\prime}(a)}{2!}(x-a)^2\amp{} +\cdots+\frac{f^{(n)}(a)}{n!}(x-a)^n\\ \amp{} + \frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{t} \amp{}\amp{} \end{align*}
or
\begin{equation} f(x)=\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)+\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{t}\text{,}\tag{7.1.1} \end{equation}
which we can rewrite as
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)=\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\text{.} \end{equation*}
The expression
\begin{equation} \frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\tag{7.1.2} \end{equation}
is called the Integral Form of the remainder of the Taylor series of \(f(x)\text{.}\)
Clearly, the Taylor series will converge to \(f(x)\) exactly when
\begin{equation*} \limit{n}{\infty}{\left(\frac{1}{n!}\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{t} \right)}=0 \text{.} \end{equation*}
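The Integral Form can be checked numerically in a case where every derivative is known. In the sketch below (our illustration; the choice of \(f(t)=e^t\text{,}\) \(a=0\text{,}\) and the midpoint rule are ours), \(f^{(n+1)}(t)=e^t\text{,}\) so a numerical estimate of (7.1.2) should match the directly computed remainder \(e^x-\sum_{j=0}^n\frac{x^j}{j!}\text{:}\)

```python
import math

# Numeric sketch: for f(t) = e^t and a = 0, the integral form of the
# remainder is (1/n!) * integral_0^x e^t (x-t)^n dt.  A midpoint-rule
# estimate of that integral should agree with e^x minus the n-th
# Taylor partial sum.
def integral_remainder_exp(x, n, steps=50_000):
    h = x / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h                      # midpoint of each subinterval
        total += math.exp(t) * (x - t) ** n * h
    return total / math.factorial(n)

x, n = 1.5, 6
direct = math.exp(x) - sum(x**j / math.factorial(j) for j in range(n + 1))
print(direct, integral_remainder_exp(x, n))  # the two agree closely
```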
The Integral Form of the remainder is usually easier to work with than the original form:
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right) \end{equation*}
and we can use the Integral Form to obtain some conditions which guarantee the convergence of particular Taylor series such as the following.
In order to prove this, it might help to first prove the following.

Problem 7.1.4.

Hint.
You might want to use Problem 6.2.20 of Chapter 6. Also there are two cases to consider: \(a\lt x\) and \(x\lt a\) (the case \(x=a\) is trivial). You will find that this is true in general. This is why we will often indicate that \(t\) is between \(a\) and \(x\) as in the theorem. In the case \(x\lt a\text{,}\) notice that
\begin{align*} \abs{\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}}\amp =\abs{(-1)^{n+1}\int_{t=x}^af^{(n+1)}(t)(t-x)^n\dx{ t}}\\ \amp =\abs{\int_{t=x}^af^{(n+1)}(t)(t-x)^n\dx{ t}}. \end{align*}

Problem 7.1.5.

Use Theorem 7.1.1 to prove that for any real number \(x\)
(a)
\(\displaystyle\sin x=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}\)
(b)
\(\displaystyle\cos x= \sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(2n)!}\)
(c)
\(\displaystyle e^x=\sum_{n=0}^\infty\frac{x^n}{n!}\)
Part (c) of Problem 7.1.5 shows that the Taylor series of \(e^x\) expanded at zero converges to \(e^x\) for any real number \(x\text{.}\) Theorem 7.1.1 can be used in a similar fashion to show that
\begin{equation*} e^x=\sum_{n=0}^\infty\frac{e^a(x-a)^n}{n!} \end{equation*}
for any real numbers \(a\) and \(x\text{.}\)
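As a numerical illustration of this claim (ours, not a proof), the partial sums of the expansion about a point such as \(a=2\) do approach \(e^x\) even when \(x\) is well away from \(a\text{:}\)

```python
import math

# Numeric sketch: partial sums of the Taylor series of e^x expanded
# about a = 2 approach e^x, here at x = 5.
def exp_taylor_at_a(x, a, n):
    return sum(math.exp(a) * (x - a) ** j / math.factorial(j)
               for j in range(n + 1))

approx = exp_taylor_at_a(5.0, 2.0, 25)
print(approx, math.exp(5.0))  # essentially identical
```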
Recall that in Chapter 3 we showed that if we define the function \(E(x)\) by the power series \(\sum_{n=0}^\infty\frac{x^n}{n!}\) then \(E(x+y)=E(x)E(y)\text{.}\) This, of course, is just the familiar addition property of integer exponents extended to any real number. In Chapter 3 we had to assume that defining \(E(x)\) as a series was meaningful because we did not address the convergence of the series in that chapter. Now that we know the series converges for any real number we see that the definition
\begin{equation*} f(x) = e^x = \sum_{n=0}^\infty\frac{x^n}{n!} \end{equation*}
is in fact valid.
Assuming that we can differentiate this series term-by-term, it is straightforward to show that \(f^\prime(x) = f(x)\text{.}\) (We can, but that proof will have to wait for Section 11.1.)
If term-by-term differentiation is valid, we can use Taylor’s formula to show that \(e^{a+b}=e^ae^b\) more elegantly than the rather cumbersome proof in equation (3.2.3), as the following problem shows.

Problem 7.1.6.

Expand \(e^x\) about \(a\) using the Taylor series expansion to show that
\begin{equation*} e^{a+x}=e^a\cdot e^x\text{.} \end{equation*}
Theorem 7.1.1 is a nice first step toward a rigorous theory of the convergence of Taylor series, but it is not applicable in all cases. For example, consider the function \(f(x)=\left(1+x\right)^{\frac12}\text{.}\) As we saw in Problem 3.2.18, the Maclaurin (binomial) series for \(\left(1+x\right)^{\frac12}\) appears to be converging to the function for \(x\in(-1,1)\text{.}\) While this is true, Theorem 7.1.1 does not apply. We can see this by finding a general formula for the derivatives of \(f(x)=\left(1+x\right)^{\frac12}\) as follows.
\begin{align*} f^\prime(x)\amp =\frac{1}{2}(1+x)^{\frac{1}{2}-1}\\ f^{\prime\prime}(x)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)(1+x)^{\frac{1}{2}-2}\\ f^{\prime\prime\prime}(x)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2 \right)(1+x)^{\frac{1}{2}-3}\\ \amp \vdots\\ f^{(n+1)}(x)\amp =\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right) \cdots\left(\frac{1}{2}-n\right)(1+x)^{\frac{1}{2}-(n+1)}\text{.} \end{align*}
Notice that
\begin{equation*} \abs{f^{(n+1)}(0)}=\frac{1}{2}\left(1-\frac{1}{2}\right)\left(2-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)\text{.} \end{equation*}
Since this sequence grows without bound as \(n\rightarrow\infty\text{,}\) there is no bound for all of the derivatives of \(f\) on any interval containing \(0\) and \(x\text{,}\) and so the hypothesis of Theorem 7.1.1 will never be satisfied on an interval containing zero. We need a more delicate argument to prove that
\begin{equation*} \left(1+x\right)^{\frac12}=1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3+\cdots \end{equation*}
is valid for \(x\in(-1,1)\text{.}\) To accomplish that task, we will need to express the remainder of the Taylor series differently. Fortunately, there are at least two more forms of the remainder available.
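The unbounded growth of \(\abs{f^{(n+1)}(0)}\) computed above is easy to see numerically. This sketch (our illustration) multiplies out the product \(\frac{1}{2}\left(1-\frac{1}{2}\right)\left(2-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)\text{:}\)

```python
# Numeric sketch: |f^(n+1)(0)| for f(x) = (1+x)^(1/2) is the product
# (1/2)(1 - 1/2)(2 - 1/2)...(n - 1/2), which grows without bound.
def abs_deriv_at_zero(n):
    prod = 0.5
    for k in range(1, n + 1):
        prod *= k - 0.5
    return prod

vals = [abs_deriv_at_zero(n) for n in (1, 5, 10, 20)]
print(vals)  # the values blow up, so no single bound covers all derivatives
```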

Section 7.2 Lagrange’s Form of the Remainder

Joseph-Louis Lagrange provided an alternate form for the remainder of a Taylor series in his 1797 work Théorie des fonctions analytiques, as follows.

Proof.

Note first that equation (7.2.1) is true when \(x=a\text{,}\) as both sides reduce to \(0\) (in that case, \(c=x=a\)). We will prove the case where \(a\lt x\text{;}\) Problem 7.2.2 asks you to prove the case \(x\lt a\text{,}\) thereby finishing the proof.
First, we already have
\begin{equation*} f(x)-\left(\sum_{j=0}^n\frac{f^{(j)}(a)}{j!}(x-a)^j\right)=\frac{1}{n!} \int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t} \end{equation*}
so it suffices to show that
\begin{equation*} \int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}=\,\frac{f^{\,(n+1)}(c)}{n+1}(x-a)^{n+1} \end{equation*}
for some \(c\) with \(c\in[\,a,x]\text{.}\) To this end, let
\begin{equation*} M=\max_{a\le t\le x}\left(f^{(n+1)}(t)\right) \end{equation*}
and
\begin{equation*} m=\min_{a\le t\le x}\left(f^{(n+1)}(t)\right)\text{.} \end{equation*}
Note that for all \(t\in[\,a,x]\text{,}\) we have \(m\leq f^{(n+1)}(t)\leq M\text{.}\) Since \(x-t\geq 0\text{,}\) this gives us
\begin{equation} m\left(x-t\right)^n\leq f^{(n+1)}(t)(x-t)^n\leq M(x-t)^n\tag{7.2.2} \end{equation}
and so
\begin{equation} \int_{t=a}^xm\left(x-t\right)^n\dx{ t}\leq\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\leq \int_{t=a}^xM(x-t)^n\dx{ t}\text{.}\tag{7.2.3} \end{equation}
Computing the outside integrals, we have
\begin{equation*} m\int_{t=a}^x\left(x-t\right)^n\dx{ t}\leq\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\leq M\int_{t=a}^x(x-t)^n\dx{ t} \end{equation*}
\begin{equation} m\frac{(x-a)^{n+1}}{n+1}\leq\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}\leq M\frac{(x-a)^{n+1}}{n+1}\tag{7.2.4} \end{equation}
\begin{equation*} m\leq\frac{\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}}{\left(\frac{(x-a)^{n+1}}{n+1} \right)}\leq M\text{.} \end{equation*}
Since
\begin{equation*} \frac{\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}}{\left(\frac{(x-a)^{n+1}}{n+1} \right)} \end{equation*}
is a value that lies between the maximum and minimum of \(f^{(n+1)}\) on \([\,a,x]\text{,}\) then by the Intermediate Value Theorem, there must exist a number \(c\in[\,a,x]\) with
\begin{equation*} f^{(n+1)}(c)=\frac{\int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}}{\left( \frac{(x-a)^{n+1}}{n+1}\right)}\text{.} \end{equation*}

This gives us
\begin{equation*} \int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}=\,\frac{f^{\,(n+1)}(c)}{n+1}(x-a)^{n+1}\text{.} \end{equation*}
And the result follows.
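The theorem only asserts that such a \(c\) exists. In a case where every derivative is known we can actually solve for it. The sketch below (our illustration; the choice \(f(t)=e^t\text{,}\) \(a=0\) is ours) uses the fact that then \(f^{(n+1)}(c)=e^c\text{,}\) so \(c=\ln\left(\frac{(n+1)!}{x^{n+1}}\left(e^x-\sum_{j=0}^n\frac{x^j}{j!}\right)\right)\text{:}\)

```python
import math

# Numeric sketch: for f(t) = e^t and a = 0, Lagrange's form says
# e^x - S_n(x) = e^c * x^(n+1)/(n+1)! for some c in [0, x].
# Every derivative is e^t, so we can solve for c and check that it
# lands in [0, x] -- and that it changes as n changes.
def lagrange_c_for_exp(x, n):
    remainder = math.exp(x) - sum(x**j / math.factorial(j)
                                  for j in range(n + 1))
    return math.log(remainder * math.factorial(n + 1) / x ** (n + 1))

cs = [lagrange_c_for_exp(1.0, n) for n in range(1, 6)]
print(cs)  # each c lies strictly between 0 and 1, and varies with n
```

This makes concrete the caution raised below: \(c\) depends on \(n\text{,}\) so it cannot be treated as a single fixed number.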

Problem 7.2.2.

Prove Theorem 7.2.1 for the case where \(x\lt a\text{.}\)
Hint.
Note that
\begin{equation*} \int_{t=a}^xf^{(n+1)}(t)(x-t)^n\dx{ t}=(-1)^{n+1}\int_{t=x}^af^{(n+1)}(t)(t-x)^n\dx{ t}\text{.} \end{equation*}
Use the same argument on this integral. It will work out in the end. Really! You just need to keep track of all of the negatives.
This is not Lagrange’s proof. In particular he did not use the Integral Form of the remainder. However, this is similar to Lagrange’s proof in that he also used the Intermediate Value Theorem (IVT) and Extreme Value Theorem (EVT) much as we did just now.
In Lagrange’s day, these were taken to be obviously true for a continuous function and we have followed Lagrange’s lead by assuming the IVT and the EVT. However, in mathematics we need to keep our assumptions few and simple. The IVT and the EVT do not satisfy this need in the sense that both can be proved from simpler ideas. We will return to this in Chapter 9.
Also, a word of caution about this: Lagrange’s form of the remainder is \(\frac{f^{\,(n+1)}(c)}{(n+1)!}(x-a)^{n+1}\text{,}\) where \(c\) is some number between \(a\) and \(x\text{.}\) The proof does not indicate what this \(c\) might be and, in fact, this \(c\) changes as \(n\) changes. All we know is that this \(c\) lies between \(a\) and \(x\text{.}\) To illustrate this issue and its potential dangers, consider the following problem where we have a chance to compute the value of \(c\) for the function \(f(x)=\frac{1}{1+x}\text{.}\)

Problem 7.2.3.

This problem investigates the Taylor series representation
\begin{equation*} \frac{1}{1+x}=1-x+x^2-x^3+\cdots\text{.} \end{equation*}
(a)
Use the identity
\begin{equation*} \frac{1-(-x)^{n+1}}{1+x}=1-x+x^2-x^3+\cdots+(-x)^n \end{equation*}
to compute the remainder
\begin{equation*} \frac{1}{1+x}-\left(1-x+x^2-x^3+\cdots+(-x)^n\right) \end{equation*}
exactly, in terms of \(x\text{.}\)
(b)
Evaluate the remainder in part (a) when \(x=1\) and explain how this shows that the Taylor series does not converge to \(\frac{1}{1+x}\) when \(x=1\text{.}\)
(c)
Compare the remainder in part (a) with the Lagrange form of the remainder to determine what \(c\) is when \(x=1\text{.}\)
(d)
Consider the following argument: If \(f(x)=\frac{1}{1+x}\text{,}\) then
\begin{equation*} f^{(n+1)}(c)=\frac{(-1)^{n+1}(n+1)!}{(1+c)^{n+2}} \end{equation*}
so the Lagrange form of the remainder when \(x=1\) is given by
\begin{equation*} \frac{(-1)^{n+1}(n+1)!}{(n+1)!(1+c)^{n+2}}=\frac{(-1)^{n+1}}{(1+c)^{n+2}} \end{equation*}
where \(c\in[\,0,1]\text{.}\) Observe from part (c) that \(c\neq 0\text{.}\) Thus \(1+c\gt1\) and so by Problem 6.1.7 of Chapter 6, the Lagrange remainder converges to \(0\) as \(n\rightarrow\infty\text{.}\) This argument would suggest that the Taylor series does converge to \(\frac{1}{1+x}\) for \(x=1\text{.}\) However, we know from part (a) that this is incorrect. What is wrong with the argument?
Even though there are potential dangers in misusing the Lagrange form of the remainder, it is a useful form. For example, armed with the Lagrange form of the remainder, we can prove the following theorem.

Proof.

First note that the binomial series is, in fact, the Taylor series for the function \(f(x)=\sqrt{1+x}\) expanded about \(a=0\text{.}\) If we let \(x\) be a fixed number with \(0\leq x\leq 1\text{,}\) then it suffices to show that the Lagrange form of the remainder converges to \(0\text{.}\) With this in mind, notice that
\begin{equation*} f^{(n+1)}(x)=\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)\left(1+x\right)^{\frac{1}{2}-(n+1)} \end{equation*}
and so the Lagrange form of the remainder is
\begin{equation*} \frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}= \frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\cdots \left(\frac{1}{2}-n\right)}{(n+1)!}\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}} \end{equation*}
where \(c\) is some number between \(0\) and \(x\text{.}\) Since \(0\leq x\leq 1\) and \(1+c\geq 1\text{,}\) then we have \(\frac{1}{1+c}\leq 1\text{,}\) and so
\begin{align*} 0\amp \leq \left|\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{(n+1)!}\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}}\right|\\ \amp =\frac{\left(\frac{1}{2}\right)\left(1-\frac{1}{2}\right)\cdots\left(n-\frac{1}{2}\right)}{(n+1)!}\frac{x^{n+1}}{(1+c)^{n+\frac{1}{2}}}\\ \amp =\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\left(\frac{5}{2}\right)\cdots\left(\frac{2n-1}{2}\right)}{(n+1)!}\left(x^{n+1}\right)\frac{1}{(1+c)^{n+\frac{1}{2}}}\\ \amp \leq\frac{1\cdot 1\cdot 3\cdot5\cdot\,\cdots\,\cdot\left(2n-1\right)}{2^{^{n+1}}(n+1)!}\\ \amp =\frac{1\cdot 3\cdot 5\cdot\,\cdots\,\cdot\left(2n-1\right)\cdot 1}{2\cdot4\cdot 6\cdot\,\cdots\,\cdot 2n\cdot\left(2n+2\right)}\\ \amp =\frac{1}{2}\cdot\frac{3}{4}\cdot\frac{5}{6}\cdot\cdots\,\cdot\frac{2n-1}{2n}\cdot\frac{1}{2n+2}\\ \amp \leq\frac{1}{2n+2}\text{.} \end{align*}
Since \(\limitt{n}{\infty}{\frac{1}{2n+2}}=0=\limitt{n}{\infty}{0}\text{,}\) then by the Squeeze Theorem,
\begin{align*} \limit{n}{\infty}{\abs{\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}}}=0, \amp{}\amp{}\text{ so } \amp{}\amp{}\limit{n}{\infty}{\left(\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}\right)}=0. \end{align*}
Thus the Taylor series
\begin{equation*} 1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3+\cdots \end{equation*}
converges to \(\sqrt{1+x}\) for \(0\leq x\leq 1\text{.}\)
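The bound \(\frac{1}{2n+2}\) just derived can be checked numerically. In this sketch (our illustration), the binomial partial sums are built from the recurrence \(\binom{1/2}{j}=\binom{1/2}{j-1}\cdot\frac{\frac{1}{2}-(j-1)}{j}\text{,}\) and at \(x=1\) the error indeed stays under \(\frac{1}{2n+2}\text{:}\)

```python
import math

# Numeric sketch: at x = 1 the difference between sqrt(2) and the n-th
# partial sum of the binomial series should be at most 1/(2n+2).
def binomial_partial(x, n):
    total, coeff = 1.0, 1.0
    for j in range(1, n + 1):
        coeff *= (0.5 - (j - 1)) / j   # builds (1/2 choose j) term by term
        total += coeff * x**j
    return total

for n in (5, 20, 80):
    err = abs(math.sqrt(2.0) - binomial_partial(1.0, n))
    print(n, err, 1 / (2 * n + 2))  # the error respects the bound
```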
Unfortunately, this proof will not work for \(-1\lt x\lt 0\text{.}\) In this case, the fact that \(x\leq c\leq 0\) makes \(1+c\leq 1\text{.}\) Thus \(\frac{1}{1+c}\geq 1\) and so the inequality
\begin{equation*} \frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\left(\frac{5}{2}\right)\cdots\left(\frac{2n-1}{2}\right)}{(n+1)!}\frac{|x|^{n+1}}{(1+c)^{n+\frac{1}{2}}}\leq\frac{1\cdot 1\cdot 3\cdot 5\cdot\,\cdots\,\cdot\left(2n-1\right)}{2^{^{n+1}}(n+1)!} \end{equation*}
may not hold.

Problem 7.2.5.

Show that if \(-\frac{1}{2}\leq x\leq c\leq 0\text{,}\) then \(|\frac{x}{1+c}|\leq 1\) and modify the above proof to show that the binomial series converges to \(\sqrt{1+x}\) for \(x\in\left[-\frac{1}{2},0\right]\text{.}\)
To take care of the case where \(-1\lt x\lt -\frac{1}{2}\text{,}\) we will use yet another form of the remainder for Taylor series. However before we tackle that, we will use the Lagrange Form of the remainder to resolve a puzzle that we mentioned in Chapter 4. Recall that we noticed that the series representation
\begin{equation*} \frac{1}{1+x}=1-x+x^2-x^3+\cdots \end{equation*}
did not work when \(x=1\text{,}\) however we noticed that the series obtained by integrating term by term did seem to converge to the antiderivative of \(\frac{1}{1+x}\text{.}\) Specifically, we have the Taylor series
\begin{equation*} \ln\left(1+x\right)=x-\frac{1}{2}x^2+\frac{1}{3}x^3-\cdots\text{.} \end{equation*}
Substituting \(x=1\) into this provided the convergent series \(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots\text{.}\) We made the claim that this, in fact, converges to \(\ln 2\text{,}\) but that this was not obvious. The Lagrange form of the remainder gives us the machinery to prove this.
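Before proving it, a numerical look (our illustration, not a proof) shows the partial sums of \(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots\) creeping toward \(\ln 2\text{,}\) though rather slowly:

```python
import math

# Numeric sketch: partial sums of the alternating harmonic series
# approach ln 2 slowly.
def alt_harmonic(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 1000, 100_000):
    print(n, alt_harmonic(n), math.log(2))
```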

Problem 7.2.6.

(a)
Compute the Lagrange Form of the remainder for the Maclaurin series of \(\ln\left(1+x\right)\text{.}\)
(b)
Show that when \(x=1\text{,}\) the Lagrange Form of the remainder converges to \(0\) and so the equation
\begin{equation*} \ln 2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots \end{equation*}
is actually correct.

Section 7.3 Cauchy’s Form of the Remainder

In his 1823 work, Résumé des leçons données à l’École royale polytechnique sur le calcul infinitésimal, Augustin Cauchy provided another form of the remainder for Taylor series.
Figure 7.3.1. Augustin Cauchy (1789–1857)
Using Cauchy’s form of the remainder, we can prove that the binomial series
\begin{equation*} 1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3+\cdots \end{equation*}
converges to \(\sqrt{1+x}\) for \(x\in(-1,0).\) (Strictly speaking we only need to show this for \(x\in(-1,-1/2) \text{.}\) We covered the case \(x\in [-1/2,0]\) in Problem 7.2.5.)
With this in mind, let \(x\) be a fixed number with \(-1\lt x\lt 0\) and consider that the binomial series is the Maclaurin series for the function \(f(x)=(1+x)^{\frac{1}{2}}\text{.}\) As we saw before,
\begin{equation*} f^{(n+1)}(t)=\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)\left(1+t\right)^{\frac{1}{2}-(n+1)}\text{,} \end{equation*}
so the Cauchy form of the remainder is given by
\begin{equation*} \frac{f^{\left(n+1\right)}\left(c\right)}{n!}{\left(x-c\right)}^n\left(x-0\right)=\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\dots \left(\frac{1}{2}-n\right)}{n!}\frac{{\left(x-c\right)}^n}{{\left(1+c\right)}^{n+\frac{1}{2}}}\cdot x \end{equation*}
where \(c\) is some number with \(x\le c\le 0\text{.}\) Thus we have
\begin{equation*} 0\le \left|\frac{f^{\left(n+1\right)}\left(c\right)}{n!}{\left(x-c\right)}^n\left(x-0\right)\right|=\left|\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\dots \left(\frac{1}{2}-n\right)}{n!}\frac{{\left(x-c\right)}^n}{{\left(1+c\right)}^{n+\frac{1}{2}}}\cdot x\right|\text{.} \end{equation*}
Notice that if \(-1\lt x\leq c\text{,}\) then \(0\lt 1+x\leq 1+c\text{.}\) Thus \(0\lt \frac{1}{1+c}\leq\frac{1}{1+x}\) and \(\frac{1}{\sqrt{1+c}}\leq\frac{1}{\sqrt{1+x}}\text{.}\) Hence we have
\begin{equation*} 0\leq\left|\frac{\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\cdots\left(\frac{1}{2}-n\right)}{n!}\frac{(x-c)^nx}{(1+c)^{n+\frac{1}{2}}}\right|\leq\left(\frac{c-x}{1+c}\right)^n\frac{|\,x|}{\sqrt{1+x}}\text{.} \end{equation*}

Problem 7.3.4.

Suppose \(-1\lt x\leq c\leq 0\) and consider the function \(g(c)=\frac{c-x}{1+c}\text{.}\) Show that on \([x,0]\text{,}\) \(g\) is increasing and use this to conclude that for \(-1\lt x\leq c\leq 0\text{,}\)
\begin{equation*} \frac{c-x}{1+c}\leq|x|\text{.} \end{equation*}
Use this fact to finish the proof that the binomial series converges to \(\sqrt{1+x}\) for \(-1\lt x\lt 0\text{.}\)
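Although the proof is the point, it is reassuring to see the convergence numerically even for \(x\) close to \(-1\text{.}\) This sketch (our illustration, using the sample point \(x=-0.9\)) watches the binomial partial sums approach \(\sqrt{1+x}\text{:}\)

```python
import math

# Numeric sketch: the binomial series converges to sqrt(1+x) even for
# x near -1, just slowly.  Here x = -0.9.
def binomial_partial(x, n):
    total, coeff = 1.0, 1.0
    for j in range(1, n + 1):
        coeff *= (0.5 - (j - 1)) / j   # builds (1/2 choose j) term by term
        total += coeff * x**j
    return total

x = -0.9
for n in (10, 100, 1000):
    print(n, abs(math.sqrt(1 + x) - binomial_partial(x, n)))
```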
The proofs of both the Lagrange Form and the Cauchy Form of the remainder of Taylor series made use of two crucial facts about continuous functions. First, we assumed the Extreme Value Theorem: Any continuous function on a closed bounded interval assumes its maximum and minimum somewhere on the interval. Second, we assumed that any continuous function satisfied the Intermediate Value Theorem: If a continuous function takes on two different values, then it must take on any value between those two values.
Mathematicians in the late \(1700\)s and early \(1800\)s typically considered these facts to be intuitively obvious. This was natural since our understanding of continuity at that time was solely intuitive. Intuition is a useful tool but as we have seen before it is also unreliable. For example consider the following function
\begin{align} f(x)= \begin{cases} x\sin\left(\frac{1}{x}\right),\amp \text{if } x\neq 0,\\ 0, \amp \text{ if } x=0 \end{cases} \text{.}\tag{7.3.2} \end{align}
Is this function continuous at \(0\text{?}\) Near zero its graph looks like this:
Figure 7.3.6. The graph of \(f(x)= \begin{cases} x\sin\left(\frac{1}{x}\right),\amp \text{if } x\neq 0 \\ 0, \amp \text{ if } x=0 \end{cases} \) near \(0\text{.}\)
It is impossible to show in the sketch, but this graph oscillates infinitely often as \(x\) nears zero.
No matter what your guess may be, it is clear that it is hard to analyze such a function armed with only an intuitive notion of continuity. We will revisit this example in the next chapter.
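A few sample values (our illustration) hint at what intuition alone cannot settle: however wildly \(\sin\left(\frac{1}{x}\right)\) oscillates, the factor \(x\) squeezes \(f(x)\) between \(-|x|\) and \(|x|\text{:}\)

```python
import math

# Numeric sketch: samples of f(x) = x sin(1/x) near 0 are squeezed
# toward 0 because |f(x)| <= |x|.
def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

xs = [1.3 * 10 ** (-k) for k in range(1, 6)]
print([f(x) for x in xs])  # each sample has magnitude at most |x|
```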
As with convergence, continuity is more subtle than it first appears.
We put convergence on solid ground by providing a completely analytic definition in the previous chapter. What we need to do in the next chapter is provide a completely rigorous definition for continuity.