
Chapter 11 Back to Power Series

Section 11.1 Uniform Convergence

We have developed precise analytic definitions of the convergence of a sequence and continuity of a function and we have used these to prove the EVT and IVT for continuous functions. We have also carefully defined the derivative and the integral and used those definitions to prove the Fundamental Theorem of Calculus, which was instrumental in developing the three forms of the remainder for Taylor Series.
We now return to the question that originally motivated these definitions, “Why are Taylor series well behaved, but Fourier series are not necessarily?” More precisely, we mentioned that whenever a power series converges then the function it converges to is continuous and that if we differentiate or integrate a convergent power series term by term then the resulting series will converge to the derivative or integral of the original series, but this was not always the case for Fourier series.
We saw in our Interregnum that the graph of
\begin{equation} f(x) = \frac{4}{\pi}\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right)\right)\tag{11.1.1} \end{equation}
(specifically, formula (5.1.4)) is a square wave, jumping back and forth between \(1\) and \(-1\text{.}\)
But if we consider the sequence of partial sums of \(f(x)\) in equation (11.1.1):
\begin{align*} f_1(x)=\amp \frac{4}{\pi}\cos\left(\pi x\right)\\ f_2(x)=\amp \frac{4}{\pi}\left(\cos \left(\pi x\right)-\frac{1}{3}\cos\left( 3\pi x\right)\right)\\ f_3(x)=\amp \frac{4}{\pi}\left(\cos\left(\pi x\right)-\frac{1}{3}\cos\left(3\pi x\right)+\frac{1}{5}\cos\left(5\pi x\right)\right)\\ \amp \vdots \end{align*}
we see that the sequence of continuous functions \(\left(f_n\right)\) converges to the non–continuous function \(f\) for each real number \(x\text{.}\) This didn’t happen with Taylor series: the partial sums of a Taylor series were polynomials and hence continuous, and what they converged to was continuous as well.
The difficulty is quite delicate and it took mathematicians quite a while to determine that there are two very subtly different ways that a sequence of functions can converge: pointwise or uniformly. This distinction was touched upon by Niels Henrik Abel (1802–1829) in \(1826\) while studying the domain of convergence of a power series. However, the necessary formal definitions were not made explicit until Weierstrass did so in his \(1841\) paper Zur Theorie der Potenzreihen (On the Theory of Power Series), which was not published until it appeared in his collected works in \(1894\text{.}\)
Figure 11.1.1. Niels Henrik Abel
It will be instructive to take a look at an argument that doesn’t quite work before looking at the formal definitions we will need. In \(1821\) Augustin Cauchy “proved” that an infinite sum of continuous functions is continuous. Of course, it is obvious (to us) that this is not true because we’ve seen several counterexamples. But Cauchy, who was a first-rate mathematician, was so sure of the correctness of his argument that he included it in his textbook on analysis, Cours d’analyse \((1821)\text{.}\)

Problem 11.1.2.

Find the flaw in the following “proof” that if \(f_1, f_2, f_3, \ldots\) are all continuous at \(a\text{,}\) then \(f=\sum_{n=1}^\infty f_n\) is also continuous at \(a\text{.}\)
Let \(\eps>0\text{.}\) Since \(f_n\) is continuous at \(a,\) we can choose \(\delta_n>0\) such that if \(\abs{x-a}\lt \delta_n,\) then \(\abs{f_n(x)-f_n(a)}\lt \frac{\eps}{2^n}\text{.}\) Let \(\delta=\inf(\delta_1,\delta_2,\delta_3,\ldots)\text{.}\) If \(\abs{x-a}\lt \delta\) then
\begin{align*} \abs{f(x)-f(a)} \amp = \abs{\sum_{n=1}^\infty f_n(x) - \sum_{n=1}^\infty f_n(a) }\\ \amp = \abs{\sum_{n=1}^\infty \left(f_n(x)-f_n(a)\right) }\\ \amp \le \sum_{n=1}^\infty \abs{f_n(x)-f_n(a) }\\ \amp \le \sum_{n=1}^\infty \frac{\eps}{2^n}\\ \amp \le \eps\sum_{n=1}^\infty \frac{1}{2^n}\\ \amp = \eps. \end{align*}
Thus \(f\) is continuous at \(a\text{.}\)

Definition 11.1.3. Pointwise Convergence.

Let \(S\) be a subset of the real number system and let \(\left(f_n\right)=\left(f_1,f_2,f_3,\,\ldots\right)\) be a sequence of functions defined on \(S\text{.}\) Let \(f\) be a function defined on \(S\) as well. We say that \(\left(f_n\right)\) converges to \(f\) pointwise on \(S\) provided that for all \(x\in S,\) the sequence of real numbers \(\left(f_n(x)\right)\) converges to the number \(f(x)\text{.}\) In this case we write \(f_n\ptwise f\) on \(S\text{.}\)
Symbolically, we have \(f_n\ptwise f\text{ on } S\Leftrightarrow \forall\,x\in S,\forall\ \eps>0,\,\exists\ N\) such that \(\left(n>N \Rightarrow|f_n(x)-f(x)|\lt \eps\right)\text{.}\)
This is the type of convergence we have been observing to this point. By contrast we have the following new definition.

Definition 11.1.4. Uniform Convergence.

Let \(S\) be a subset of the real number system and let \(\left(f_n\right)=\left(f_1,f_2,f_3,\,\ldots\right)\) be a sequence of functions defined on \(S\text{.}\) Let \(f\) be a function defined on \(S\) as well. We say that \(\left(f_n\right)\) converges to \(f\) uniformly on \(S\) provided \(\forall\ \eps>0,\,\exists\ N\) such that \(n>N\Rightarrow|f_n(x)-f(x)|\lt \eps\) for all \(x\in S\text{.}\)
In this case we write \(f_n\unif f\) on \(S\text{.}\)
The difference between these two definitions is subtle. In pointwise convergence, we are given a fixed \(x\in S\) and an \(\eps>0\text{.}\) Then the task is to find an \(N\) that works for that particular \(x\) and that \(\eps\text{.}\) In uniform convergence, we are given \(\eps>0\) and must find a single \(N\) that works for that particular \(\eps\) but also simultaneously (uniformly) for all \(x\in S\text{.}\) Clearly uniform convergence implies pointwise convergence, since an \(N\) which works uniformly for all \(x\) works for each individual \(x\) as well. However, the converse is not true. This will become evident, but first consider the following example.
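To see concretely how the \(N\) in pointwise convergence can depend on \(x\text{,}\) take \(f_n(x)=x^n\) on the interval \((0,1)\text{.}\) For a fixed \(x\) and a given \(0\lt \eps\lt 1\) we need
\begin{equation*} x^n\lt \eps\text{, that is, } n>\frac{\ln\eps}{\ln x}\text{,} \end{equation*}
and the right-hand side grows without bound as \(x\) gets close to \(1\text{.}\) No single \(N\) can serve every such \(x\) at once, which is exactly what uniform convergence would demand.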

Problem 11.1.5.

Let \(0\lt b\lt 1\) and consider the sequence of functions \(\left(f_n\right)\) defined on \([0,b]\) by \(f_n(x)=x^n\text{.}\) Use the definition to show that \(f_n\unif 0\) on \([0,b]\text{.}\)
Hint.
\(|x^n-0|=x^n\leq b^n\text{.}\)
Uniform convergence is not only dependent on the sequence of functions but also on the set \(S\text{.}\) For example, the sequence \(\left(f_n(x)\right)=\left(x^n\right)_{n=0}^\infty\) of Problem 11.1.5 does not converge uniformly on \([0,1]\text{.}\) We could use the negation of the definition to prove this, but instead, it will be a consequence of the following theorem.

Theorem 11.1.6.

Suppose \(\left(f_n\right)\) is a sequence of functions, each of which is continuous on an interval \(I\text{,}\) and \(f_n\unif f\) on \(I\text{.}\) Then \(f\) is continuous on \(I\text{.}\)

Sketch of Proof.

Let \(a\in I\) and let \(\eps>0\text{.}\) The idea is to use uniform convergence to replace \(f\) with one of the known continuous functions \(f_n\text{.}\) Specifically, by uncancelling, we can write
\begin{align*} \left|f(x)-f(a)\right|\amp =\left|f(x)-f_n(x)+f_n(x)-f_n(a)+f_n(a)-f(a)\right|\\ \amp \leq \left|f(x)-f_n(x)\right|+\left|f_n(x)-f_n(a)\right|+\left|f_n(a)-f(a)\right| \end{align*}
If we choose \(n\) large enough, then we can make the first and last terms as small as we wish, noting that the uniform convergence makes the first term uniformly small for all \(x\text{.}\) Once we have a specific \(n,\) then we can use the continuity of \(f_n\) to find a \(\delta>0\) such that the middle term is small whenever \(x\) is within \(\delta\) of \(a\text{.}\)
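Filling in the details, the sketch might be organized as follows: given \(\eps>0\text{,}\) uniform convergence provides an \(N\) such that \(\abs{f_n(x)-f(x)}\lt \frac{\eps}{3}\) for all \(n>N\) and all \(x\in I\text{.}\) Fixing one such \(n\text{,}\) continuity of \(f_n\) at \(a\) provides a \(\delta>0\) such that \(\abs{x-a}\lt \delta\) implies \(\abs{f_n(x)-f_n(a)}\lt \frac{\eps}{3}\text{.}\) For such \(x\) the three terms above combine to give
\begin{equation*} \abs{f(x)-f(a)}\lt \frac{\eps}{3}+\frac{\eps}{3}+\frac{\eps}{3}=\eps\text{.} \end{equation*}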

Problem 11.1.8.

Consider the sequence of functions \(\left(f_n\right)\) defined on \([0,1]\) by \(f_n(x)=x^n\text{.}\) Show that the sequence converges to the function
\begin{equation*} f(x)= \begin{cases}0\amp \text{ if } x\in[0,1)\\ 1\amp \text{ if } x=1 \end{cases} \end{equation*}
pointwise on \(\,[0,1],\) but not uniformly on \([0,1]\text{.}\)
Notice that for the Fourier series at the beginning of this chapter,
\begin{equation*} f(x)=\frac{4}{\pi}\left(\cos\left(\pi x\right)-\frac{1}{3}\cos\left( 3\pi x\right)+\frac{1}{5}\cos\left(5\pi x\right)-\frac{1}{7}\cos\left(7\pi x\right)+\cdots\right) \end{equation*}
the convergence cannot be uniform on \((-\infty,\infty),\) as the function \(f\) is not continuous. This never happens with power series, since a power series converges to a continuous function whenever it converges. Although it is not yet obvious that power series converge uniformly, we will soon see that they do and that uniform convergence is what guarantees that they converge to continuous functions. We will also see that uniform convergence is what allows us to integrate and differentiate a power series term by term.

Section 11.2 Uniform Convergence: Integrals and Derivatives

We saw in the previous section that if \(\left(f_n\right)\) is a sequence of continuous functions which converges uniformly to \(f\) on an interval, then \(f\) must be continuous on the interval as well. This was not necessarily true if the convergence was only pointwise. For example, we saw in equation (11.1.1) a sequence of continuous functions defined on \((-\infty,\infty)\) converging pointwise to a function that was not continuous on \((-\infty,\infty)\text{.}\) Uniform convergence guarantees some other nice properties as well.

Theorem 11.2.1.

Suppose \(\left(f_n\right)\) is a sequence of continuous functions which converges uniformly to \(f\) on the interval \([a,b]\text{.}\) Then
\begin{equation*} \limit{n}{\infty}{\left(\int_{x=a}^b f_n(x)\dx{x}\right)}=\int_{x=a}^b f(x)\dx{x}\text{.} \end{equation*}

Problem 11.2.2.

Prove Theorem 11.2.1.
Hint.
For \(\eps>0,\) we need to make \(|f_n(x)-f(x)|\lt \frac{\eps}{b-a},\) for all \(x\in[a,b]\text{.}\)
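Granting that bound for all \(n\) beyond some \(N\) (this is where uniform convergence is used), the integrals line up as follows:
\begin{equation*} \abs{\int_{x=a}^b f_n(x)\dx{x}-\int_{x=a}^b f(x)\dx{x}}\leq\int_{x=a}^b\abs{f_n(x)-f(x)}\dx{x}\leq\frac{\eps}{b-a}\,(b-a)=\eps\text{.} \end{equation*}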
Notice that this theorem is not true if the convergence is only pointwise, as illustrated by the following.

Problem 11.2.3.

Consider the sequence of functions \(\left(f_n\right)\) given by
\begin{equation*} f_n(x)= \begin{cases}n\amp \text{ if } x\in\left(0,\frac{1}{n}\right)\\ 0\amp \text{ otherwise } \end{cases} \text{.} \end{equation*}
(a)
Show that \(f_n\ptwise 0\) on \([0,1],\) but
\begin{equation*} \limit{n}{\infty}{\left(\int_{x=0}^1f_n(x)\dx{x}\right)}\neq\int_{x=0}^1 0\dx{x}\text{.} \end{equation*}
(b)
Can the convergence be uniform? Explain.
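For orientation, here is the computation which drives part (a), with the justifications left to you: each \(f_n\) takes the value \(n\) on an interval of length \(\frac{1}{n}\) and is \(0\) elsewhere, so
\begin{equation*} \int_{x=0}^1 f_n(x)\dx{x}=n\cdot\frac{1}{n}=1 \text{ for every } n\text{, while } \int_{x=0}^1 0\dx{x}=0\text{.} \end{equation*}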
Applying the result of Problem 11.2.2 to series of functions, we have the following.

Corollary 11.2.4.

Suppose \(\left(f_n\right)\) is a sequence of continuous functions and \(\sum_{n=0}^\infty f_n\) converges uniformly to \(f\) on the interval \([a,b]\text{.}\) Then
\begin{equation*} \int_{x=a}^b f(x)\dx{x}=\sum_{n=0}^\infty\left(\int_{x=a}^b f_n(x)\dx{x}\right)\text{.} \end{equation*}

Problem 11.2.5.

Prove Corollary 11.2.4.
Hint.
Remember that
\begin{equation*} \displaystyle \sum_{n=0}^\infty f_n(x) = \limit{N}{\infty}{\left(\sum_{n=0}^N f_n(x)\right)}. \end{equation*}
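That is, the hint reduces the corollary to Theorem 11.2.1 applied to the sequence of partial sums, each of which is continuous. Schematically, with each step to be justified:
\begin{align*} \int_{x=a}^b f(x)\dx{x}\amp =\int_{x=a}^b\limit{N}{\infty}{\left(\sum_{n=0}^N f_n(x)\right)}\dx{x}\\ \amp =\limit{N}{\infty}{\left(\sum_{n=0}^N\int_{x=a}^b f_n(x)\dx{x}\right)}\\ \amp =\sum_{n=0}^\infty\int_{x=a}^b f_n(x)\dx{x}\text{.} \end{align*}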
Surprisingly, the issue of term–by–term differentiation depends not on the uniform convergence of \(\left(f_n\right),\) but on the uniform convergence of \(\left(f^\prime_n\right)\text{.}\) More precisely, we have the following result.

Theorem 11.2.6.

Suppose \(\left(f_n\right)\) converges pointwise to \(f\) on an interval \(I\) and that each \(f^\prime_n\) is continuous on \(I\text{.}\) If \(\left(f^\prime_n\right)\) converges uniformly on \(I\text{,}\) then \(f\) is differentiable and \(f^\prime(x)=\limit{n}{\infty}{f^\prime_n(x)}\) for all \(x\in I\text{.}\)

Problem 11.2.7.

Prove Theorem 11.2.6.
Hint.
Let \(a\) be an arbitrary fixed point in \(I\) and let \(x\in I\text{.}\) By the Fundamental Theorem of Calculus, we have
\begin{equation*} \int_{t=a}^x f^\prime_n(t)\dx{t}=f_n(x)-f_n(a) \text{.} \end{equation*}
Take the limit of both sides and differentiate with respect to \(x\text{.}\)
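Schematically, if we let \(g\) denote the uniform limit of \(\left(f^\prime_n\right)\text{,}\) then taking limits on both sides (the left-hand side converges by Theorem 11.2.1) gives
\begin{equation*} \int_{t=a}^x g(t)\dx{t}=f(x)-f(a)\text{,} \end{equation*}
and since \(g\) is continuous (being the uniform limit of continuous functions), differentiating both sides with the Fundamental Theorem of Calculus yields \(f^\prime(x)=g(x)\text{.}\)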
Applying Theorem 11.2.6 to series of functions gives the following result.

Corollary 11.2.8.

Suppose \(\sum_{n=0}^\infty f_n\) converges pointwise to \(f\) on an interval \(I\) and that each \(f^\prime_n\) is continuous on \(I\text{.}\) If \(\sum_{n=0}^\infty f^\prime_n\) converges uniformly on \(I\text{,}\) then
\begin{equation*} f^\prime(x)=\sum_{n=0}^\infty f^\prime_n(x) \text{ for all } x\in I\text{.} \end{equation*}
Taken together, the above results say that a power series can be differentiated and integrated term–by–term as long as the convergence is uniform. Fortunately, it is in general true that when a power series converges, the convergence of it and of its integrated and differentiated series is also (almost) uniform.
However, we do not yet have all of the tools necessary to see this. To build these tools requires that we return briefly to our study, begun in Chapter 6, of the convergence of sequences.

Subsection 11.2.1 Cauchy Sequences

Knowing that a sequence or a series converges and knowing what it converges to are typically two different matters. For example, we know that \(\sum_{n=0}^\infty\frac{1}{n!}\) and \(\sum_{n=0}^\infty\frac{1}{n!\,n!}\) both converge. The first converges to \(e,\) which has meaning in other contexts. We don’t know what the second one converges to, other than to say it converges to \(\sum_{n=0}^\infty\frac{1}{n!\,n!}\text{.}\) In fact, that question might not have much meaning without some other context in which \(\sum_{n=0}^\infty\frac{1}{n!\,n!}\) arises naturally. Be that as it may, we need to look at the convergence of a series (or a sequence for that matter) without necessarily knowing what it might converge to. We make the following definition.
Definition 11.2.10. Cauchy Sequence.
Let \(\left(s_n\right)\) be a sequence of real numbers. We say that \(\left(s_n\right)\) is a Cauchy sequence if for any \(\eps>0,\) there exists a real number \(N\) such that if \(m,n>N,\) then \(|s_m-s_n|\lt \eps\text{.}\)
Notice that this definition says that the terms in a Cauchy sequence get arbitrarily close to each other and that there is no reference to getting close to any particular fixed real number. Furthermore, you have already seen lots of examples of Cauchy sequences as illustrated by the following result.
Theorem 11.2.11.

Suppose the sequence \(\left(s_n\right)\) converges to \(s\text{.}\) Then \(\left(s_n\right)\) is a Cauchy sequence.

Intuitively, Theorem 11.2.11 makes sense. If the terms of a sequence are getting arbitrarily close to \(s\text{,}\) then they should be getting arbitrarily close to each other as well. This is the basis of the proof.
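The triangle-inequality computation which makes this precise: given \(\eps>0\text{,}\) choose \(N\) such that \(n>N\) implies \(\abs{s_n-s}\lt \frac{\eps}{2}\text{.}\) Then for \(m,n>N\text{,}\)
\begin{equation*} \abs{s_m-s_n}=\abs{s_m-s+s-s_n}\leq\abs{s_m-s}+\abs{s-s_n}\lt \frac{\eps}{2}+\frac{\eps}{2}=\eps\text{.} \end{equation*}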
So any convergent sequence is automatically Cauchy. For the real number system, the converse is also true and, in fact, is equivalent to any of our completeness axioms: the NIP, the Bolzano–Weierstrass Theorem, or the LUB Property. Thus, this could have been taken as our completeness axiom and we could have used it to prove the others. One of the most convenient ways to prove this converse is to use the Bolzano–Weierstrass Theorem. To do that, we must first show that a Cauchy sequence must be bounded. This result is reminiscent of the fact that a convergent sequence is bounded (Lemma 6.2.7 of Chapter 6) and the proof is very similar.
Lemma 11.2.13.

Suppose \(\left(s_n\right)\) is a Cauchy sequence. Then \(\left(s_n\right)\) is bounded.

Problem 11.2.14.

Prove Lemma 11.2.13.
Hint.
This is similar to Problem 6.2.8 of Chapter 6. There exists \(N\) such that if \(m,n>N\text{,}\) then \(|s_n-s_m|\lt 1\text{.}\) Choose a fixed \(m>N\) and let \(B=\max\left(\abs{s_1}, \abs{s_2}, \ldots, \abs{s_{\lceil N\rceil}}, \abs{s_m}+1\right)\text{.}\)
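With those choices the estimate runs as follows: for \(n>N\text{,}\)
\begin{equation*} \abs{s_n}=\abs{s_n-s_m+s_m}\leq\abs{s_n-s_m}+\abs{s_m}\lt 1+\abs{s_m}\leq B\text{,} \end{equation*}
while for \(n\leq N\) the bound \(\abs{s_n}\leq B\) holds by the definition of \(B\text{.}\)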
Theorem 11.2.15.

Suppose \(\left(s_n\right)\) is a Cauchy sequence of real numbers. Then there exists a real number \(s\) such that \(\limit{n}{\infty}{s_n}=s\text{.}\)

Sketch of Proof.
We know that \(\left(s_n\right)\) is bounded, so by the Bolzano–Weierstrass Theorem, it has a convergent subsequence \(\left(s_{n_k}\right)\) converging to some real number \(s\text{.}\) We have
\begin{equation*} \abs{s_n-s}=\abs{s_n-s_{n_k}+s_{n_k}-s}\leq \abs{s_n-s_{n_k}}+\abs{s_{n_k}-s}\text{.} \end{equation*}
If we choose \(n\) and \(n_k\) large enough, we should be able to make each term arbitrarily small.
Problem 11.2.16.

Turn the ideas of the preceding sketch into a formal proof of Theorem 11.2.15.

From Theorem 11.2.15 we see that every Cauchy sequence converges in \(\RR\text{.}\) Moreover the proof of this fact depends on the Bolzano–Weierstrass Theorem which, as we have seen, is equivalent to our completeness axiom, the Nested Interval Property. What this means is that if there is a Cauchy sequence which does not converge, then the NIP is not true. A natural question to ask is whether the converse holds: if every Cauchy sequence converges, does the NIP follow? In other words, is the convergence of Cauchy sequences also equivalent to our completeness axiom? The following theorem shows that the answer is yes.
Theorem 11.2.17.

Suppose that every Cauchy sequence of real numbers converges. Then the Nested Interval Property is true.

Problem 11.2.18.

Prove Theorem 11.2.17.
Hint.
If we start with two sequences \(\left(x_n\right)\) and \(\left(y_n\right)\text{,}\) satisfying all of the conditions of the NIP, you should be able to show that these are both Cauchy sequences.
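To elaborate on the hint: for \(m>n\) the NIP conditions give \(x_n\leq x_m\leq y_m\leq y_n\text{,}\) so
\begin{equation*} \abs{x_m-x_n}=x_m-x_n\leq y_n-x_n\text{,} \end{equation*}
and the right-hand side can be made arbitrarily small, since the lengths of the nested intervals converge to \(0\text{.}\) Thus \(\left(x_n\right)\) is a Cauchy sequence (and similarly for \(\left(y_n\right)\)), and its limit is a candidate for the number lying in all of the intervals.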
Taken together, Problem 11.2.16 and Problem 11.2.18 tell us that the following are equivalent: the Nested Interval Property, the Bolzano–Weierstrass Theorem, the Least Upper Bound Property, and the convergence of Cauchy sequences. Thus any one of these could have been taken as the completeness axiom of the real number system and then used to prove each of the others as a theorem according to the following dependency graph:
Since we can get from any node on the graph to any other, simply by following the implications (indicated with arrows), any one of these statements is logically equivalent to each of the others.
Problem 11.2.19.
Since the convergence of Cauchy sequences can be taken as the completeness axiom for the real number system \((\RR)\text{,}\) it does not hold for the rational number system \((\QQ)\text{,}\) since \(\QQ{}\) is not complete. Give an example of a Cauchy sequence of rational numbers which does not converge to a rational number.
If we apply the above ideas to series we obtain the following important result, which will provide the basis for our investigation of power series.
Theorem 11.2.20. The Cauchy Criterion.

The series \(\sum_{k=1}^\infty a_k\) converges if and only if for every \(\eps>0\) there exists a real number \(N\) such that if \(m>n>N\text{,}\) then \(\abs{\sum_{k=n+1}^m a_k}\lt \eps\text{.}\)

Problem 11.2.21.
Prove the Cauchy criterion.
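The pivotal observation is that for \(m>n\) the difference of the partial sums \(s_n=\sum_{k=1}^n a_k\) telescopes into a block of the series:
\begin{equation*} s_m-s_n=\sum_{k=n+1}^m a_k\text{,} \end{equation*}
so the Cauchy Criterion is just the statement that the series converges if and only if \(\left(s_n\right)\) is a Cauchy sequence (Theorems 11.2.11 and 11.2.15).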
At this point several of the convergence tests that you probably saw in Calculus are easily proved.
Problem 11.2.22. The \(n\)th Term Test.
Show that if \(\sum_{n=1}^\infty a_n\) converges then \(\limit{n}{\infty}{a_n}=0\text{.}\)
Problem 11.2.23. The Strong Cauchy Criterion.
Show that \(\displaystyle\sum_{k=1}^\infty a_k\) converges if and only if \(\limit{n}{\infty}{\sum_{k=n+1}^\infty a_k}=0\text{.}\)
Hint.
The hardest part of this problem is recognizing that it is really about the limit of a sequence as in Chapter 6.
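Concretely, if \(s_n=\sum_{k=1}^n a_k\) and the series converges to \(s\text{,}\) then
\begin{equation*} \sum_{k=n+1}^\infty a_k=s-s_n\text{,} \end{equation*}
so the tails converge to \(0\) precisely when \(s_n\to s\text{.}\) (For the converse direction, note that the mere existence of the tail sums already forces the series to converge; the content is in making all of this precise.)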
You may also recall the Comparison Test from your study of series in Calculus: if \(0\leq a_n\leq b_n\) for all \(n\) and \(\sum b_n\) converges, then \(\sum a_n\) converges. This follows from the fact that the partial sums of \(\sum a_n\) form an increasing sequence which is bounded above by \(\sum b_n\text{.}\) (See Corollary 9.4.5 of Chapter 9.) The Cauchy Criterion allows us to extend this to the case where the terms \(a_n\) could be negative as well. This can be seen in the following theorem.
Theorem 11.2.24. The Comparison Test.

Suppose \(\abs{a_n}\leq b_n\) for all \(n\) and that \(\sum b_n\) converges. Then \(\sum a_n\) converges as well.

Problem 11.2.25.

Prove Theorem 11.2.24.
Hint.
Use the Cauchy criterion with the fact that \(\abs{\sum_{k=n+1}^ma_k}\leq\sum_{k=n+1}^m\abs{a_k}\text{.}\)
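Spelled out, the hint gives the following: with \(N\) chosen by applying the Cauchy Criterion to the convergent series \(\sum b_k\text{,}\) for \(m>n>N\) we have
\begin{equation*} \abs{\sum_{k=n+1}^m a_k}\leq\sum_{k=n+1}^m\abs{a_k}\leq\sum_{k=n+1}^m b_k\lt \eps\text{,} \end{equation*}
so \(\sum a_k\) satisfies the Cauchy Criterion as well.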
The following definition is of fundamental importance in the study of series.
Definition 11.2.26. Absolute Convergence.
Given a series \(\sum a_n\text{,}\) the series \(\sum|a_n|\) is called the absolute series of \(\sum a_{n}\) and if \(\sum|a_n|\) converges then we say that \(\sum a_{n}\) converges absolutely.
The significance of this definition comes from the following result.
Corollary 11.2.27.

Suppose \(\sum a_n\) converges absolutely. Then \(\sum a_n\) converges.

Problem 11.2.29.
If \(\displaystyle\sum_{n=0}^\infty\abs{a_n}=s\text{,}\) does it follow that \(\displaystyle s= \abs{\sum_{n=0}^\infty a_n}\text{?}\) Justify your answer. What can be said?
The converse of Corollary 11.2.27 is not true as evidenced by the series \(\displaystyle\sum_{n=0}^\infty\frac{(-1)^n}{n+1}\text{.}\) As we noted in Chapter 4, this series converges to \(\ln 2\text{.}\) However, its absolute series is the Harmonic Series which diverges. Any such series which converges, but not absolutely, is said to converge conditionally. Recall also that in Chapter 4, we showed that we could rearrange the terms of the series \(\displaystyle\sum_{n=0}^\infty\frac{(-1)^n}{n+1}\) to make it converge to any number we wished. We noted further that all rearrangements of the series \(\displaystyle\sum_{n=0}^\infty\frac{(-1)^n}{\left(n+1\right)^2}\) converged to the same value. The difference between the two series is that the latter converges absolutely whereas the former does not. Specifically, we have the following result.
Theorem 11.2.30.

Suppose \(\sum a_n\) converges absolutely and \(\sum a_n=s\text{.}\) Then any rearrangement \(\sum b_n\) of \(\sum a_n\) must also converge to \(s\text{.}\)

Sketch of Proof.
We will first show that this result is true in the case where \(a_n\geq 0\text{.}\) If \(\sum b_n\) represents a rearrangement of \(\sum a_n\text{,}\) then notice that the sequence of partial sums \(\displaystyle\left(\sum_{k=0}^nb_k\right)_{n=0}^\infty\) is an increasing sequence which is bounded by \(s\text{.}\) By Corollary 9.4.5 of Chapter 9, this sequence must converge to some number \(t\) and \(t\leq s\text{.}\) Furthermore \(\sum a_n\) is also a rearrangement of \(\sum b_n\text{.}\) Thus the result holds for this special case. (Why?) For the general case, notice that \(a_n=\frac{\abs{a_n}+a_n}{2}-\frac{\abs{a_n}-a_n}{2}\) and that \(\sum\frac{\abs{a_n}+a_n}{2}\) and \(\sum\frac{\abs{a_n}-a_n}{2}\) are both convergent series with non–negative terms. By the special case
\begin{align*} \sum\frac{\abs{b_n}+b_n}{2}= \sum\frac{\abs{a_n}+a_n}{2}\amp{}\amp{}\text{ and }\amp{}\amp{} \sum\frac{\abs{b_n}-b_n}{2}= \sum\frac{\abs{a_n}-a_n}{2}. \end{align*}
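Subtracting the second of these equations from the first (legitimate, since all four series converge) completes the general case:
\begin{equation*} \sum b_n=\sum\frac{\abs{b_n}+b_n}{2}-\sum\frac{\abs{b_n}-b_n}{2}=\sum\frac{\abs{a_n}+a_n}{2}-\sum\frac{\abs{a_n}-a_n}{2}=\sum a_n\text{.} \end{equation*}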

Section 11.3 Radius of Convergence of a Power Series

We’ve developed enough machinery to look at the convergence of power series. The fundamental result is the following theorem due to Abel.

Theorem 11.3.1.

Suppose the power series \(\sum_{n=0}^\infty a_nx^n\) converges at \(x=c\neq 0\text{.}\) Then \(\sum_{n=0}^\infty a_nx^n\) converges absolutely for all \(x\) such that \(\abs{x}\lt \abs{c}\text{.}\)

Sketch of Proof.

To prove Theorem 11.3.1 first note that by Problem 11.2.22, \(\limit{n}{\infty}{a_nc^n}=0\text{.}\) Thus \(\left(a_nc^n\right)\) is a bounded sequence. Let \(B\) be a bound: \(\abs{a_nc^n}\le B\text{.}\) Then
\begin{equation*} \abs{a_nx^n}=\abs{a_nc^n\cdot\left(\frac{x}{c}\right)^n}\leq B\abs{\frac{x}{c}}^n\text{.} \end{equation*}
We can now use the comparison test.
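Specifically, since \(\abs{x}\lt \abs{c}\) we have \(\abs{\frac{x}{c}}\lt 1\text{,}\) so the dominating series is a convergent geometric series:
\begin{equation*} \sum_{n=0}^\infty B\abs{\frac{x}{c}}^n=\frac{B}{1-\abs{\frac{x}{c}}}\text{,} \end{equation*}
and the Comparison Test yields the absolute convergence of \(\sum a_nx^n\text{.}\)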
Corollary 11.3.3.

Suppose the power series \(\sum_{n=0}^\infty a_nx^n\) diverges at \(x=c\text{.}\) Then it diverges for all \(x\) such that \(\abs{x}>\abs{c}\text{.}\)

As a result of Theorem 11.3.1 and Corollary 11.3.3, we have the following: Either \(\displaystyle\sum_{n=0}^\infty a_nx^n\) converges absolutely for all \(x\) or there exists some non–negative real number \(r\) such that \(\displaystyle\sum_{n=0}^\infty a_nx^n\) converges absolutely when \(|x|\lt r\) and diverges when \(|x|>r\text{.}\) In the latter case, we call \(r\) the radius of convergence of the power series \(\displaystyle\sum_{n=0}^{\infty}a_{n} x^{n}\text{.}\) In the former case, we say that the radius of convergence of \(\displaystyle\sum_{n=0}^\infty a_nx^n\) is \(\infty\text{.}\) Though we can say that \(\displaystyle\sum_{n=0}^\infty a_nx^n\) converges absolutely when \(|x|\lt r\text{,}\) we cannot say that the convergence is uniform. However, we can come close. We can show that the convergence is uniform for \(|x|\leq b\lt r\text{.}\) To see this we will use the following result.

Theorem 11.3.4. The Weierstrass-\(M\) Test.

Let \(\left(f_n\right)\) be a sequence of functions defined on a set \(S\) and let \(\left(M_n\right)\) be a sequence of nonnegative real numbers such that \(\abs{f_n(x)}\leq M_n\) for all \(x\in S\) and all \(n\text{.}\) If \(\sum_{n=1}^\infty M_n\) converges, then \(\sum_{n=1}^\infty f_n(x)\) converges uniformly on \(S\) to some function \(f\text{.}\)

Sketch of Proof.

Since the crucial feature of the theorem is the function \(f(x)\) that our series converges to, our plan of attack is to first define \(f(x)\) and then show that our series, \(\displaystyle\sum_{n=1}^\infty f_n(x)\text{,}\) converges to it uniformly.
First observe that for any \(x\in S\text{,}\) \(\displaystyle\sum_{n=1}^\infty f_n(x)\) converges by the Comparison Test (in fact it converges absolutely) to some number we will denote by \(f(x)\text{.}\) This actually defines the function \(f(x)\) for all \(x\in S\text{.}\) It follows that \(\sum_{n=1}^\infty f_n(x)\) converges pointwise to \(f(x)\text{.}\)
Next, let \(\eps>0\) be given. Notice that since \(\displaystyle\sum_{n=1}^\infty M_n\) converges, say to \(M\text{,}\) then there is a real number, \(N\text{,}\) such that if \(n>N\text{,}\) then
\begin{equation*} \sum_{k=n+1}^\infty M_k = \abs{\sum_{k=n+1}^\infty M_k} = \abs{M-\sum_{k=1}^n M_k}\lt \eps\text{.} \end{equation*}
You should be able to use this to show that if \(n>N\text{,}\) then
\begin{equation*} \abs{f(x) - \sum_{k=1}^n f_k(x)}\lt \eps, \, \, \forall x\in S\text{.} \end{equation*}
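One way to organize that verification is the chain of inequalities, valid for \(n>N\) and every \(x\in S\text{:}\)
\begin{equation*} \abs{f(x)-\sum_{k=1}^n f_k(x)}=\abs{\sum_{k=n+1}^\infty f_k(x)}\leq\sum_{k=n+1}^\infty\abs{f_k(x)}\leq\sum_{k=n+1}^\infty M_k\lt \eps\text{.} \end{equation*}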

Problem 11.3.7.

(a)
Referring back to equation (5.1.3), show that the Fourier series
\begin{equation*} \sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^2}\sin\left((2k+1)\pi x\right) \end{equation*}
converges uniformly on \(\RR\text{.}\)
(b)
Does its differentiated series converge uniformly on \(\RR?\) Explain.

Problem 11.3.8.

Identify which of the following series converge pointwise and which converge uniformly on the interval \([-1,1]\text{.}\) In every case identify the limit function.
(a)
\(\displaystyle \sum_{n=1}^\infty\left(x^n-x^{n-1}\right)\)
(b)
\(\displaystyle \sum_{n=1}^\infty\frac{\left(x^n-x^{n-1}\right)}{n}\)
(c)
\(\displaystyle \sum_{n=1}^\infty\frac{\left(x^n-x^{n-1}\right)}{n^2}\)
Using the Weierstrass–\(M\) test, we can prove the following result.

Theorem 11.3.9.

Suppose the power series \(\sum_{n=0}^\infty a_nx^n\) has radius of convergence \(r\) (possibly \(\infty\)) and let \(b\) be any real number with \(0\lt b\lt r\text{.}\) Then \(\sum_{n=0}^\infty a_nx^n\) converges uniformly on \([-b,b]\text{.}\)

Problem 11.3.10.

Prove Theorem 11.3.9.
Hint.
We know that \(\displaystyle\sum_{n=0}^\infty|a_nb^n|\) converges. This should be all set for the Weierstrass-\(M\) test.
To finish the story on differentiating and integrating power series, all we need to do is show that the power series, its integrated series, and its differentiated series all have the same radius of convergence. You might not realize it, but we already know that the integrated series has a radius of convergence at least as big as the radius of convergence of the original series. Specifically, suppose \(f(x)=\displaystyle\sum_{n=0}^\infty a_nx^n\) has a radius of convergence \(r\) and let \(\abs{x}\lt r\text{.}\) We know that \(\displaystyle\sum_{n=0}^\infty a_nx^n\) converges uniformly on an interval containing \(0\) and \(x\text{,}\) and so by Corollary 11.2.4, \(\int_{t=0}^xf(t)\dx{ t}=\displaystyle\sum_{n=0}^\infty\left(\frac{a_n}{n+1}x^{n+1}\right)\text{.}\) In other words, the integrated series converges for any \(x\) with \(\abs{x}\lt r\text{.}\) This says that the radius of convergence of the integrated series must be at least \(r\text{.}\)
To show that the radii of convergence are the same, all we need to show is that the radius of convergence of the differentiated series is at least as big as \(r\) as well. This would say that the original series has a radius of convergence at least as big as the integrated series, so they must have the same radius of convergence.
Putting the differentiated series into the role of the original series, the original series is now the integrated series and so these would have the same radii of convergence as well. With this in mind, we want to show that if \(|x|\lt r\text{,}\) then \(\displaystyle\sum_{n=0}^\infty a_nnx^{n-1}\) converges. The strategy is to mimic what we did in Theorem 11.3.1, where we essentially compared our series with a converging geometric series. Only this time we need to start with the differentiated geometric series.

Problem 11.3.11.

Show that \(\displaystyle\sum_{n=1}^\infty nx^{n-1}\) converges for \(|x|\lt 1\text{.}\)
Hint.
We know that \(\displaystyle\sum_{k=0}^nx^k=\frac{x^{n+1}-1}{x-1}\text{.}\) Differentiate both sides and take the limit as \(n\) approaches infinity.
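Carrying out the hint, the differentiation gives a closed form for the partial sums:
\begin{equation*} \sum_{k=1}^n kx^{k-1}=\frac{d}{dx}\left(\frac{x^{n+1}-1}{x-1}\right)=\frac{nx^{n+1}-(n+1)x^n+1}{(x-1)^2}\text{,} \end{equation*}
and for \(\abs{x}\lt 1\) both \(nx^{n+1}\) and \((n+1)x^n\) converge to \(0\text{,}\) so the partial sums converge to \(\frac{1}{(1-x)^2}\text{.}\)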

Section 11.4 Boundary Issues and Abel’s Theorem

Summarizing our results, we see that any power series \(\sum a_nx^n\) has a radius of convergence \(r\) such that \(\sum a_nx^n\) converges absolutely when \(\abs{x}\lt r\) and diverges when \(\abs{x}>r\text{.}\) Furthermore, the convergence is uniform on any closed interval \([-b,b]\subset(-r,r)\) which tells us that whatever the power series converges to must be a continuous function on \((-r,r)\text{.}\) Lastly, if \(f(x)=\displaystyle\sum_{n=0}^\infty a_nx^n\) for \(x\in(-r,r)\text{,}\) then \(f^\prime(x)=\displaystyle\sum_{n=1}^\infty a_nnx^{n-1}\) for \(x\in(-r,r)\) and \(\int_{t=0}^xf(t)\dx{ t} = \displaystyle\sum_{n=0}^\infty a_n\frac{x^{n+1}}{n+1}\) for \(x\in(-r,r)\text{.}\)
Thus power series are very well behaved within their interval of convergence, and our cavalier approach from Chapter 3 is justified, EXCEPT for one issue. If you go back to Problem 3.2.13 of Chapter 3, you see that we used the geometric series to obtain the series \(\arctan x =\displaystyle\sum_{n=0}^\infty(-1)^n\frac{1}{2n+1}x^{2n+1}\text{.}\) We substituted \(x=1\) into this to obtain \(\frac{\pi}{4}=\displaystyle\sum_{n=0}^\infty(-1)^n\frac{1}{2n+1}\text{.}\) Unfortunately, our integration was only guaranteed on closed subintervals of the interval \((-1,1)\text{,}\) where the convergence was uniform, yet we substituted in \(x=1\text{.}\) We “danced on the boundary” in other places as well, including when we said that
\begin{equation*} \frac{\pi}{4}=\int_{x=0}^1\sqrt{1-x^2}\dx{x}=1+\displaystyle\sum_{n=1}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}\right)\left(\frac{\left(-1\right)^n}{2n+1}\right)\text{.} \end{equation*}
The fact is that for a power series \(\sum a_nx^n\) with radius of convergence \(r\text{,}\) we know what happens when \(\abs{x}\lt r\) and when \(\abs{x}>r\text{.}\) But we’ve never talked about what happens when \(\abs{x}=r\text{.}\) That is because there is no systematic approach to this boundary problem. For example, consider the three series
\begin{equation*} \sum_{n=0}^\infty x^n,\sum_{n=0}^\infty\frac{x^{n+1}}{n+1}, \sum_{n=0}^\infty\frac{x^{n+2}}{(n+1)(n+2)}\text{.} \end{equation*}
They are all related in that we started with the geometric series and integrated twice, thus they all have radius of convergence equal to 1. Their behavior on the boundary, i.e., when \(x=\pm 1\text{,}\) is another story. The first series diverges when \(x=\pm 1\text{,}\) the third series converges when \(x=\pm 1\text{.}\) The second series converges when \(x=-1\) and diverges when \(x=1\text{.}\)
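Substituting the endpoints into the second series makes these claims concrete:
\begin{equation*} \sum_{n=0}^\infty\frac{1}{n+1}\ \text{ at } x=1 \text{ (the Harmonic Series, which diverges)},\qquad \sum_{n=0}^\infty\frac{(-1)^{n+1}}{n+1}\ \text{ at } x=-1 \text{ (converges)}\text{.} \end{equation*}
The third series converges absolutely at \(x=\pm 1\) by comparison with \(\sum\frac{1}{(n+1)(n+2)}\text{,}\) while the terms of the first do not even converge to \(0\) there.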
Even with the unpredictability of a power series at the endpoints of its interval of convergence, the Weierstrass-\(M\) test does give us some hope of uniform convergence.

Problem 11.4.1.

Suppose the power series \(\sum a_nx^n\) has radius of convergence \(r\) and the series \(\sum a_nr^n\) converges absolutely. Show that \(\sum a_nx^n\) converges uniformly on \([-r,r]\text{.}\)
Hint.
For \(\abs{x}\leq r\text{,}\) \(|a_nx^n|\leq |a_nr^n|\text{.}\)
Unfortunately, this result doesn’t apply to the integrals we mentioned as the convergence at the endpoints is not absolute. Nonetheless, the integrations we performed in Chapter 3 are still legitimate. This is due to the following theorem by Abel which extends uniform convergence to the endpoints of the interval of convergence even if the convergence at an endpoint is only conditional. Abel did not use the term uniform convergence, as it hadn’t been defined yet, but the ideas involved are his.
Theorem 11.4.2. Abel’s Theorem.

Suppose the power series \(\sum_{n=0}^\infty a_nx^n\) has radius of convergence \(r>0\) and that \(\sum_{n=0}^\infty a_nr^n\) converges. Then \(\sum_{n=0}^\infty a_nx^n\) converges uniformly on \([0,r]\text{.}\)

The proof of this is not intuitive, but involves a clever technique known as Abel’s Partial Summation Formula.

Problem 11.4.7.

Prove Theorem 11.4.2 (Abel’s Theorem).
Hint.
Let \(\eps\gt0\text{.}\) Since \(\displaystyle{}\sum_{n=0}^\infty a_nr^n\) converges then by the Cauchy Criterion, there exists \(N\) such that if \(m>n>N\) then \(\abs{\displaystyle \sum_{k=n+1}^ma_kr^k}\lt \frac{\eps }{2}\text{.}\) Let \(0\leq x\leq r\text{.}\) By Abel’s Lemma,
\begin{equation*} \abs{\sum_{k=n+1}^ma_kx^k}=\abs{\sum_{k=n+1}^ma_kr^k\left(\frac{x}{r}\right)^k}\leq \left(\frac{\eps }{2}\right)\left(\frac{x}{r}\right)^{n+1}\leq\frac{\eps}{2}\text{.} \end{equation*}
Thus for \(0\leq x\leq r\text{,}\) \(n>N\text{,}\)
\begin{equation*} \abs{\sum_{k=n+1}^\infty a_kx^k}=\limit{m}{\infty}{\abs{\sum_{k=n+1}^ma_kx^k}}\leq\frac{\eps }{2}\lt \eps\text{.} \end{equation*}