Section 4.2 Series Anomalies
Up to this point, we have been somewhat cavalier in our approach to series. This approach mirrors that of the eighteenth-century mathematicians who ingeniously exploited calculus and series to provide mathematical and physical results which were virtually unobtainable before. Mathematicians were eager to push these techniques as far as they could to obtain their results, and they often showed good intuition regarding what was mathematically acceptable and what was not. However, as the envelope was pushed, questions about the validity of the methods surfaced.
As an illustration consider the series expansion
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots.$$
If we substitute $x = 1$ into this equation, we obtain
$$\frac{1}{2} = 1 - 1 + 1 - 1 + \cdots.$$
If we group the terms as follows,
$$(1 - 1) + (1 - 1) + (1 - 1) + \cdots,$$
the series would equal $0$. A regrouping of
$$1 + (-1 + 1) + (-1 + 1) + \cdots$$
provides an answer of $1$.
This violation of the associative law of addition did not escape the mathematicians of the 1700s. In his 1760 paper On Divergent Series, Euler said:

Notable enough, however, are the controversies over the series $1 - 1 + 1 - 1 + \cdots$, whose sum was given by Leibniz as $\frac{1}{2}$, although others disagree . . . Understanding of this question is to be sought in the word “sum;” this idea, if thus conceived (namely, the sum of a series is said to be that quantity to which it is brought closer as more terms of the series are taken) has relevance only for convergent series, and we should in general give up this idea of sum for divergent series. On the other hand, as series in analysis arise from the expansion of fractions or irrational quantities or even of transcendentals, it will, in turn, be permissible in calculation to substitute in place of such series that quantity out of whose development it is produced.
Even with this formal approach to series, an interesting question arises. The series for the antiderivative of $\frac{1}{1+x}$ does converge for $x = 1$, while the series for the function itself does not. Specifically, taking the antiderivative of the above series, we obtain
$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots.$$
If we substitute $x = 1$ into this series, we obtain
$$\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots.$$
It is not hard to see that such an alternating series converges. The following picture shows why. In this diagram, $S_n$ denotes the partial sum
$$S_n = \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k}.$$
From the diagram we can see
$$S_2 \le S_4 \le S_6 \le \cdots \le S_5 \le S_3 \le S_1$$
and
$$S_{2n+1} - S_{2n} = \frac{1}{2n+1} \to 0.$$
It seems that the sequence of partial sums will converge to whatever is in the “middle.” Our diagram indicates that it is $\ln 2$ in the middle, but actually this is not obvious. Nonetheless it is interesting that one series converges for $x = 1$ but the other does not.
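The bracketing behavior described above is easy to check numerically. The following sketch (our own code, not part of the text) writes $S_n$ for the $n$th partial sum of the alternating harmonic series and confirms that the even-indexed partial sums increase toward $\ln 2$ from below while the odd-indexed ones decrease toward it from above.

```python
import math

def partial_sum(n):
    """S_n = sum_{k=1}^n (-1)^(k+1)/k, a partial sum of the
    alternating harmonic series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# Even-indexed partial sums increase; odd-indexed ones decrease.
evens = [partial_sum(n) for n in (2, 4, 6, 8)]
odds = [partial_sum(n) for n in (1, 3, 5, 7)]

print(evens)          # increasing, all below ln 2
print(odds)           # decreasing, all above ln 2
print(math.log(2))    # squeezed in the middle
```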
Problem 4.2.1.
Use the fact that $S_{2n} \le \ln 2 \le S_{2n+1}$ to determine how many terms of the series $\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}$ should be added together to approximate $\ln 2$ to within $0.0001$ without actually computing what $\ln 2$ is.
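The problem asks for an argument rather than a computation, but the key identity behind it is worth a numerical sanity check: writing $S_n$ for the $n$th partial sum of the alternating harmonic series, the bracket $[S_{2n}, S_{2n+1}]$ containing $\ln 2$ has width exactly $\frac{1}{2n+1}$, so either endpoint approximates $\ln 2$ to within that width. A quick sketch (our own code):

```python
def partial_sum(n):
    """S_n = sum_{k=1}^n (-1)^(k+1)/k."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The bracket [S_2n, S_2n+1] containing ln 2 has width exactly 1/(2n+1),
# since S_(2n+1) - S_2n is the single term +1/(2n+1).
for n in (1, 5, 50):
    width = partial_sum(2 * n + 1) - partial_sum(2 * n)
    assert abs(width - 1 / (2 * n + 1)) < 1e-12
```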
There is an even more perplexing situation brought about by these examples. An infinite sum such as
$$1 - 1 + 1 - 1 + \cdots$$
appears not to satisfy the associative law for addition. While a convergent series such as
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$$
does satisfy the associative law, it does not satisfy the commutative law. In fact, it fails to satisfy it rather spectacularly.
A generalization of the following result was stated and proved by Bernhard Riemann in 1854.

Theorem 4.2.2.
Let $a$ be any real number. There exists a rearrangement of the series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$ which converges to $a$.
This theorem shows that a series is most decidedly
not a great big sum. It follows that a power series is
not a great big polynomial.
To set the stage, consider the harmonic series
$$\sum_{k=1}^{\infty} \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots.$$
Even though the individual terms in this series converge to $0$, the series still diverges (to infinity), as evidenced by the inequality
$$1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \cdots > 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots = \infty.$$
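The doubling-block argument behind this divergence can be checked directly: grouping the terms after the first into blocks of length $1, 2, 4, 8, \ldots$, each block contributes at least $\frac{1}{2}$, so the first $2^m$ terms already exceed $1 + \frac{m}{2}$. A small sketch (our own code):

```python
def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1 / k for k in range(1, n + 1))

# Each doubling block (1/(2^(j-1)+1) + ... + 1/2^j) contains 2^(j-1)
# terms, each at least 1/2^j, so it contributes at least 1/2.
for m in range(1, 12):
    assert harmonic(2 ** m) >= 1 + m / 2
```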
Armed with this fact, we can see why Theorem 4.2.2 is true. First note that
$$1 + \frac{1}{3} + \frac{1}{5} + \cdots = \infty \quad\text{and}\quad \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots = \infty.$$
This says that if we add enough terms of $-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots$ we can make such a sum as small as we wish, and if we add enough terms of $1 + \frac{1}{3} + \frac{1}{5} + \cdots$ we can make such a sum as large as we wish. This provides us with the general outline of the proof. The trick is to add just enough positive terms until the sum is just greater than $a$. Then we start to add negative terms until the sum is just less than $a$. Picking up where we left off with the positive terms, we add just enough positive terms until we are just above $a$ again. We then add negative terms until we are below $a$. In essence, we are bouncing back and forth around $a$. If we do this carefully, then we can get this rearrangement to converge to $a$. The notation in the proof below gets a bit hairy, but keep this general idea in mind as you read through it.
Let $O_1$ be the first odd integer such that
$$1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} > a.$$
Now choose $E_1$ to be the first even integer such that
$$1 + \frac{1}{3} + \cdots + \frac{1}{O_1} - \left(\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{E_1}\right) < a.$$
Notice that we still have
$$\frac{1}{O_1 + 2} + \frac{1}{O_1 + 4} + \cdots = \infty.$$
With this in mind, choose $O_2$ to be the first odd integer with
$$1 + \frac{1}{3} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \frac{1}{4} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \cdots + \frac{1}{O_2} > a.$$
In a similar fashion choose $E_2$ to be the first even integer such that
$$1 + \frac{1}{3} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \cdots - \frac{1}{E_2} < a.$$
Again choose $O_3$ to be the first odd integer such that the running sum exceeds $a$, and continue defining $O_j$ and $E_j$ in this fashion. Since each overshoot past $a$ is by no more than the last term added, and
$$\lim_{j \to \infty} \frac{1}{O_j} = \lim_{j \to \infty} \frac{1}{E_j} = 0,$$
it is evident that the extreme partial sums, those ending with $\frac{1}{O_j}$ (just above $a$) or with $-\frac{1}{E_j}$ (just below $a$), must converge to $a$. Furthermore, it is evident that every partial sum of the rearrangement
$$1 + \frac{1}{3} + \cdots + \frac{1}{O_1} - \frac{1}{2} - \cdots - \frac{1}{E_1} + \frac{1}{O_1 + 2} + \cdots + \frac{1}{O_2} - \frac{1}{E_1 + 2} - \cdots$$
is trapped between two such extreme partial sums. This forces the entire rearranged series to converge to $a$.
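The construction in the proof is easy to simulate. The sketch below (our own code, not part of the text) greedily adds unused positive terms while the running sum is at most the target and unused negative terms while it exceeds the target, mirroring the odd and even cutoffs chosen in the proof.

```python
def rearranged_partial_sums(a, num_terms):
    """Greedy rearrangement of 1 - 1/2 + 1/3 - ... from the proof:
    add positive terms 1, 1/3, 1/5, ... while the running sum is <= a,
    and negative terms -1/2, -1/4, -1/6, ... while it is > a."""
    total = 0.0
    next_odd, next_even = 1, 2   # next unused positive/negative denominator
    sums = []
    for _ in range(num_terms):
        if total <= a:
            total += 1.0 / next_odd
            next_odd += 2
        else:
            total -= 1.0 / next_even
            next_even += 2
        sums.append(total)
    return sums

# The rearranged partial sums oscillate around the target and close in on it.
sums = rearranged_partial_sums(2.0, 200_000)
print(sums[-1])   # close to the target 2
```

Note that for a target above $\ln 2$ the simulation consumes positive terms much faster than negative ones, which is exactly why the tails of both the positive and negative parts must diverge for the construction to work.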
The next two problems are similar to the above, but notationally they are easier since we don’t need to worry about converging to an actual number. We only need to make the rearrangement grow (or shrink, in the case of Problem 4.2.4) without bound.
Problem 4.2.3.
Show that there is a rearrangement of $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$ which diverges to $\infty$.

Problem 4.2.4.
Show that there is a rearrangement of $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$ which diverges to $-\infty$.
It is fun to know that we can rearrange some series to make them add up to anything we like, but there is a more fundamental idea at play here. That the negative terms of the alternating Harmonic Series, $-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots$, diverge to negative infinity and the positive terms, $1 + \frac{1}{3} + \frac{1}{5} + \cdots$, diverge to positive infinity makes the convergence of the alternating series very special.

Consider: first we add $1$. This is one of the positive terms, so our sum is starting to increase without bound. Next we add $-\frac{1}{2}$, which is one of the negative terms, so our sum has turned around and is now starting to decrease without bound. Then another positive term is added: increasing without bound. Then another negative term: decreasing. And so on. The convergence of the alternating Harmonic Series is the result of a delicate balance between a tendency to run off to positive infinity and back to negative infinity. When viewed in this light, it is not really too surprising that rearranging the terms can destroy this delicate balance.
Naturally, the alternating Harmonic Series is not the only such series. Any such series is said to converge “conditionally” — the condition being the specific arrangement of the terms.
To stir the pot a bit more, some series do satisfy the commutative property. More specifically, one can show that any rearrangement of the series
$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^2} = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \cdots$$
must converge to the same value as the original series (which happens to be $\frac{\pi^2}{12}$). Why does one series behave so nicely whereas the other does not?
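The contrast can be seen numerically by applying the same rearrangement pattern, one positive term followed by two negative terms, to both series. For the alternating Harmonic Series this particular rearrangement classically converges to $\frac{1}{2}\ln 2$ rather than $\ln 2$; for $\sum (-1)^{k+1}/k^2$ it still converges to $\frac{\pi^2}{12}$. A sketch (our own code):

```python
import math

def rearranged_sum(term, num_blocks):
    """Sum the series term(1) - term(2) + term(3) - ... rearranged as
    'one positive term, then two negative terms':
    term(1) - term(2) - term(4) + term(3) - term(6) - term(8) + ..."""
    total = 0.0
    for j in range(num_blocks):
        total += term(2 * j + 1)                    # next positive term
        total -= term(4 * j + 2) + term(4 * j + 4)  # next two negative terms
    return total

# Conditionally convergent: the rearrangement changes the sum.
s1 = rearranged_sum(lambda k: 1 / k, 200_000)       # near (1/2) ln 2, not ln 2
# Absolutely convergent: the rearrangement does not change the sum.
s2 = rearranged_sum(lambda k: 1 / k ** 2, 200_000)  # near pi^2 / 12
```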
Issues such as these and, more generally, the validity of using the infinitely small and infinitely large certainly existed in the 1700s, but they were overshadowed by the utility of the calculus. Indeed, the foundational questions raised by the above examples, while certainly interesting and of importance, did not significantly deter the exploitation of calculus in studying physical phenomena. However, the envelope eventually was pushed to the point that not even the most practically oriented mathematician could avoid the foundational issues.