
Section 3.2 Power Series as Infinite Polynomials

Applied to polynomials, the rules of differential and integral calculus are straightforward. Indeed, differentiating and integrating polynomials represent some of the easiest tasks in a calculus course. For example, computing \(\int(7-x+x^2)\dx{ x}\) is relatively easy compared to computing \(\int\sqrt[3]{1+x^3}\dx{ x}\text{.}\) Unfortunately, not all functions can be expressed as a polynomial. For example, \(f(x)=\sin x\) cannot be, since a polynomial has only finitely many roots while the sine function has infinitely many, namely \(\{n\pi\,|\,n\in\ZZ\}\text{.}\) A standard technique in the 18th century was to write such functions as an “infinite polynomial,” what we typically refer to as a power series. Unfortunately, an “infinite polynomial” is a much more subtle object than a mere polynomial, which by definition is finite. For now we will not concern ourselves with these subtleties. We will follow the example of our forebears and manipulate all “polynomial-like” objects (finite or infinite) as if they were polynomials.

Definition 3.2.1.

A power series centered at \(\boldsymbol{a}\) is a series of the form
\begin{equation*} \sum_{n=0}^\infty a_n(x-a)^n=a_0+a_1(x-a)+a_2(x-a)^2+\cdots\text{.} \end{equation*}
Often we will focus on the behavior of power series \(\sum_{n=0}^\infty a_nx^n\text{,}\) centered at \(0\text{,}\) since a series centered at any other value \(a\) is obtained by shifting a series centered at \(0\text{.}\)
Before we continue, we will make the following notational comment. The most advantageous way to represent a series is using summation notation since there can be no doubt about the pattern to the terms. After all, this notation contains a formula for the general term. This being said, there are instances where writing this formula is not practical. In these cases, it is acceptable to write the sum by supplying the first few terms and using ellipses (the three dots). If this is done, then enough terms must be included to make the pattern clear to the reader.
Returning to our definition of a power series, consider, for example, the geometric series \(\sum_{n=0}^\infty x^n=1+x+x^2+\cdots\text{.}\) If we multiply this series by \((1-x)\text{,}\) we obtain
\begin{equation*} (1-x)(1+x+x^2+\cdots)=(1+x+x^2+\cdots)-(x+x^2+x^3+\cdots)=1\text{.} \end{equation*}
This leads us to the power series representation
\begin{equation*} \frac{1}{1-x}=1+x+x^2+\cdots=\sum_{n=0}^\infty x^n\text{.} \end{equation*}
If we substitute \(x=\frac{1}{10}\) into the above, we obtain
\begin{equation*} 1+\frac{1}{10}+\left(\frac{1}{10}\right)^2+\left(\frac{1}{10}\right)^3+ \cdots=\frac{1}{1-\frac{1}{10}}=\frac{10}{9}\text{.} \end{equation*}
This agrees with the fact that \(.333\ldots=\frac{1}{3}\text{,}\) and so \(.111\ldots=\frac{1}{9}\text{,}\) and \(1.111\ldots=\frac{10}{9}\text{.}\)
There are limitations to these formal manipulations however. Substituting \(x=1\) or \(x=2\) yields the questionable results
\begin{equation*} \frac{1}{0}=1+1+1+\cdots\,\text{ and } \,\frac{1}{-1}=1+2+2^2+\cdots\text{.} \end{equation*}
We are missing something important here, though it may not be clear exactly what. A series representation of a function works sometimes, but as we have just seen, it can also produce nonsense. For now, we will continue to follow the example of our 18th century predecessors and ignore these problems. That is, for the rest of this section we will focus on the formal manipulations used to obtain and apply power series representations of various functions. Keep in mind that this is all highly suspect until we can resolve problems like those just given.
Power series became an important tool in analysis in the 1700’s. By representing various functions as power series, mathematicians could deal with them as if they were (infinite) polynomials. The following is an example.

Example 3.2.2.

Solve the following initial value problem: Find \(y(x)\) given that \(\frac{\dx{ y}}{\dx{ x}}=y,\,y(0)=1\text{.}\)
Assuming the solution can be expressed as a power series we have
\begin{equation*} y=\sum_{n=0}^\infty a_nx^n=a_0+a_1x+a_2x^2+\cdots\text{.} \end{equation*}
Differentiating gives us
\begin{equation*} \frac{\dx{ y}}{\dx{ x}}=a_1+2a_2x+3a_3x^2+4a_4x^3+\ldots\text{.} \end{equation*}
Since \(\frac{\dx{ y}}{\dx{ x}}=y\) we see that
\begin{equation*} a_1=a_0\,,\,2a_2=a_1\,,\,3a_3=a_2\,,\,\ldots,\,na_n=a_{n-1}\,,\ldots\text{.} \end{equation*}
This leads to the relationship
\begin{equation*} a_n=\frac{1}{n}a_{n-1}=\frac{1}{n(n-1)}a_{n-2}=\cdots=\frac{1}{n!}a_0\text{.} \end{equation*}
Thus the series solution of the differential equation is
\begin{equation*} y=\sum_{n=0}^\infty\frac{a_0}{n!}x^n=a_0\sum_{n=0}^\infty\frac{1}{n!}x^n\text{.} \end{equation*}
Using the initial condition \(y(0)=1\text{,}\) we get \(1=a_0(1+0+\frac{1}{2!}0^2+\cdots)=a_0\text{.}\) Thus the solution to the initial value problem is \(y=\sum_{n=0}^\infty\frac{1}{n!}x^n\text{.}\) Let’s call this function \(E(x)\text{.}\) Then by definition
\begin{equation*} E(x)=\sum_{n=0}^\infty\frac{1}{n!}x^n=1+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\,\ldots\text{.} \end{equation*}
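Though questions of convergence are being deferred, nothing stops us from experimenting with the partial sums of this series. Here is a minimal Python sketch (the helper name E_partial is ours) suggesting that the partial sums settle down quickly, at least for moderate values of \(x\text{:}\)

```python
import math

def E_partial(x, N):
    """Partial sum sum_{n=0}^{N} x^n / n! of the series defining E(x)."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

# The partial sums appear to stabilize rapidly:
for N in [5, 10, 15, 20]:
    print(N, E_partial(2.5, N))
```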
Let’s examine some properties of this function. The first property is clear from the definition.
Property 1. \(E(0)=1\)
Property 2. \(E(x+y)=E(x)E(y)\text{.}\)
To see this we multiply the two series together, so we have
\begin{align*} E(x)E(y) \amp =\left(\sum_{n=0}^\infty\frac{1}{n!}x^n\right)\left(\sum_{n=0}^\infty\frac{1}{n!}y^n\right)\\ \amp =\left(\frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\,\ldots\right)\left(\frac{y^0}{0!}+\frac{y^1}{1!}+\frac{y^2}{2!}+\frac{y^3}{3!}+\,\ldots\right)\\ \amp =\frac{x^0}{0!}\frac{y^0}{0!}+\frac{x^0}{0!}\frac{y^1}{1!}+\frac{x^1}{1!}\frac{y^0}{0!}+\frac{x^0}{0!}\frac{y^2}{2!}+\frac{x^1}{1!}\frac{y^1}{1!}+\frac{x^2}{2!}\frac{y^0}{0!}\\ \amp \ \ \ \ \ \ +\frac{x^0}{0!}\frac{y^3}{3!}+\frac{x^1}{1!}\frac{y^2}{2!}+\frac{x^2}{2!}\frac{y^1}{1!}+\frac{x^3}{3!}\frac{y^0}{0!}+\,\ldots\\ \amp =\frac{x^0}{0!}\frac{y^0}{0!}+\left(\frac{x^0}{0!}\frac{y^1}{1!}+ \frac{x^1}{1!}\frac{y^0}{0!}\right)\\ \amp \ \ \ \ \ \ +\left(\frac{x^0}{0!}\frac{y^2}{2!}+\frac{x^1}{1!}\frac{y^1}{1!}+\frac{x^2}{2!}\frac{y^0}{0!}\right)\\ \amp \ \ \ \ \ \ +\left(\frac{x^0}{0!}\frac{y^3}{3!}+\frac{x^1}{1!}\frac{y^2}{2!}+\frac{x^2}{2!}\frac{y^1}{1!}+\frac{x^3}{3!}\frac{y^0}{0!}\right)+\,\ldots\\ \amp =\frac{1}{0!}+\frac{1}{1!}\left(\frac{1!}{0!1!}x^0y^1+\frac{1!}{1!0!}x^1y^0\right)\\ \amp \ \ \ \ \ \ +\frac{1}{2!}\left(\frac{2!}{0!2!}x^0y^2+\frac{2!}{1!1!}x^1y^1+\frac{2!}{2!0!}x^2y^0\right)\\ \amp \ \ \ \ \ \ +\frac{1}{3!}\left(\frac{3!}{0!3!}x^0y^3+\frac{3!}{1!2!}x^1y^2+\frac{3!}{2!1!}x^2y^1+\frac{3!}{3!0!}x^3y^0\right)+\ldots \end{align*}
\begin{align} E(x)E(y) \amp =\frac{1}{0!}+\frac{1}{1!}\left(\binom{1}{0}x^0y^1+\binom{1}{1}x^1y^0\right)\notag\\ \amp \ \ \ \ \ \ +\frac{1}{2!}\left(\binom{2}{0}x^0y^2+\binom{2}{1}x^1y^1+\binom{2}{2}x^2y^0\right)\notag\\ \amp \ \ \ \ \ \ +\frac{1}{3!}\left(\binom{3}{0}x^0y^3+\binom{3}{1}x^1y^2+\binom{3}{2}x^2y^1+\binom{3}{3}x^3y^0\right)+\ldots\notag\\ \amp =\frac{1}{0!}+\frac{1}{1!}\left(x+y\right)^1+\frac{1}{2!}\left(x+y\right)^2+\frac{1}{3!}\left(x+y\right)^3+\ldots\notag\\ \amp =E(x+y)\text{.}\tag{14} \end{align}
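The regrouping above is the Cauchy product of the two series, organized by total degree, together with the binomial theorem. For those who want to check the bookkeeping by machine, here is a small sketch using Python’s sympy library (the truncation order N is our choice; any positive value works the same way):

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 6  # truncation order

def E_trunc(t):
    """Truncated series sum_{n=0}^{N-1} t^n / n!."""
    return sum(t**n / sp.factorial(n) for n in range(N))

diff = sp.expand(E_trunc(x) * E_trunc(y) - E_trunc(x + y))

# Every monomial of total degree below N cancels; whatever remains
# comes entirely from the truncation (total degree >= N).
low_degree = [m for m in sp.Poly(diff, x, y).monoms() if sum(m) < N]
print(low_degree)  # -> []
```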
Property 3. If \(m\) is a positive integer then \(E(mx)=\left(E(x)\right)^m\text{.}\) In particular, \(E(m)=\left(E(1)\right)^m\text{.}\)

Problem 3.2.3.

Prove Property 3.
Property 4. \(E(-x)=\frac{1}{E(x)}=\left(E(x)\right)^{-1}\text{.}\)

Problem 3.2.4.

Prove Property 4.
Property 5. If \(n\) is an integer with \(n\neq 0\text{,}\) then \(E(\frac{1}{n})=\sqrt[n]{E(1)}=\left(E(1)\right)^{1/n}\text{.}\)

Problem 3.2.5.

Prove Property 5.
Property 6. If \(m\) and \(n\) are integers with \(n\neq 0\text{,}\) then \(E\left(\frac{m}{n}\right)=\left(E(1)\right)^{m/n}\text{.}\)

Problem 3.2.6.

Prove Property 6.

Definition 3.2.7.

Let \(e\) denote the number \(E(1)\text{.}\) Using the series \(e=E(1)=\sum_{n=0}^\infty\frac{1}{n!}\text{,}\) we can approximate \(e\) to any degree of accuracy. In particular, \(e\approx 2.71828\text{.}\)
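Since the factorials grow so quickly, only a handful of terms are needed for a good approximation. A quick Python computation in exact rational arithmetic (the cutoff of thirteen terms is our arbitrary choice):

```python
import math
from fractions import Fraction

# Partial sum of e = sum 1/n!, kept as an exact fraction.
s = Fraction(0)
for n in range(13):
    s += Fraction(1, math.factorial(n))

print(float(s))  # already agrees with e = 2.71828... to many decimal places
```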
In light of Property 6, we see that for any rational number \(r\text{,}\) \(E(r)=e^r\text{.}\) Not only does this give us the series representation \(e^r=\sum_{n=0}^\infty\frac{1}{n!}r^n\) for any rational number \(r\text{,}\) but it gives us a way to define \(e^x\) for irrational values of \(x\) as well. That is, we can define
\begin{equation*} e^x=E(x)=\sum_{n=0}^\infty\frac{1}{n!}x^n \end{equation*}
for any real number \(x\text{.}\)
As an illustration, we now have \(e^{\sqrt{2}}=\sum_{n=0}^\infty\frac{1}{n!}\left(\sqrt{2}\right)^n\text{.}\) The expression \(e^{\sqrt{2}}\) is meaningless if we try to interpret it as one irrational number raised to another. What does it mean to raise anything to the \(\sqrt{2}\) power? However, the series \(\sum_{n=0}^\infty\frac{1}{n!}\left(\sqrt{2}\right)^n\) does seem to have meaning, and it can be used to extend the exponential function to irrational exponents. In fact, defining the exponential function via this series answers the question we raised in Chapter 2: What does \(4^{\sqrt{2}}\) mean?
It means \(\displaystyle 4^{\sqrt{2}} = e^{\sqrt{2}\log 4} = \sum_{n=0}^\infty\frac{(\sqrt{2}\log 4)^n}{n!}\text{.}\)
This may seem to be the long way around just to define something as simple as exponentiation. But this is a fundamentally misguided attitude. Exponentiation only seems simple because we’ve always thought of it as repeated multiplication (in \(\ZZ\)) or root-taking (in \(\QQ\)). When we expand the operation to the real numbers this simply can’t be the way we interpret something like \(4^{\sqrt{2}}\text{.}\) How do you take the product of \(\sqrt{2}\) copies of \(4?\) The concept is meaningless. What we need is an interpretation of \(4^{\sqrt{2}}\) which is consistent with, say, \(4^{3/2} = \left(\sqrt{4}\right)^3=8\text{.}\) This is exactly what the series representation of \(e^x\) provides.
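If you would like to see this definition in action, the following Python sketch (the helper exp_series is ours, and log denotes the natural logarithm) computes \(4^{\sqrt{2}}\) from the series and compares it with the built-in power:

```python
import math

def exp_series(x, N=60):
    """Partial sum of the series sum_{n=0}^{N} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

x = math.sqrt(2) * math.log(4)   # sqrt(2) * log 4
print(exp_series(x))             # 4^sqrt(2) computed from the series
print(4 ** math.sqrt(2))         # the built-in power, for comparison
```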
We also have a means of computing integrals as series. For example, the famous “bell shaped” curve given by the function \(f(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\) is of vital importance in statistics and must be integrated to calculate probabilities. The power series we developed gives us a method of integrating this function. For example, we have
\begin{align*} \int_{x=0}^b\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\dx{ x} \amp =\frac{1}{\sqrt{2\pi}}\int_{x=0}^b\left(\sum_{n=0}^\infty\frac{1}{n!}\left(\frac{-x^2}{2}\right)^n\right)\dx{ x}\\ \amp =\frac{1}{\sqrt{2\pi}}\,\sum_{n=0}^\infty\left(\frac{\left(-1\right)^n}{n!2^n}\int_{x=0}^bx^{2n}\dx{ x}\right)\\ \amp =\frac{1}{\sqrt{2\pi}}\,\sum_{n=0}^\infty\left(\frac{\left(-1\right)^nb^{2n+1}}{n!2^n\left(2n+1\right)}\right)\text{.} \end{align*}
This series can be used to approximate the integral to any degree of accuracy. The ability to provide such calculations made power series of paramount importance in the 1700’s.
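As a sanity check on this formula, here is a short Python sketch comparing the series (the helper normal_area and the cutoff N=40 are ours) with the same area computed from Python’s built-in error function:

```python
import math

def normal_area(b, N=40):
    """Series from above for the integral of e^(-x^2/2)/sqrt(2 pi) from 0 to b."""
    s = sum((-1)**n * b**(2*n + 1) / (math.factorial(n) * 2**n * (2*n + 1))
            for n in range(N + 1))
    return s / math.sqrt(2 * math.pi)

print(normal_area(1.0))                  # area from the series
print(0.5 * math.erf(1 / math.sqrt(2)))  # same area via the error function
```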

Problem 3.2.8.

(a)

Show that if \(y=\sum_{n=0}^\infty a_nx^n\) satisfies the differential equation \(\frac{\dx^2y}{\dx{ x}^2}=-y\text{,}\) then
\begin{equation*} a_{n+2}=\frac{-1}{\left(n+2\right)\left(n+1\right)}\,a_n \end{equation*}
and conclude that
\begin{equation*} y=a_0+a_1x-\frac{1}{2!}\,a_0x^2-\frac{1}{3!}\,a_1x^3+\frac{1}{4!}\,a_0x^4+\frac{1}{5!}\,a_1x^5-\frac{1}{6!}\,a_0x^6-\frac{1}{7!}\,a_1x^7+\cdots\text{.} \end{equation*}

(b)

Since \(y=\sin x\) satisfies \(\frac{\dx^2y}{\dx{ x}^2}=-y\text{,}\) we see that
\begin{equation*} \sin x=a_0+a_1x-\frac{1}{2!}\,a_0x^2-\frac{1}{3!}\,a_1x^3+\frac{1}{4!}\,a_0x^4+\frac{1}{5!}\,a_1x^5-\frac{1}{6!}\,a_0x^6-\frac{1}{7!}\,a_1x^7+\cdots \end{equation*}
for some constants \(a_0\) and \(a_1\text{.}\) Show that in this case \(a_0=0\) and \(a_1=1\) and obtain
\begin{equation*} \sin x=x-\frac{1}{3!}\,x^3+\frac{1}{5!}x^5-\frac{1}{7!}x^7+\cdots=\sum_{n=0}^\infty\frac{\left(-1\right)^n}{\left(2n+1\right)!}x^{2n+1}\text{.} \end{equation*}

Problem 3.2.9.

(a)

Use the series
\begin{equation*} \sin x=x-\frac{1}{3!}\,x^3+\frac{1}{5!}x^5-\frac{1}{7!}x^7+\cdots=\sum_{n=0}^\infty\frac{\left(-1\right)^n}{\left(2n+1\right)!}x^{2n+1} \end{equation*}
to obtain the series
\begin{equation*} \cos x=1-\frac{1}{2!}\,x^2+\frac{1}{4!}x^4-\frac{1}{6!}x^6+\cdots=\sum_{n=0}^\infty\frac{\left(-1\right)^n}{\left(2n\right)!}x^{2n}\text{.} \end{equation*}

(b)

Let \(s(x,N)=\sum_{n=0}^N\frac{\left(-1\right)^n}{\left(2n+1\right)!}x^{2n+1}\) and \(c(x,N)=\sum_{n=0}^N\frac{\left(-1\right)^n}{\left(2n\right)!}x^{2n}\) and use a computer algebra system to plot these for \(-4\pi\leq x\leq 4\pi\) and \(N=1,\,2,\,5,\,10,\,15\text{.}\) Describe what is happening to the series as \(N\) becomes larger.
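One way to produce such plots, using Python with the numpy and matplotlib libraries in place of a computer algebra system (shown for \(s(x,N)\text{;}\) the plots of \(c(x,N)\) are entirely analogous):

```python
import math
import numpy as np
import matplotlib.pyplot as plt

def s(x, N):
    """Partial sum s(x, N) of the sine series."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(N + 1))

xs = np.linspace(-4 * math.pi, 4 * math.pi, 500)
for N in [1, 2, 5, 10, 15]:
    plt.plot(xs, [s(x, N) for x in xs], label=f"N = {N}")
plt.plot(xs, np.sin(xs), "k--", label="sin x")
plt.ylim(-3, 3)   # the low-order partial sums blow up off-screen
plt.legend()
plt.show()
```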

Problem 3.2.10.

Use the geometric series, \(\frac{1}{1-x}=1+x+x^2+x^3+\cdots=\sum_{n=0}^\infty x^n\text{,}\) to obtain a series for \(\frac{1}{1+x^2}\) and use this to obtain the series
\begin{equation*} \arctan x=x-\frac{1}{3}x^3+\frac{1}{5}x^5-\cdots=\sum_{n=0}^\infty(-1)^n \frac{1}{2n+1}x^{2n+1}\text{.} \end{equation*}
Use the series above to obtain the series \(\frac{\pi}{4}=\sum_{n=0}^\infty(-1)^n\frac{1}{2n+1}\text{.}\)
The series for arctangent was known to James Gregory (1638-1675) and it is sometimes referred to as “Gregory’s series.” Leibniz independently discovered \(\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots\) by examining the area of a circle. Though it gives us a means for approximating \(\pi\) to any desired accuracy, the series converges too slowly to be of any practical use. For example, if we add up the terms through \(n=1000\text{,}\) we get
\begin{equation*} 4\left(\sum_{n=0}^{1000}(-1)^n\frac{1}{2n+1}\right)\approx 3.142591654 \end{equation*}
which only approximates \(\pi\) to two decimal places.
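A few lines of Python make the slow convergence plain (the cutoffs are our choices):

```python
# Partial sums of 4 * sum (-1)^n / (2n+1):
for N in [10, 100, 1000, 10000]:
    print(N, 4 * sum((-1)**n / (2*n + 1) for n in range(N + 1)))
```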
Newton knew of these results and the general scheme of using series to compute areas under curves. These results motivated Newton to provide a series approximation for \(\pi\) as well, one which, hopefully, would converge faster. We will use modern terminology to streamline Newton’s ideas. First notice that \(\frac{\pi}{4}=\int_{x=0}^1\sqrt{1-x^2}\dx{ x}\text{,}\) as this integral gives the area of one quarter of the unit circle. The trick now is to find a series that represents \(\sqrt{1-x^2}\text{.}\)
To this end we start with the binomial theorem
\begin{equation*} \left(a+b\right)^N=\sum_{n=0}^N\binom{N}{n}a^{N-n}b^n\text{,} \end{equation*}
where
\begin{align*} \binom{N}{n}\amp =\frac{N!}{n!\left(N-n\right)!}\\ \amp =\frac{N\left(N-1\right)\left(N-2\right)\cdots\left(N-n+1\,\right)}{n!}\\ \amp =\frac{\prod_{j=0}^{n-1}\left(N-j\right)}{n!}\text{.} \end{align*}
Unfortunately, we now have a small problem with our notation which will be a source of confusion later if we don’t fix it. So we will pause to address this matter. We will come back to the binomial expansion afterward.
This last expression is becoming awkward in much the same way that an expression like
\begin{equation*} 1+\frac{1}{2}+\left(\frac{1}{2}\right)^2+\left(\frac{1}{2}\right)^3+\ldots+\left(\frac{1}{2}\right)^k \end{equation*}
is awkward. Just as this sum is less cumbersome when written as \(\sum_{n=0}^k\left(\frac{1}{2}\right)^n\) the product
\begin{equation*} N\left(N-1\right)\left(N-2\right)\cdots\left(N-n+1\,\right) \end{equation*}
is less cumbersome when we write it as \(\prod_{j=0}^{n-1}\left(N-j\right)\text{.}\)
A capital pi (\(\Pi\)) is used to denote a product in the same way that a capital sigma (\(\Sigma\)) is used to denote a sum. The most familiar example would be writing
\begin{equation*} n!=\prod_{j=1}^{n}j\text{.} \end{equation*}
Just as it is convenient to define \(0!=1\text{,}\) we will find it convenient to define \(\prod_{j=1}^{0}j=1\text{.}\) Similarly, the fact that \(\binom{N}{0}=1\) leads to the convention \(\prod_{j=0}^{-1}\left(N-j\right)=1\text{.}\) Strange as this may look, it is convenient and is consistent with the convention \(\sum_{j=0}^{-1}s_j=0\text{.}\)
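These conventions are not peculiar to mathematics texts; for instance, Python’s empty products and empty sums follow the same rules:

```python
import math

print(math.prod(range(1, 1)))   # empty product -> 1, matching prod_{j=1}^{0} j = 1
print(sum(range(1, 1)))         # empty sum -> 0, matching sum_{j=0}^{-1} s_j = 0
print(math.prod(range(1, 6)), math.factorial(5))   # both 120
```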
Returning to the binomial expansion and recalling our convention
\begin{equation*} \prod_{j=0}^{-1}\left(N-j\right)=1\text{,} \end{equation*}
we can write,
\begin{equation*} \left(1+x\right)^N=1+\sum_{n=1}^N\left(\frac{\prod_{j=0}^{n-1}\left(N-j\right)}{n!}\right)x^n = \sum_{n=0}^N\left(\frac{\prod_{j=0}^{n-1}\left(N-j\right)}{n!}\right)x^n\text{.} \end{equation*}
These two representations probably look the same at first. Take a moment and be sure you see where they differ.
There is an advantage to using this convention (especially when programming a product into a computer), but this is not a deep mathematical insight. It is just a notational convenience and we don’t want you to fret over it, so we will use both formulations (at least initially).
Notice that we can extend the above definition of \(\binom{N}{n}\) to values \(n>N\text{.}\) In this case, \(\prod_{j=0}^{n-1}\left(N-j\right)\) will equal \(0\text{,}\) as one of the factors in the product will be \(0\) (the one where \(j=N\)). This gives us \(\binom{N}{n}=0\) when \(n>N\text{,}\) and so
\begin{equation*} \left(1+x\right)^N=1+\sum_{n=1}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(N-j\right)}{n!}\text{ } \right)x^n= \sum_{n=0}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(N-j\right)}{n!}\text{ } \right)x^n \end{equation*}
holds true for any nonnegative integer \(N\text{.}\) Essentially, Newton asked whether the above equation could hold for values of \(N\) which are not nonnegative integers. For example, if the equation held true for \(N=\frac{1}{2}\text{,}\) we would obtain
\begin{equation*} \left(1+x\right)^{\frac{1}{2}}=1+\sum_{n=1}^\infty\left(\frac{ \prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}\right)x^n=\sum_{n=0}^\infty\left(\frac{ \prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}\right)x^n \end{equation*}
or
\begin{equation} \left(1+x\right)^{\frac{1}{2}}=1+\frac{1}{2}x+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{2!}x^2+\frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{3!}x^3+\cdots\text{.}\tag{15} \end{equation}
Notice that since \(1/2\) is not a nonnegative integer, the series no longer terminates. Although Newton did not prove that this series was correct (nor did we), he tested it by multiplying the series by itself. When he saw that squaring the series gave \(1+x+0\,x^2+0\,x^3+\cdots\text{,}\) he was convinced that the series was exactly equal to \(\sqrt{1+x}\text{.}\)
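Problem 3.2.11 below asks you to carry out this multiplication by hand; here is a computer-algebra version of Newton’s check, using Python’s sympy (the truncation order N=8 is our choice):

```python
import sympy as sp

x = sp.symbols('x')
N = 8
c = sp.Integer(1)   # coefficient for n = 0 (empty product divided by 0!)
S = sp.Integer(0)
for n in range(N + 1):
    S += c * x**n
    c = c * (sp.Rational(1, 2) - n) / (n + 1)   # next coefficient in the series

# Newton's check: the square of the series should be 1 + x, up to truncation.
print(sp.expand(S**2) - (1 + x))   # only terms of degree > N survive
```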

Problem 3.2.11.

Consider the series representation
\begin{align*} \left(1+x\right)^{\frac{1}{2}}\amp =1+\sum_{n=1}^\infty\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}x^n\\ \amp =\sum_{n=0}^\infty\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}x^n\text{.} \end{align*}
Multiply this series by itself and compute the coefficients for \(x^0,\,x^1,\,x^2,\,x^3,\,x^4\) in the resulting series.

Problem 3.2.12.

Let
\begin{equation*} S(x,M)=\sum_{n=0}^M\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j \right)}{n!}x^n\text{.} \end{equation*}
Use a computer algebra system to plot \(S(x,M)\) for \(M=5,\,10,\,15,\,95,\,100\) and compare these to the graph for \(\sqrt{1+x}\text{.}\) What seems to be happening? For what values of \(x\) does the series appear to converge to \(\sqrt{1+x}?\)
Convinced that he had the correct series, Newton used it to find a series representation of \(\int_{x=0}^1\sqrt{1-x^2} \dx{ x}\text{.}\)

Problem 3.2.13.

Use the series \(\displaystyle \left(1+x\right)^{\frac{1}{2}}=\sum_{n=0}^\infty\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}x^n\) to obtain the series
\begin{align*} \frac{\pi}{4}\amp =\int_{x=0}^1\sqrt{1-x^2} \dx{ x}\\ \amp =\sum_{n=0}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{n!}\text{ } \right)\left(\frac{\left(-1\right)^n}{2n+1}\right)\\ \amp =1-\frac{1}{6}-\frac{1}{40}-\frac{1}{112}-\frac{5}{1152}-\cdots\text{.} \end{align*}
Use a computer algebra system to sum the first 100 terms of this series and compare the answer to \(\frac{\pi}{4}\text{.}\)
Again, Newton had a series which could be verified (somewhat) computationally. This convinced him even further that he had the correct series.
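In the same spirit, the computation asked for in Problem 3.2.13 takes only a few lines of Python (done here in exact rational arithmetic; the running coefficient c is ours):

```python
from fractions import Fraction

s = Fraction(0)
c = Fraction(1)   # c holds prod_{j<n}(1/2 - j) / n!
for n in range(101):
    s += c * Fraction((-1)**n, 2*n + 1)
    c = c * (Fraction(1, 2) - n) / (n + 1)

print(float(4 * s))   # compare with pi = 3.14159265...
```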

Problem 3.2.14.

(a)

Show that
\begin{equation*} \int_{x=0}^{1/2}\sqrt{x-x^2}\dx{ x}=\sum_{n=0}^\infty\frac{(-1)^n\,\,\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{\sqrt{2\,}n!\left(2n+3\right)2^n} \end{equation*}
and use this to show that
\begin{equation*} \pi=16\left(\sum_{n=0}^\infty\frac{(-1)^n\,\,\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{\sqrt{2\,}n!\left(2n+3\right)2^n}\right)\text{.} \end{equation*}

(b)

We now have two series for calculating \(\pi\text{:}\) the one from part (a) and the one derived earlier, namely
\begin{equation*} \pi=4\left(\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}\right)\text{.} \end{equation*}
We will explore which one converges to \(\pi\) faster. With this in mind, define \(S1(N)=16\left(\sum_{n=0}^N\frac{(-1)^n\,\,\prod_{j=0}^{n-1}\left(\frac{1}{2}-j\right)}{\sqrt{2\,}n!\left(2n+3\right)2^n}\right)\) and \(S2(N)=4\left(\sum_{n=0}^N\frac{(-1)^n}{2n+1}\right)\text{.}\) Use a computer algebra system to compute \(S1(N)\) and \(S2(N)\) for \(N=5,10,15,20\text{.}\) Which one appears to converge to \(\pi\) faster?
In general the series representation
\begin{align*} \left(1+x\right)^\alpha \amp =\sum_{n=0}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(\alpha-j\right)}{n!}\text{ } \right)x^n\\ \amp =1+\alpha x+\frac{\alpha\left(\alpha-1\right)}{2!}x^2+\frac{\alpha\left(\alpha-1\right)\left(\alpha-2\right)}{3!}x^3+\cdots \end{align*}
is called the binomial series (or Newton’s binomial series). This series is correct when \(\alpha\) is a non-negative integer (after all, that is how we got the series). We can also see that it is correct when \(\alpha=-1\) as we obtain
\begin{align*} \left(1+x\right)^{-1}\amp =\sum_{n=0}^\infty\left(\frac{\prod_{j=0}^{n-1}\left(-1-j\right)}{n!}\text{ } \right)x^n\\ \amp =1+(-1)x+\frac{-1\left(-1-1\right)}{2!}x^2+\frac{-1\left(-1-1\right)\left(-1-2\right)}{3!}x^3+\cdots\\ \amp =1-x+x^2-x^3+\cdots \end{align*}
which can be obtained by substituting \(-x\) for \(x\) in the geometric series \(\frac{1}{1-x}=1+x+x^2+\cdots\text{.}\)
In fact, the binomial series is the correct series representation for all values of the exponent \(\alpha\) (though we haven’t proved this yet).
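Pending that proof, it is easy to test the claim numerically. The following Python sketch (the helper binomial_series and the default cutoff are ours) evaluates the series for several exponents \(\alpha\text{,}\) keeping \(x\) strictly between \(-1\) and \(1\text{,}\) and compares it with the built-in power:

```python
import math

def binomial_series(alpha, x, N=100):
    """Partial sum of (1 + x)^alpha = sum_n (prod_{j<n}(alpha - j)/n!) x^n."""
    s, c = 0.0, 1.0                 # c holds prod_{j<n}(alpha - j) / n!
    for n in range(N + 1):
        s += c * x**n
        c *= (alpha - n) / (n + 1)  # next coefficient
    return s

# Spot checks against the built-in power:
for alpha, x in [(0.5, 0.5), (-1.0, 0.3), (3, 0.7), (-2.5, -0.4)]:
    print(binomial_series(alpha, x), (1 + x)**alpha)
```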

Problem 3.2.15.

Let \(k\) be a positive integer. Find the power series, centered at zero, for \(f(x) = \left(1-x\right)^{-k}\) by

(a)

Differentiating the geometric series \(\left(k-1\right)\) times.

(b)

Applying the binomial series.

(c)

Compare these two results.
Figure 3.2.16. Leonhard Euler
Leonhard Euler was a master at exploiting power series. In 1735, the 28-year-old Euler won acclaim for solving what is now called the Basel problem: to find a closed form for \(\sum_{n=1}^\infty\frac{1}{n^2}\text{.}\) Other mathematicians knew that the series converged, but Euler was the first to find its exact value. The following problem essentially provides Euler’s solution.

Problem 3.2.17. The Basel Problem.

(a)

Show that the power series for \(\frac{\sin x}{x}\) is given by \(1-\frac{1}{3!}x^2+\frac{1}{5!}x^4-\cdots\text{.}\)

(b)

Use (a) to infer that the roots of \(1-\frac{1}{3!}x^2+\frac{1}{5!}x^4-\cdots\) are given by
\begin{equation*} x=\pm\pi,\,\pm 2\pi,\,\pm 3\pi,\,\ldots \end{equation*}

(c)

Suppose \(p(x)=a_0+a_1x+\cdots+a_nx^n\) is a polynomial with roots \(r_1,\,r_2,\,\ldots,r_n\text{.}\) Show that if \(a_0\neq 0\text{,}\) then all the roots are non-zero and
\begin{equation*} p(x)=a_0\left(1-\frac{x}{r_1}\right)\left(1-\frac{x}{r_2}\right)\cdots\left(1-\frac{x}{r_n}\right)\text{.} \end{equation*}

(d)

Assuming that the result in part (c) holds for an infinite polynomial (power series), deduce that
\begin{align*} 1-\frac{1}{3!}x^2+\frac{1}{5!}x^4-\cdots\amp =\left(1-\left(\frac{x}{\pi}\right)^2\right)\left(1-\left(\frac{x}{2\pi}\right)^2\right)\left(1-\left(\frac{x}{3\pi}\right)^2\right)\cdots \end{align*}

(e)

Expand this product to deduce
\begin{equation*} \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}\text{.} \end{equation*}
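Although nothing here constitutes a proof, Euler’s value is easy to test numerically. A minimal Python check:

```python
import math

# Partial sums of sum 1/n^2 creep toward pi^2/6:
for N in [10, 100, 1000, 10**6]:
    print(N, sum(1 / n**2 for n in range(1, N + 1)))

print(math.pi**2 / 6)   # 1.6449340668...
```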

Problem 3.2.18. Euler’s Formula.

(a)

Use the power series expansion of \(e^x\text{,}\) \(\sin x,\) and \(\cos x\) to derive Euler’s Formula:
\begin{equation*} e^{i\theta} = \cos\theta+i\sin\theta\text{.} \end{equation*}

(b)

Use Euler’s formula to derive the Addition/Subtraction formulas from Trigonometry:
\begin{equation*} \sin(\alpha\pm\beta) = \sin\alpha\cos\beta\pm\sin\beta\cos\alpha \end{equation*}
\begin{equation*} \cos(\alpha\pm\beta) = \cos\alpha\cos\beta\mp\sin\alpha\sin\beta \end{equation*}

(c)

Use Euler’s formula to show that
\begin{equation*} \sin 2\theta = 2\cos\theta\sin\theta \end{equation*}
\begin{equation*} \cos 2\theta =\cos^2\theta-\sin^2\theta \end{equation*}

(d)

Use Euler’s formula to show that
\begin{equation*} \sin 3\theta = 3\cos^2\theta\sin\theta-\sin^3\theta \end{equation*}
\begin{equation*} \cos 3\theta=\cos^3\theta-3\cos\theta\sin^2\theta \end{equation*}

(e)

Find formulas for \(\sin(n\theta)\) and \(\cos(n\theta)\) for any positive integer \(n\text{.}\)
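All of these identities are easy to test numerically before proving them. A short Python sketch using the built-in cmath module (the test angle theta = 0.7 and the exponent n = 3 are arbitrary choices):

```python
import cmath
import math

theta = 0.7
print(cmath.exp(1j * theta))                      # e^{i theta}
print(complex(math.cos(theta), math.sin(theta)))  # cos theta + i sin theta

# De Moivre: (cos t + i sin t)^n = cos(nt) + i sin(nt), which is the
# engine behind the multiple-angle formulas in parts (c), (d), and (e).
n = 3
z = complex(math.cos(theta), math.sin(theta)) ** n
print(z.real, math.cos(n * theta))   # real parts agree
print(z.imag, math.sin(n * theta))   # imaginary parts agree
```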