Chapter 5 Joseph Fourier: The Man Who Broke Calculus

Section 5.1 Joseph Fourier and His Series

Applying mathematics to physical problems such as heat flow in a solid body drew much attention in the latter part of the 1700s and the early part of the 1800s. One of the people to attack the heat flow problem was Jean Baptiste Joseph Fourier (1768–1839).
Figure 5.1.1. Jean Baptiste Joseph Fourier
Fourier submitted a manuscript on the subject, Sur la propagation de la chaleur (On the Propagation of Heat), to the Institut National des Sciences et des Arts in 1807. These ideas were subsequently published in Théorie analytique de la chaleur (The Analytic Theory of Heat, 1822).
To examine Fourier’s ideas, consider the example of a thin wire of length one, which is perfectly insulated and whose endpoints are held at a fixed temperature of zero. Given an initial temperature distribution in the wire, the problem is to monitor the temperature of the wire at any point \(x\) and at any time \(t\text{.}\) Specifically, if we let \(u(x,t)\) denote the temperature of the wire at point \(x\in[0,1]\) at time \(t\geq 0\text{,}\) then it can be shown that \(u\) must satisfy the one-dimensional heat equation \(\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\text{,}\) where \(\rho^2\) is a positive constant known as the thermal diffusivity. If the initial temperature distribution is given by the function \(f(x)\text{,}\) then the \(u\) we are seeking must satisfy all of the following.
\begin{align} \rho^2\frac{\partial^2u}{\partial x^2}\amp{}=\frac{\partial u}{\partial t},\amp{}\tag{5.1.1}\\ u(0,t)\amp{} =u(1,t)=0, \amp{} \forall\ t\amp{}\geq 0,\notag\\ u(x,0)\amp{} =f(x), \amp{} \forall\ x\amp{}\in[\,0,1].\notag \end{align}
To do this, Fourier employed what is now referred to as the method of separation of variables. Specifically, Fourier looked for solutions of the form \(u(x,t)=X(x)T(t)\text{;}\) that is, solutions where the \(x\)–part can be separated from the \(t\)-part. Assuming that \(u\) has this form, we get \(\frac{\partial^2u}{\partial x^2}=X^{\prime\prime}T\) and \(\frac{\partial u}{\partial t}=X\,T^{\prime}\text{.}\) Substituting these into equation (5.1.1) we obtain
\begin{align*} \rho^2X^{\prime\prime}T=X T^\prime\amp{}\amp{}\text{or}\amp{}\amp{} \frac{X^{\prime\prime}}{X}=\frac{T^\prime}{\rho^2T}\text{.} \end{align*}
Since the left-hand side involves no \(t\)’s and the right-hand side involves no \(x\)’s, both sides must equal a constant \(k\text{.}\) Thus we have
\begin{align*} X^{\prime\prime}=k X\amp{}\amp{}\text{and}\amp{}\amp{} T^\prime=\rho^2k T. \end{align*}

Problem 5.1.2.

Show that \(T=Ce^{\rho^2kt}\) satisfies the equation \(T^\prime=\rho^2k T\text{,}\) where \(C\) is an arbitrary constant. Use the physics of the problem to show that if \(u\) is not constantly zero, then \(k\lt 0\text{.}\)
Hint.
Consider \(\limit{t}{\infty}{u(x,t)}\text{.}\)
Using the result from Problem 5.1.2 that \(k\lt 0\text{,}\) we will let \(k=-p^2\text{.}\)
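As a quick numerical sanity check (not a substitute for the proof asked for in Problem 5.1.2), the following Python sketch verifies by finite differences that \(T=Ce^{\rho^2kt}\) satisfies \(T^\prime=\rho^2kT\text{;}\) the particular values of \(C\text{,}\) \(\rho\text{,}\) and \(k\) are arbitrary choices of ours (with \(k\lt 0\) as the physics requires).

```python
import math

# Arbitrary (hypothetical) parameter choices; k is negative, per Problem 5.1.2.
rho, k, C = 1.3, -2.0, 0.7

def T(t):
    return C * math.exp(rho**2 * k * t)

def dT(t, h=1e-6):
    # Central-difference approximation of T'(t).
    return (T(t + h) - T(t - h)) / (2 * h)

# T' = rho^2 * k * T should hold at every t we try.
for t in [0.0, 0.5, 1.0]:
    assert abs(dT(t) - rho**2 * k * T(t)) < 1e-5
```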

Problem 5.1.3.

Show that \(X=A\sin\left(px\right)+B\cos\left(px\right)\) satisfies the equation \(X^{\prime\prime}=-p^2X\text{,}\) where \(A\) and \(B\) are arbitrary constants. Use the boundary conditions \(u(0,t)=u(1,t)=0\text{,}\) \(\forall\) \(t\geq 0\) to show that \(B=0\) and \(A\sin p=0\text{.}\) Conclude that if \(u\) is not constantly zero, then \(p=n\pi\text{,}\) where \(n\) is any integer.

Problem 5.1.4.

Show that if \(u_1\) and \(u_2\) satisfy the equations
\begin{equation*} \rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t} \end{equation*}
and
\begin{align*} u(0,t)=u(1,t)=0,\amp{}\amp{} \forall\ t\geq 0 \end{align*}
then \(u=A_1u_1+A_2u_2\) satisfies them as well, where \(A_1\) and \(A_2\) are arbitrary constants.
Putting all of these results together, Fourier surmised that the general solution to
\begin{align*} \rho^2\frac{\partial^2u}{\partial x^2}\amp{}=\frac{\partial u}{\partial t},\\ u(0,t)\amp{}=u(1,t)=0,\amp{}\amp{} \forall\ t\geq 0 \end{align*}
could be expressed as the series
\begin{equation*} u(x,t)=\sum_{n=1}^\infty A_ne^{-(\rho n\pi)^2t}\sin\left(n\pi x\right)\text{.} \end{equation*}
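As a numerical sanity check (not a proof), the sketch below verifies by finite differences that a single term of this series satisfies the heat equation and the boundary conditions; the values of \(\rho\text{,}\) \(n\text{,}\) \(x\text{,}\) and \(t\) are arbitrary choices of ours.

```python
import math

rho, n = 0.8, 3   # hypothetical diffusivity and mode number

def u(x, t):
    # A single term of Fourier's series solution.
    return math.exp(-(rho * n * math.pi)**2 * t) * math.sin(n * math.pi * x)

# Finite-difference approximations of the two partial derivatives.
h = 1e-4
x, t = 0.3, 0.1
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)

assert abs(rho**2 * u_xx - u_t) < 1e-3               # heat equation holds
assert abs(u(0, t)) < 1e-9 and abs(u(1, t)) < 1e-9   # boundary conditions hold
```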
All that is left is to have \(u\) satisfy the initial condition \(u(x,0)=f(x)\text{,}\) \(\forall\,x\in[\,0,1]\text{.}\) That is, we need to find coefficients \(A_n\text{,}\) such that
\begin{equation*} f(x)=u(x,0)=\sum_{n=1}^\infty A_n\sin\left(n\pi x\right)\text{.} \end{equation*}
The idea of representing a function as a series of sine waves was proposed by Daniel Bernoulli in 1753 while examining the problem of modeling a vibrating string. Unfortunately for Bernoulli, he didn’t know how to compute the coefficients in such a series representation. What distinguished Fourier was that he developed a technique to compute these coefficients. The key is the result of the following problem.

Problem 5.1.5.

Let \(n\) and \(m\) be positive integers. Show
\begin{equation*} \int_{x=0}^1\sin\left(n\pi x\right)\sin\left(m\pi x\right)\dx{ x}= \left\{\begin{matrix}0\amp \text{ if } n\neq m\\ \frac{1}{2}\amp \text{ if } n=m \end{matrix} \right.. {} \end{equation*}
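This orthogonality relation is easy to spot-check numerically. The sketch below uses a composite Simpson's rule (a helper of ours, not part of the text) to approximate the integral for a few sample values of \(n\) and \(m\text{.}\)

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(n, m):
    # Approximates the integral of sin(n pi x) sin(m pi x) over [0, 1].
    return simpson(lambda x: math.sin(n * math.pi * x) * math.sin(m * math.pi * x),
                   0.0, 1.0)

assert abs(inner(2, 5)) < 1e-8          # n != m: integral is 0
assert abs(inner(4, 4) - 0.5) < 1e-8    # n == m: integral is 1/2
```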
Armed with the result from Problem 5.1.5, Fourier could compute the coefficients \(A_n\) in the series representation \(f(x)=\sum_{n=1}^\infty A_n \sin\left(n\pi x\right)\) in the following manner. Since we are trying to find \(A_n\) for a particular (albeit general) \(n\text{,}\) we will temporarily change the index in the summation from \(n\) to \(j\text{.}\) With this in mind, consider
\begin{align*} \int_{x=0}^1f(x)\sin\left(n\pi x\right)\dx{ x} \amp =\int_{x=0}^1\left(\sum_{j=1}^\infty A_j\sin\left(j\pi x\right)\right)\sin\left(n\pi x\right)\dx{ x}\\ \amp =\sum_{j=1}^\infty A_j\int_{x=0}^1\sin\left(j\pi x\right)\sin\left(n\pi x\right)\dx{ x}\\ \amp =\frac{A_n}{2} \end{align*}
which leads to the formula \(A_n=2\int_{x=0}^1f(x)\sin\left(n\pi x\right)d x\text{.}\)
The series \(f(x)=\sum_{n=1}^\infty A_n\sin\left(n\pi x\right)\) with
\begin{equation} A_n=2\int_{x=0}^1f(x)\sin\left(n\pi x\right)\dx{ x}\tag{5.1.2} \end{equation}
is called the Fourier (sine) series of \(\boldsymbol{f}\).
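To see formula (5.1.2) in action, here is a small numerical experiment of ours: we pick the test function \(f(x)=x(1-x)\) (our choice, not the text's), compute its coefficients \(A_n\) by quadrature, and check that the partial sums of the resulting sine series reproduce \(f\text{.}\)

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: x * (1 - x)   # hypothetical test function with f(0) = f(1) = 0

def A(n):
    # Fourier sine coefficient from equation (5.1.2), computed numerically.
    return 2 * simpson(lambda x: f(x) * math.sin(n * math.pi * x), 0.0, 1.0)

def S(x, N=30):
    # Partial sum of the Fourier sine series of f.
    return sum(A(n) * math.sin(n * math.pi * x) for n in range(1, N + 1))

# The partial sums track f closely at several sample points.
for x in [0.25, 0.5, 0.75]:
    assert abs(S(x) - f(x)) < 1e-3
```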

Example 5.1.6.

Let’s apply this to the function \(f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}}\text{,}\) whose graph is seen below.

Problem 5.1.7.

Let \(n\) be a positive integer. Show that if
\begin{equation*} f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}} \end{equation*}
then
\begin{equation*} \int_{x=0}^1f(x)\sin\left(n\pi x\right)d x = \frac{2}{\left(n\pi\right)^2}\sin\left(\frac{n\pi}{2}\right) \end{equation*}
and show that the Fourier sine series of \(f\) is given by
\begin{align*} f(x)\amp{}=\sum_{n=1}^\infty\frac{4}{\left(n\pi\right)^2}\sin\left(\frac{n\pi}{2} \right)\sin\left(n\pi x\right)\\ \amp{}=\frac{4}{\pi^2}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)^2}\sin\left(\left(2k+1\right)\pi x\right).{} \end{align*}
To see evidence that this series really does, in fact, represent \(f\) on \([0,1]\text{,}\) let
\begin{equation*} S_N(x)=\frac{4}{\pi^2}\sum_{k=0}^N\frac{\left(-1\right)^k}{\left(2k+1\right)^2} \sin\left(\left(2k+1\right)\pi x\right) \end{equation*}
be the \(N\)th partial sum of the series. The sketches below display the graphs of \(S_N\) when \(N=1\text{,}\) \(N=2\text{,}\) \(N=5\text{,}\) and \(N=50\text{.}\)
Figure 5.1.8. Graph of \(S_1(x)\)
Figure 5.1.9. Graph of \(S_2(x)\)
Figure 5.1.10. Graph of \(S_5(x)\)
Figure 5.1.11. Graph of \(S_{50}(x)\)
As you can see, as we add more terms to \(S_N\text{,}\) its graph looks more and more like that of the original function \(f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}}\text{.}\) This would seem to be strong evidence that the series converges to the function and therefore
\begin{equation} f(x)=\frac{4}{\pi^2}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)^2}\sin\left(\left(2k+1\right)\pi x\right)\tag{5.1.3} \end{equation}
is a valid representation of \(f\) as a Fourier series.
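We can quantify this graphical evidence with a small numerical experiment of ours (still evidence, not proof): measure the largest gap between \(S_N\) and \(f\) on a grid of sample points and watch it shrink as \(N\) grows.

```python
import math

f = lambda x: 0.5 - abs(x - 0.5)

def S(x, N):
    # Nth partial sum of the Fourier sine series from Problem 5.1.7.
    return (4 / math.pi**2) * sum(
        (-1)**k / (2*k + 1)**2 * math.sin((2*k + 1) * math.pi * x)
        for k in range(N + 1))

# Largest deviation from f over a grid of sample points in [0, 1].
xs = [i / 200 for i in range(201)]
def err(N):
    return max(abs(S(x, N) - f(x)) for x in xs)

assert err(1) > err(5) > err(50)   # the fit improves as N grows
assert err(50) < 0.01              # and S_50 is already quite close
```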
But is it?
Recall that when we represented a function as a power series, we freely differentiated and integrated the series term by term as though it were a polynomial. Let’s do the same with this Fourier series.
To start, notice that the derivative of
\begin{equation*} f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}} \end{equation*}
is given by
\begin{equation*} f^\prime(x) = \begin{cases}1\amp \text{ if } 0\leq x\lt \frac{1}{2}\\ -1\amp \text{ if } \frac{1}{2}\lt x\leq 1 \end{cases} \text{.} \end{equation*}
This derivative does not exist at \(x=\frac{1}{2}\) and its graph is given by
Figure 5.1.12. Graph of \(f^\prime(x) = \begin{cases} 1\amp \text{ if } 0\leq x\lt \frac{1}{2}\\ -1\amp \text{ if } \frac{1}{2}\lt x\leq 1 \end{cases} \)
If we differentiate the Fourier series in equation (5.1.3) term–by–term, we obtain
\begin{equation} \frac{4}{\pi}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)} \cos\left(\left(2k+1\right)\pi x\right) \text{.}\tag{5.1.4} \end{equation}
Let \(C_N(x)=\frac{4}{\pi}\sum_{k=0}^N\frac{\left(-1\right)^k}{\left(2k+1\right)} \cos\left(\left(2k+1\right)\pi x\right)\) be the \(N\)th partial sum of this Fourier cosine series. The sketches below display the graphs of \(C_N\) when \(N=1\text{,}\) \(N=2\text{,}\) \(N=5\text{,}\) and \(N=50\text{.}\)
Figure 5.1.13. Graph of \(C_1(x)\)
Figure 5.1.14. Graph of \(C_2(x)\)
Figure 5.1.15. Graph of \(C_5(x)\)
Figure 5.1.16. Graph of \(C_{50}(x)\)
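The figures suggest that \(C_N\) settles toward \(+1\) to the left of \(x=\frac{1}{2}\) and \(-1\) to the right, while sitting at \(0\) exactly at the jump. A small numerical experiment of ours (with an arbitrary cutoff \(N=200\)) supports this:

```python
import math

def C(x, N):
    # Partial sum of the term-by-term differentiated series (5.1.4).
    return (4 / math.pi) * sum(
        (-1)**k / (2*k + 1) * math.cos((2*k + 1) * math.pi * x)
        for k in range(N + 1))

assert abs(C(0.25, 200) - 1) < 0.01   # left of the jump: close to +1
assert abs(C(0.75, 200) + 1) < 0.01   # right of the jump: close to -1
assert abs(C(0.5, 200)) < 1e-12       # at the jump every cosine term vanishes
```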
In fact, if we were to graph the series \(\frac{4}{\pi}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)}\cos\left(\left(2k+1\right)\pi x\right)\text{,}\) we would obtain the following graph:
Figure 5.1.17.
Notice that this agrees with the graph of \(f^\prime\text{,}\) except that \(f^\prime\) didn’t exist at \(x=\frac{1}{2}\text{,}\) while this series takes on the value \(0\) at \(x=\frac{1}{2}\text{.}\) Notice also that every partial sum of this series is continuous, since it is a finite combination of continuous cosine functions. This agrees with what you learned in Calculus: the (finite) sum of continuous functions is always continuous. In the 1700s it was assumed (falsely) that this could be extended to infinite series, because every time a power series converged to a function, that function happened to be continuous. This never failed for power series, so this example was a bit disconcerting, as it is a sum of infinitely many continuous functions which is, in this case, discontinuous. Was it possible that there was some power series which converged to a function which was not continuous? Even if there isn’t, what is the difference between power series and this Fourier series?
Even more disconcerting is what happens if we differentiate the series
\begin{equation*} \frac{4}{\pi}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)} \cos\left(\left(2k+1\right)\pi x\right) \end{equation*}
term–by–term as before. Given the above graph of this series, it appears that its derivative should be constantly 0, except at \(x=\frac{1}{2}\text{,}\) where the derivative doesn’t exist. But when we perform the differentiation term–by–term, we obtain the series
\begin{equation*} 4\sum_{k=0}^\infty\left(-1\right)^{k+1}\sin\left(\left(2k+1\right)\pi x\right)\text{.} \end{equation*}
If we graph the sum of the first forty terms of this series, we get the following:
We knew that there might be a problem at \(x=\frac{1}{2}\) but this is crazy! The series doesn’t seem to be converging to zero at all!
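A small numerical experiment of ours (anticipating Problem 5.1.18) shows just how badly the partial sums behave at the sample point \(x=\frac{1}{4}\text{:}\)

```python
import math

def D(x, N):
    # Partial sum of the twice-differentiated series.
    return 4 * sum((-1)**(k + 1) * math.sin((2*k + 1) * math.pi * x)
                   for k in range(N + 1))

# At x = 1/4 the partial sums cycle through -2*sqrt(2), 0, and 2*sqrt(2)
# forever instead of settling down.
vals = [D(0.25, N) for N in range(40)]
assert max(vals) - min(vals) > 1   # the partial sums never settle
```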

Problem 5.1.18.

Show that when \(x=\frac{1}{4}\text{,}\)
\begin{align*} 4\sum_{k=0}^\infty\left(-1\right)^{k+1}\sin\left(\left(2k+1\right)\pi x\right)\amp{}=4\left(-\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}- \frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}+\cdots\right). \end{align*}
Problem 5.1.18 shows that when we differentiate the series
\begin{equation*} \frac{4}{\pi}\sum_{k=0}^\infty\frac{\left(-1\right)^k}{\left(2k+1\right)} \cos\left(\left(2k+1\right)\pi x\right) \end{equation*}
term by term, this differentiated series doesn’t converge to anything at \(x=\frac{1}{4}\text{,}\) let alone converge to zero. In this case, the old Calculus rule that the derivative of a sum is the sum of the derivatives does not apply for this infinite sum, though it did apply before. As if the continuity issue wasn’t bad enough before, this was even worse. Power series were routinely differentiated and integrated term-by-term. This was part of their appeal. They were treated like “infinite polynomials.” Either there is some power series lurking that refuses to behave nicely, or there is some property that power series have that not all Fourier series have.
Could it be that everything we did in Chapter 3 and Chapter 4 was bogus?
Fortunately, the answer to that question is, “No.” Power series are generally much better behaved than Fourier series. Whenever a power series converges, the function it converges to will be continuous. And, as long as one stays inside the interval of convergence, power series can be differentiated and integrated term–by–term. Power series have something going for them that your average Fourier series does not, but none of this is any more obvious to us than it was to mathematicians at the beginning of the nineteenth century. What they did (and we do) know was that relying on intuition was perilous and that rigorous formulations were needed to either justify or dismiss these intuitions. In some sense, the nineteenth century was the “morning after” the mathematical party that went on throughout the eighteenth century.

Aside: Convergence: Power Series vs. Fourier Series.

Problem 5.1.19.

Let \(n\) and \(m\) be positive integers. Show
\begin{equation*} \int_{x=0}^1\cos\left(n\pi x\right)\cos\left(m\pi x\right)\dx{ x}=\left\{ \begin{matrix}0\amp \text{ if } n\neq m\\ \frac{1}{2}\amp \text{ if } n=m \end{matrix} \right.\text{.} \end{equation*}
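As with the sine version in Problem 5.1.5, this cosine orthogonality relation is easy to spot-check numerically; the Simpson-rule helper below is ours, not the text's.

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(n, m):
    # Approximates the integral of cos(n pi x) cos(m pi x) over [0, 1].
    return simpson(lambda x: math.cos(n * math.pi * x) * math.cos(m * math.pi * x),
                   0.0, 1.0)

assert abs(inner(3, 7)) < 1e-8          # n != m: integral is 0
assert abs(inner(5, 5) - 0.5) < 1e-8    # n == m: integral is 1/2
```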

Problem 5.1.20.

Use the result of Problem 5.1.19 to show that if
\begin{equation*} f(x)=\sum_{n=1}^\infty B_n\cos\left(n\pi x\right) \end{equation*}
on \([0,1]\text{,}\) then
\begin{equation} B_m=2\int_{x=0}^1f(x)\cos\left(m\pi x\right)\dx{ x}.{}\tag{5.1.5} \end{equation}

Problem 5.1.21.

Apply the result of Problem 5.1.20 to show that the Fourier cosine series of \(f(x)=x-\frac{1}{2}\) on \([0,1]\) is given by
\begin{equation*} \frac{-4}{\pi^2}\sum_{k=0}^\infty\frac{1}{\left(2k+1\right)^2}\cos \left((2k+1)\pi x\right)\text{.} \end{equation*}
Let
\begin{equation*} C(x,N)=\frac{-4}{\pi^2}\sum_{k=0}^N\frac{1}{\left(2k+1\right)^2}\cos \left((2k+1)\pi x\right) \end{equation*}
and plot \(C(x,N)\) for \(N=1,2,5,50\) and \(x\in[\,0,1]\text{.}\) How does this compare to the function \(f(x)=x-\frac{1}{2}\) on \([\,0,1]\text{?}\) What if you plot it for \(x\in[\,0,2]?\)
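As numerical evidence for the coefficient formula (5.1.5) applied to this \(f\) (evidence only; the derivation is Problem 5.1.21's job), the sketch below computes \(B_m\) for \(f(x)=x-\frac{1}{2}\) by quadrature and compares it with \(\frac{-4}{(m\pi)^2}\) for odd \(m\text{;}\) the Simpson-rule helper is ours.

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: x - 0.5

def B(m):
    # Fourier cosine coefficient from equation (5.1.5), computed numerically.
    return 2 * simpson(lambda x: f(x) * math.cos(m * math.pi * x), 0.0, 1.0)

# Odd coefficients should match -4/(m pi)^2; even ones should vanish.
for m in [1, 3, 5]:
    assert abs(B(m) + 4 / (m * math.pi)**2) < 1e-8
for m in [2, 4]:
    assert abs(B(m)) < 1e-8
```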

Problem 5.1.22.

(a)
Differentiate the series
\begin{equation*} \frac{-4}{\pi^2}\sum_{k=0}^\infty\frac{1}{\left(2k+1\right)^2}\cos \left((2k+1)\pi x\right) \end{equation*}
term by term and plot various partial sums of that series on \([\,0,1]\text{.}\) How does this compare to the derivative of \(f(x)=x-\frac{1}{2}\) on that interval?
(b)
Differentiate the series you obtained in part (a) and plot various partial sums of that series on \([\,0,1]\text{.}\) How does this compare to the second derivative of \(f(x)=x-\frac{1}{2}\) on that interval?