Applying mathematics to physical problems such as heat flow in a solid body drew much attention in the latter part of the 1700’s and the early part of the 1800’s. One of the people to attack the heat flow problem was
Jean Baptiste Joseph Fourier. Fourier submitted a manuscript on the subject, Sur la propagation de la chaleur (On the Propagation of Heat), to the Institut National des Sciences et des Arts in 1807. These ideas were subsequently published in La théorie analytique de la chaleur (The Analytic Theory of Heat, 1822).
To examine Fourier’s ideas, consider the example of a thin wire of length one, which is perfectly insulated and whose endpoints are held at a fixed temperature of zero. Given an initial temperature distribution in the wire, the problem is to monitor the temperature of the wire at any point \(x\) and at any time \(t\text{.}\) Specifically, if we let \(u(x,t)\) denote the temperature of the wire at point \(x\in[0,1]\) at time \(t\geq 0\text{,}\) then it can be shown that \(u\) must satisfy the one-dimensional heat equation \(\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\text{,}\) where \(\rho^2\) is a positive constant known as the thermal diffusivity. If the initial temperature distribution is given by the function \(f(x)\text{,}\) then the \(u\) we are seeking must satisfy all of the following:
\begin{equation*}
\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\text{,}\qquad u(0,t)=u(1,t)=0\ \ \forall\,t\geq 0\text{,}\qquad u(x,0)=f(x)\ \ \forall\,x\in[\,0,1]\text{.}
\end{equation*}
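Before following Fourier’s analytical attack, it can help to see the problem numerically. The following is a minimal sketch of an explicit finite-difference simulation of the heat equation; the grid sizes, the value of \(\rho^2\text{,}\) the number of steps, and the choice of initial distribution are all illustrative assumptions of ours (not part of Fourier’s development), and it assumes NumPy is available.

```python
import numpy as np

# Explicit finite-difference simulation of rho^2 u_xx = u_t on [0,1] with
# u(0,t) = u(1,t) = 0.  Grid sizes, rho^2, and the initial distribution
# are illustrative choices only.
rho2 = 1.0                   # thermal diffusivity rho^2
nx = 51                      # number of spatial grid points
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / rho2      # satisfies the stability bound dt <= dx^2/(2 rho^2)

x = np.linspace(0.0, 1.0, nx)
u = 0.5 - np.abs(x - 0.5)    # a sample initial temperature distribution f(x)

for _ in range(2000):
    # central second difference approximates u_xx; the endpoints stay at 0
    u[1:-1] += dt * rho2 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2

print(u.max())  # the peak temperature decays toward zero
```

The steady decay of the temperature toward zero seen here is exactly the physical behavior invoked in Problem 5.1.2 below.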
To solve this, Fourier employed what is now referred to as Fourier’s method of separation of variables. Specifically, Fourier looked for solutions of the form \(u(x,t)=X(x)T(t)\text{;}\) that is, solutions where the \(x\)-part can be separated from the \(t\)-part. Assuming that \(u\) has this form, we get \(\frac{\partial^2u}{\partial x^2}=X^{\prime\prime}T\) and \(\frac{\partial u}{\partial t}=X\,T^{\prime}\text{.}\) Substituting these into the differential equation \(\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\text{,}\) we obtain
\begin{equation*}
\rho^2X^{\prime\prime}T=X T^\prime\text{ or } \frac{X^{\prime\prime}}{X}=\frac{T^\prime}{\rho^2T}\text{.}
\end{equation*}
Since the left-hand side involves no \(t\)’s and the right-hand side involves no \(x\)’s, both sides must equal a constant \(k\text{.}\) Thus we have
\begin{equation*}
X^{\prime\prime}=k X\text{ and } T^\prime=\rho^2k T\text{.}
\end{equation*}
Problem 5.1.2.
Show that \(T=Ce^{\rho^2kt}\) satisfies the equation \(T^\prime=\rho^2k T\text{,}\) where \(C\) is an arbitrary constant. Use the physics of the problem to show that if \(u\) is not constantly zero, then \(k\lt 0\text{.}\)
Using the result from Problem 5.1.2 that \(k\lt 0\text{,}\) we will let \(k=-p^2\text{.}\)
Problem 5.1.3.
Show that \(X=A\sin\left(px\right)+B\cos\left(px\right)\) satisfies the equation \(X^{\prime\prime}=-p^2X\text{,}\) where \(A\) and \(B\) are arbitrary constants. Use the boundary conditions \(u(0,t)=u(1,t)=0\text{,}\) \(\forall\,t\geq 0\text{,}\) to show that \(B=0\) and \(A\sin p=0\text{.}\) Conclude that if \(u\) is not constantly zero, then \(p=n\pi\text{,}\) where \(n\) is any integer.
Problem 5.1.4.
Show that if \(u_1\) and \(u_2\) satisfy the equations \(\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\) and \(u(0,t)=u(1,t)=0\text{,}\) \(\forall\,t\geq 0\text{,}\) then \(u=A_1u_1+A_2u_2\) satisfies these as well, where \(A_1\) and \(A_2\) are arbitrary constants.
Putting all of these results together, Fourier surmised that the general solution to
\begin{equation*}
\rho^2\frac{\partial^2u}{\partial x^2}=\frac{\partial u}{\partial t}\text{,}\qquad u(0,t)=u(1,t)=0\ \ \forall\,t\geq 0
\end{equation*}
could be expressed as the series
\begin{equation*}
u(x,t)=\sum_{n=1}^\infty A_ne^{-n^2\pi^2\rho^2t}\sin\left(n\pi x\right)\text{,}
\end{equation*}
where the \(A_n\) are arbitrary constants.
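As a quick check on this surmise, one can verify symbolically that each individual mode satisfies both the differential equation and the boundary conditions. Here is a minimal sketch using SymPy (our choice of tool; any computer algebra system would do), with \(n=3\) as a sample mode.

```python
import sympy as sp

# Check one mode of the surmised solution: u_n = e^{-n^2 pi^2 rho^2 t} sin(n pi x)
# should satisfy rho^2 u_xx = u_t and vanish at x = 0 and x = 1.
x, t, rho = sp.symbols("x t rho", positive=True)
n = 3  # a sample mode; any integer works the same way
u = sp.exp(-n**2 * sp.pi**2 * rho**2 * t) * sp.sin(n * sp.pi * x)

print(sp.simplify(rho**2 * sp.diff(u, x, 2) - sp.diff(u, t)))  # prints 0
print(u.subs(x, 0), u.subs(x, 1))                              # both print 0
```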
All that is left is to have \(u\) satisfy the initial condition \(u(x,0)=f(x)\text{,}\) \(\forall\,x\in[\,0,1]\text{.}\) That is, we need to find coefficients \(A_n\) such that
\begin{equation*}
f(x)=u(x,0)=\sum_{n=1}^\infty A_n\sin\left(n\pi x\right)\text{.}
\end{equation*}
The idea of representing a function as a series of sine waves was proposed by Daniel Bernoulli in 1753 while examining the problem of modeling a vibrating string. Unfortunately for Bernoulli, he didn’t know how to compute the coefficients in such a series representation. What distinguished Fourier was that he developed a technique to compute these coefficients. The key is the result of the following problem.
Problem 5.1.5.
Let \(n\) and \(m\) be positive integers. Show
\begin{equation*}
\int_{x=0}^1\sin\left(n\pi x\right)\sin\left(m\pi x\right)\,dx=\begin{cases}0 & \text{if } n\neq m\\ \frac{1}{2} & \text{if } n=m\text{.}\end{cases}
\end{equation*}
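Before proving this, you can spot-check the claim numerically. The following sketch (assuming SciPy’s quadrature routine is available) tabulates the integral for small \(n\) and \(m\text{:}\) the diagonal entries come out \(\frac{1}{2}\) and the off-diagonal entries come out \(0\text{,}\) up to quadrature error.

```python
import numpy as np
from scipy.integrate import quad

# Numerical spot-check of the orthogonality relation in Problem 5.1.5:
# the integral should be 0 when n != m and 1/2 when n == m.
for n in range(1, 4):
    for m in range(1, 4):
        val, _ = quad(lambda x: np.sin(n * np.pi * x) * np.sin(m * np.pi * x),
                      0.0, 1.0)
        print(n, m, round(val, 10))
```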
Armed with the result from Problem 5.1.5, Fourier could compute the coefficients \(A_n\) in the series representation \(f(x)=\sum_{n=1}^\infty A_n \sin\left(n\pi x\right)\) in the following manner. Since we are trying to find \(A_n\) for a particular (albeit general) \(n\text{,}\) we will temporarily change the index in the summation from \(n\) to \(j\text{.}\) With this in mind, consider
\begin{align*}
\int_{x=0}^1f(x)\sin\left(n\pi x\right)\,dx &=\int_{x=0}^1\left(\sum_{j=1}^\infty A_j \sin\left(j\pi x\right)\right)\sin\left(n\pi x\right)\,dx\\
&=\sum_{j=1}^\infty A_j\int_{x=0}^1\sin\left(j\pi x\right)\sin\left(n\pi x\right)\,dx\\
&=A_n\cdot\frac{1}{2}\text{.}
\end{align*}
This leads to the formula \(A_n=2\int_{x=0}^1f(x)\sin\left(n\pi x\right)\,dx\text{.}\)
The above series \(f(x)=\sum_{n=1}^\infty A_n\sin\left(n\pi x\right)\) with \(A_n=2\int_{x=0}^1f(x)\sin\left(n\pi x\right)\,dx\) is called the Fourier (sine) series of \(\boldsymbol{f}\).
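As a sanity check on the coefficient formula, apply it to a function that is already a finite sine sum; the formula should hand back exactly the coefficients we started with. The particular finite sum below is our own illustrative choice, and the sketch assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

# Sanity check of the formula A_n = 2 * int_0^1 f(x) sin(n pi x) dx:
# applied to a function that is already a finite sine sum, it should
# recover exactly the coefficients we started with.
def f(x):
    return 3 * np.sin(2 * np.pi * x) + 0.5 * np.sin(5 * np.pi * x)

for n in range(1, 7):
    A_n, _ = quad(lambda x: 2 * f(x) * np.sin(n * np.pi * x), 0.0, 1.0)
    print(n, round(A_n, 8))   # expect A_2 = 3, A_5 = 0.5, all others 0
```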
Example 5.1.6.
Let’s apply this to the function \(f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}}\text{,}\) whose graph is shown below.
Computing the coefficients \(A_n=2\int_{x=0}^1f(x)\sin\left(n\pi x\right)\,dx\) for this \(f\) gives \(A_n=0\) for even \(n\) and \(A_{2k+1}=\frac{4(-1)^k}{\pi^2(2k+1)^2}\text{,}\) so the Fourier sine series of \(f\) is
\begin{equation*}
\frac{4}{\pi^2}\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^2}\sin\left((2k+1)\pi x\right)\text{.}
\end{equation*}
Let
\begin{equation*}
S_N(x)=\frac{4}{\pi^2}\sum_{k=0}^N\frac{(-1)^k}{(2k+1)^2}\sin\left((2k+1)\pi x\right)
\end{equation*}
be the \(N^{th}\) partial sum of the series. We will graph \(S_N\) for \(N=1,\,2,\,5,\,50\text{.}\)
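If no graphing tool is handy, the convergence can also be seen numerically. The sketch below (assuming NumPy) evaluates \(S_N\) on a fine grid and prints the largest deviation from \(f\) for the same values of \(N\text{.}\)

```python
import numpy as np

# Evaluate S_N on a fine grid and report the largest deviation from
# f(x) = 1/2 - |x - 1/2| for the same values of N used in the graphs.
x = np.linspace(0.0, 1.0, 1001)
f = 0.5 - np.abs(x - 0.5)

for N in (1, 2, 5, 50):
    k = np.arange(N + 1)
    coef = (4 / np.pi**2) * (-1.0)**k / (2 * k + 1)**2
    S_N = np.sum(coef[:, None] * np.sin(np.outer(2 * k + 1, np.pi * x)), axis=0)
    print(N, np.max(np.abs(S_N - f)))  # the deviation shrinks as N grows
```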
As you can see, it appears that as we add more terms to the partial sum \(S_N\text{,}\) it looks more and more like the original function \(f(x)=\frac{1}{2}-\abs{x-\frac{1}{2}}\text{.}\) This would lead us to believe that the series converges to the function and that
\begin{equation*}
f(x)=\frac{4}{\pi^2}\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^2}\sin\left((2k+1)\pi x\right)
\end{equation*}
is a valid representation of \(f\) as a Fourier series.
Recall that when we represented a function as a power series, we freely differentiated and integrated the series term by term as though it were a polynomial. Let’s do the same with this Fourier series. Differentiating term by term yields the Fourier cosine series
\begin{equation*}
\frac{4}{\pi}\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right)\text{.}
\end{equation*}
If we let \(C_N(x)=\frac{4}{\pi}\sum_{k=0}^N\frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right)\) be the \(N^{th}\) partial sum of this Fourier cosine series and plot \(C_N(x)\) for \(N=5\text{,}\) we obtain the graph below.
In fact, if we were to graph the full series \(\frac{4}{\pi}\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right)\text{,}\) we would obtain a step function: the series converges to \(1\) on \(\left[0,\frac{1}{2}\right)\text{,}\) to \(-1\) on \(\left(\frac{1}{2},1\right]\text{,}\) and to \(0\) at \(x=\frac{1}{2}\text{.}\)
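These limiting values can be spot-checked numerically. The sketch below evaluates a large partial sum of the cosine series at a few sample points; the number of terms is an arbitrary illustrative choice, and the values come out near \(1\) to the left of \(\frac{1}{2}\text{,}\) near \(-1\) to the right, and exactly \(0\) at \(x=\frac{1}{2}\text{.}\)

```python
import numpy as np

# Evaluate a large partial sum of (4/pi) sum (-1)^k/(2k+1) cos((2k+1) pi x)
# at sample points; 20000 terms is an arbitrary illustrative choice.
k = np.arange(20000)
coef = (4 / np.pi) * (-1.0)**k / (2 * k + 1)

for x in (0.1, 0.25, 0.49, 0.5, 0.51, 0.75, 0.9):
    print(x, round(float(np.sum(coef * np.cos((2 * k + 1) * np.pi * x))), 4))
```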
Notice that this agrees with the graph of \(f^\prime\text{,}\) except that \(f^\prime\) didn’t exist at \(x=\frac{1}{2}\text{,}\) while this series takes on the value \(0\) at \(x=\frac{1}{2}\text{.}\) Notice also that every partial sum of this series is continuous, since it is a finite combination of continuous cosine functions. This agrees with what you learned in calculus: the (finite) sum of continuous functions is always continuous. In the 1700’s, this was also assumed to be true for infinite series, because every time a power series converged to a function, that function happened to be continuous. This never failed for power series, so this example was a bit disconcerting: it is a sum of infinitely many continuous functions which is, in this case, discontinuous. Was it possible that there was some power series which converged to a function that was not continuous? Even if there wasn’t, what was the difference between power series and this Fourier series?
Even more disconcerting is what happens if we try differentiating the series
\begin{equation*}
\frac{4}{\pi}\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\cos\left((2k+1)\pi x\right)
\end{equation*}
term by term. Given the above graph of this series, its derivative should be constantly \(0\text{,}\) except at \(x=\frac{1}{2}\text{,}\) where the derivative wouldn’t exist. Using the old adage that the derivative of a sum is the sum of the derivatives, we differentiate this series term by term to obtain the series
\begin{equation*}
-4\sum_{k=0}^\infty(-1)^k\sin\left((2k+1)\pi x\right)\text{.}
\end{equation*}
Unfortunately, this differentiated series doesn’t converge to anything at \(x=\frac{1}{4}\text{,}\) let alone converge to zero: its terms don’t even approach \(0\) there. In this case, the old calculus rule that the derivative of a sum is the sum of the derivatives does not apply to this infinite sum, even though it always applies to a finite sum. As if the continuity issue wasn’t bad enough, this was even worse. Power series were routinely differentiated and integrated term-by-term. This was part of their appeal; they were treated like “infinite polynomials.” Either there is some power series lurking that refuses to behave nicely, or there is some property that power series have that not all Fourier series have.
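You can watch this failure happen. The sketch below (assuming NumPy) accumulates the partial sums of the differentiated series at \(x=\frac{1}{4}\text{;}\) they cycle without ever settling down.

```python
import numpy as np

# Partial sums of -4 * sum (-1)^k sin((2k+1) pi x) at x = 1/4: the terms
# do not tend to zero there, so the partial sums cycle without settling.
x = 0.25
s = 0.0
for k in range(12):
    s += -4 * (-1)**k * np.sin((2 * k + 1) * np.pi * x)
    print(k, round(s, 6))
```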
Could it be that everything we did in Chapter 4 was bogus?
Fortunately, the answer to that question is no. Power series are generally much better behaved than Fourier series. Whenever a power series converges, the function it converges to will be continuous. As long as one stays inside the interval of convergence, power series can be differentiated and integrated term-by-term. Power series have something going for them that your average Fourier series does not. (We need to develop the machinery to know what that something is.) None of this is any more obvious to us than it was to mathematicians at the beginning of the nineteenth century. What they did know was that relying on intuition was perilous and that rigorous formulations were needed to either justify or dismiss these intuitions. In some sense, the nineteenth century was the “morning after” the mathematical party that went on throughout the eighteenth century.
Problem 5.1.9.
Let \(n\) and \(m\) be positive integers. Show
\begin{equation*}
\int_{x=0}^1\cos\left(n\pi x\right)\cos\left(m\pi x\right)\,dx=\begin{cases}0 & \text{if } n\neq m\\ \frac{1}{2} & \text{if } n=m\text{.}\end{cases}
\end{equation*}
Let \(C(x,N)=\frac{-4}{\pi^2}\sum_{k=0}^N\frac{1}{\left(2k+1\right)^2}\cos \left((2k+1)\pi x\right)\) and plot \(C(x,N)\) for \(N=1,2,5,50\) and \(x\in[\,0,1]\text{.}\) How does this compare to the function \(f(x)=x-\frac{1}{2}\) on \([\,0,1]\text{?}\) What if you plot it for \(x\in[\,0,2]\text{?}\)
(a)
Differentiate the series \(\frac{-4}{\pi^2}\sum_{k=0}^\infty\frac{1}{\left(2k+1\right)^2}\cos\left((2k+1)\pi x\right)\) term by term and plot various partial sums of that series on \([\,0,1]\text{.}\) How does this compare to the derivative of \(f(x)=x-\frac{1}{2}\) on that interval?
(b)
Differentiate the series you obtained in part (a) and plot various partial sums of that series on \([\,0,1]\text{.}\) How does this compare to the second derivative of \(f(x)=x-\frac{1}{2}\) on that interval?
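For readers working through the plots above without a graphing tool, here is a minimal matplotlib sketch for the problem stem; the function name C and the grid are our own choices. It plots over \([\,0,2]\text{,}\) which also addresses the stem’s last question; restrict xs to \([\,0,1]\) for the first part, and replace the terms with their termwise derivatives for parts (a) and (b).

```python
import numpy as np
import matplotlib.pyplot as plt

# Partial sums C(x, N) of the cosine series from the problem stem.
def C(x, N):
    k = np.arange(N + 1)
    terms = np.cos(np.outer(2 * k + 1, np.pi * x)) / ((2 * k + 1)**2)[:, None]
    return (-4 / np.pi**2) * terms.sum(axis=0)

xs = np.linspace(0.0, 2.0, 1000)
for N in (1, 2, 5, 50):
    plt.plot(xs, C(xs, N), label=f"N={N}")
plt.plot(xs, xs - 0.5, "k--", label="x - 1/2")   # comparison function
plt.legend()
plt.show()
```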