
Section 8.3 Exponential Growth and Exponential Notation

It is now time to solve IVP (8.2) by finding an explicit formula for \(\exp(x)\text{.}\)
But how? Nothing really presents itself as a potential solution so what should we do? Since we don’t seem to have any better options, let’s see if we can guess a solution.

DIGRESSION: The Art of Guessing Effectively.

No, really. Guessing can be surprisingly effective.
You know this of course. You’ve been doing it all of your life, but in the past — especially in math classes — guessing was probably discouraged so you tend to deny doing it, possibly even to yourself. It’s OK. Guess anyway. We won’t tell anyone.
The fact is that guessing is a tried and true solution technique and we encourage you to use it regularly. Guessing is nothing more, or less, than relying on your native intuition.
But effective guessing is a skill. You need practice to do it well and, unfortunately, you have probably not had much chance to practice that skill in the context of mathematics. So we encourage you to guess often from now on. But be aware that making a guess is a process, not an event. If you guess wrong, which is most likely, then you have a new, related problem to think about: All of your intuition said this was a good guess. Why didn’t it work? The answer to that question almost always gives some insight into the original problem. Even a bad guess can be useful.
However a bad guess, indeed any guess, in isolation is a waste of time. You must also take the time to figure out why your bad guess doesn’t work.
But the real danger in guessing — the reason it is usually discouraged — is that when you do guess correctly, or nearly so, it is very tempting to just move on from there. Don’t do that.
Guessing correctly means that your intuition was very good. But intuition is unconscious. It is at least as important to understand where a good guess came from, and why it worked, as it is to understand why a bad guess didn’t work. When you guess well it is extremely tempting to just take your good guess and run with it, but this will invariably lead to confusion later. So, when you guess correctly take a few moments to think about the intuition that led you to that guess. Bring that intuition out of your unconscious mind and into your conscious mind. Otherwise your good guesses are as much a waste of your time as your bad ones.
END OF DIGRESSION
So let’s take a guess. There is no need to get really crazy about it though. We already know that the Taylor polynomial of \(\exp(x)\) approximates \(\exp(x)\text{.}\) Maybe we can find an \(n\) large enough that the approximation becomes exact.

Problem 8.3.1.

Show that none of the following is a solution of IVP (8.2).
The notation \(3!\) is read “three factorial” and means \(3\cdot2\cdot1.\) Similarly \(4!=4\cdot3\cdot2\cdot1\) and in general \(n!=n(n-1)(n-2)\cdots3\cdot2\cdot1\text{.}\)

(a)

\(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\)
Hint.
Observe that if \(y(x)\) is a solution then \(y-\dfdx{y}{x}=0\text{.}\)

(b)

\(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}\)

(c)

\(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{10}}{10!}\)

(d)

\(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots+\frac{x^{999999}}{999999!}+ \frac{x^{1000000}}{1000000!}\)
Notice that if
\begin{equation*} y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{999999}}{999999!} + \frac{x^{1000000}}{1000000!} \end{equation*}
then
\begin{equation*} \dfdx{y}{x} = 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{999999}}{999999!}, \end{equation*}
so that the difference between \(y\) and \(\dfdx{y}{x}\) is
\begin{equation*} y-\dfdx{y}{x} = \frac{x^{1000000}}{1000000!}. \end{equation*}
Since the difference between \(y\) and \(\dfdx{y}{x}\) is that stupid one-millionth term — which is very small — it is clear that we’ve almost got something here. Surely we can handle that last term somehow!
Sadly, no, we can’t. In fact, no polynomial will solve IVP (8.2).
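Before moving on, you can check the pattern we just saw at any fixed degree using exact rational arithmetic. Here is a quick sketch in Python (our own side computation, not part of the text's development); the degree \(n=6\) is an arbitrary choice:

```python
from fractions import Fraction
from math import factorial

n = 6  # arbitrary degree for illustration
# Coefficients of y = 1 + x + x^2/2! + ... + x^n/n!
coeffs = [Fraction(1, factorial(k)) for k in range(n + 1)]
# Term-by-term derivative: d/dx (x^k / k!) = x^(k-1) / (k-1)!
deriv = [k * c for k, c in enumerate(coeffs)][1:]
# y - dy/dx: every coefficient cancels except the top-degree term x^n/n!
diff = [c - d for c, d in zip(coeffs, deriv + [Fraction(0)])]
print(diff)
```

Every entry of `diff` is zero except the last, which is \(1/n!\): exactly the leftover top-degree term, no matter how large \(n\) is.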

Problem 8.3.2.

That is, show that there is no non-zero polynomial that solves the differential equation \(\dfdx{y}{x}=y\text{.}\)
Hint.
Consider the degree of the polynomial and the degree of its derivative.
Since no polynomial will work, a fair question is: Why have we taken you on this wild goose chase (https://www.phrases.org.uk/meanings/wild-goose-chase.html)?
In fact, we almost have a solution here. One million is a very large number, so one million factorial (\(1000000!\)) is inconceivably large. Thus for any value of \(x\) we’re likely to encounter, that one-millionth term, \(\frac{x^{1000000}}{1000000!},\) is so incredibly close to zero that it almost isn’t really there. And if it isn’t really there then
\begin{equation*} y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{999999}}{999999!} \textcolor{red}{\underbrace{+ \frac{x^{1000000}}{1000000!}}_{ \text{Not really here. Shhh!}}} \end{equation*}
and its derivative
\begin{equation*} \dfdx{y}{x}= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{999999}}{999999!} \end{equation*}
are pretty much the same thing.
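To get a feel for just how small that millionth term is, we can compute its order of magnitude without ever forming \(1000000!\) itself, using the log-gamma function. This is a side computation of ours, not part of the argument, and the sample value \(x=100\) is arbitrary:

```python
from math import lgamma, log, log10

x = 100.0        # arbitrary (large!) sample value of x
n = 1_000_000
# log10( x^n / n! ) computed via lgamma, since n! overflows any float
log10_term = n * log10(x) - lgamma(n + 1) / log(10)
print(log10_term)  # on the order of -3.5 million
```

Even with \(x=100\), the term is roughly \(10^{-3500000}\): not zero, but staggeringly close to it.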
So we’re . . . done, . . . right?
No, of course not. Although that last term really is practically zero it is not actually zero, no matter how small it is. As Newton said, “In mathematics the smallest of errors must be dealt with.”
So we haven’t solved our problem in the sense of having an explicit formula for \(\exp(x)\text{.}\) But we do know a great deal about it at this point. In particular we know that the Taylor polynomial \(1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots + \frac{x^{999999}}{999999!} + \frac{x^{1000000}}{1000000!}\) will be a very good approximation of \(\exp(x)\text{,}\) at least for values of \(x\) near zero.
But if there is no polynomial that solves equation (8.2) does that mean there is no solution at all? Certainly not. In fact, since that last term of the polynomial seems to be the stumbling block the solution is clear: All we need to do is not have a last term.
This is a startling idea but before we dismiss it, let’s take our own advice from Section 2.4. We’ll trust our intuition, but also examine it closely. What we’re saying is that the solution of equation (8.2) is:
\begin{equation} y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \cdots\tag{8.4} \end{equation}
where the dots at the end mean that the summation goes on forever. There is no last term.
You would expect a polynomial that doesn’t end to be called an infinite polynomial, but it is not. Such an expression is called an infinite series, or usually just a series. A series is not a polynomial: a polynomial is defined to have only finitely many terms, while a series has infinitely many. We use different names specifically so that the two are never confused.
An obvious question to ask is, “Does this infinite series even mean anything?” Or, equivalently, “What does it mean to add up infinitely many numbers?” These are excellent questions which will have to be addressed eventually. But for now we won’t let them trouble us. We’ll just assume that \(y=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots\) makes sense, in the same way we assumed that differentials make sense and defer those questions until later.

Problem 8.3.3.

Having set aside those deeper questions (for now), we can show that we have found the solution of our IVP.

(a)

Differentiate the series \(\displaystyle y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots \) term–by–term to show that \(\displaystyle \dfdx{y}{x}= y\text{.}\)

(b)

Show that \(y(0)=1.\)
Hint.
Yes, this really is as easy as it looks.
We would be remiss if we did not mention that we have led you up to the edge of an abyss here. Part (a) of Problem 8.3.3 is very suspect because it is not at all clear that the Sum Rule 14.2.2 can be extended to infinite sums in any meaningful manner. In fact, this is a very delicate question. Sometimes the extension is valid and sometimes it is not. This is another of the foundational questions (like “What is a differential?”) that took mathematicians nearly \(200\) years to resolve, and understanding that resolution requires the use of considerably more subtle tools than we have at this point. You will learn more about this in the next course. For now we will assert the prerogative of the teacher and simply tell you that in this case term-by-term differentiation still works.
The result of Problem 8.3.3 seems to show that we have the exact solution of IVP (8.2) in the form of the infinite series,
\begin{equation} \exp(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\ldots.\tag{8.5} \end{equation}
This actually is a correct solution of IVP (8.2), and eventually you will learn to work with infinite series solutions of IVPs directly. But, unfortunately, we don’t yet have the tools that allow us to do that. So we will have to find another way. What now?
The sketch below shows the graph of the polynomial
\begin{equation*} y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \cdots + \frac{x^{50}}{50!} \end{equation*}
which we believe, from our work in Section 7.4, should be a good approximation to \(\exp(x)\text{,}\) at least when \(x\) is near zero.
When we zoom in on the part of the graph which is near \(x=0\) we see that the following graph should be a reasonable approximation to the solution of IVP (8.2) in the interval shown.
In your earlier math courses you may have seen graphs that looked like this before. If not, then consider the following graphs of the exponential functions, \(y=2^x\) and \(y=3^x\) and notice that they are very similar to the graph of \(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \cdots + \frac{x^{50}}{50!}\text{.}\)
This is much clearer when we graph all three functions on the same axes as seen here:
As you can plainly see near \(x=0\) they are indeed very similar.
Since the polynomial \(y= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \cdots + \frac{x^{50}}{50!}\) is an approximation to the solution of IVP (8.2) it appears that either of \(y(x)=2^x\) or \(y(x)=3^x\) might be a viable candidate for the solution of our IVP. Is it possible we’ve had the solution in our hands all along? Let’s differentiate \(y(x)=2^x\) to see if it solves IVP (8.2).
If \(y(x)=2^x\) then \(y(0)=2^0=1\) so the initial condition is satisfied.
To check the differential equation it is tempting to assume that we can apply the Power Rule, giving us \(\dfdx{(2^x)}{x} = x2^{x-1}\text{.}\) But this can’t possibly be correct because when \(x\) is negative, \(x2^{x-1}\) is also negative. But the slope of \(y(x)=2^x\) is positive everywhere, as you can see from its graph.
In fact, none of our differentiation rules will give us the derivative of \(y=2^x.\) So we will have to go back to basics and find \(\dfdx{y}{x}\) from first principles, without using any of our Differentiation Rules.
Let \(y(x)=2^x\) and observe that \(\dx{y}\) is the (infinitesimal) difference between \(y(x+\dx{x})\) and \(y(x)\text{,}\) or
\begin{equation*} \dx{y} =y(x+\dx{x})-y(x) \end{equation*}
Since \(y(x+\dx{x})\) is equal to \(2^{x+\dx{x}}\) and \(y(x)=2^x\) we have,
\begin{equation*} \dx{y} =2^{x+\dx{x}}-2^x. \end{equation*}
Thus
\begin{equation} \dx{y} =2^x(2^{\dx{x}}-1).\tag{8.6} \end{equation}
We want to form \(\dfdx{y}{x}\) so we divide equation (8.6) by \(\dx{x}\text{,}\) giving
\begin{equation} \dfdx{y}{x}=\frac{2^x(2^{\dx{x}}-1)}{\dx{x}}.\tag{8.7} \end{equation}
The differential in the exponent is a bit concerning. We will have to be clever.
We have become comfortable thinking of \(\dx{x}\) as infinitely small, but it should be clear that if we take \(\dx{x}\) to be a very small, but finite number, say \(\dx{x}= 0.0000001,\) we can use equation (8.7) to approximate \(\dfdx{y}{x}\) as accurately as we wish.
Replacing \(\dx{x}\) with \(0.0000001\) on the right side of equation (8.7) we get
\begin{equation} \dfdx{y}{x}=\dfdx{(2^x)}{x}\approx 2^x\left[\frac{(2^{\dx{x}}-1)}{\dx{x}}\right]_{\dx{x}=0.0000001} \approx 2^x(0.7).\tag{8.8} \end{equation}
This isn’t bad for a first try! Do you see that we have almost satisfied the differential equation? We have \(\dfdx{(2^x)}{x}\approx (0.7)2^x\) when what we need is \(\dfdx{(2^x)}{x}=2^x\text{.}\) The constant factor is a bit too small. If it were \(1\) instead of \(0.7\) we’d have the solution of IVP (8.2). This is hopeful.
Performing the same calculation using \(y(x)=3^x\) we see that once again we almost have a solution:
\begin{equation} \dfdx{y}{x}=\dfdx{(3^x)}{x}\approx 3^x\left[\frac{(3^{\dx{x}}-1)}{\dx{x}}\right]_{\dx{x}=0.0000001} \approx 3^x(1.1).\tag{8.9} \end{equation}
The initial condition is still satisfied and once again the differential equation part of IVP (8.2) is almost satisfied. But this time the coefficient \(1.1\) is a bit too big.
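Both difference-quotient estimates are easy to reproduce numerically. Here is a sketch of ours using the same finite value \(0.0000001\) in place of the differential:

```python
dx = 1e-7
for base in (2, 3):
    # The bracketed constant in equations (8.8) and (8.9)
    c = (base**dx - 1) / dx
    print(base, c)
```

The base-2 constant comes out near \(0.69\) and the base-3 constant near \(1.10\), matching the rounded values \(0.7\) and \(1.1\) above.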
It stands to reason that there must be some number between \(2\) and \(3\) with the property that \(y=(\text{number})^x\) satisfies IVP (8.2). For historical reasons this number, whatever it is, has been named \(e\text{.}\) So the solution of IVP (8.2) is
\begin{equation*} \exp(x)=e^x. \end{equation*}
We originally named this function \(\exp(x)\text{.}\) That is still a valid name for it, just as \(\text{sqr}(x)\) is a valid name for the function \(\text{sqr}(x)=x^2\text{.}\) However, just as the formula \(y(x)=x^2\) better represents the way we usually think about the squaring function (“square the input variable”), the notation \(y(x)=e^x\) better represents the way we usually think about the natural exponential function.
You should think of the natural exponential as “this funny number \(e\) raised to the power of the input variable.” However, it is a curious fact that the modern definition of the natural exponential is not \(e^x.\) The modern definition is actually the infinite series we derived earlier:
\begin{equation*} e^x=\exp(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \cdots. \end{equation*}
To see why we define it this way, ask yourself what the expression \(e^{1/2}\) means.
That’s easy enough. Since \(x^{1/2}=\sqrt{x}\) clearly \(e^{1/2}\) is the square root of this funny number \(e\text{:}\)
\begin{equation*} e^{1/2}=\sqrt{e}. \end{equation*}
It would not necessarily be easy to compute the square root of \(e\) but we’re only asking about the meaning of our symbols here. Since \(e\) is a positive number we can take its square root and that is what \(\sqrt{e}\) means, even if we can’t compute it. In precisely the same way \(e^{5/7}=\sqrt[7]{e^5}\text{,}\) \(e^{2/3}=\sqrt[3]{e^2}\) and in general if \(a\) and \(b\) are positive integers, \(e^{a/b}=\sqrt[b]{e^a}\text{.}\)
But what could the expression \(2^e\) possibly mean? Since \(e\) is not an integer it doesn’t mean “\(e\) copies of \(2\) multiplied together.” Since \(e\) can’t be represented as a ratio of integers (because it is irrational after all) it doesn’t mean some root of \(2\) raised to a power the way that, say, \(2^{5/7}\) means \(\sqrt[7]{2^5}.\) This difficulty is compounded if we ask for the meaning of \(e^\pi\) or \(\pi^e\) since both \(e\) and \(\pi\) are irrational.
And this is not just a matter of not knowing the value of \(e\text{.}\) The difficulty is built into the real numbers. The same difficulty appeared in Section 4.3.5 when we tried to extend the Power Rule to irrational exponents. Recall Comment 4.3.36: This is weird.
Even if we can’t compute it, we need to find a way to give meaning to the expression \(e^x\) that works when \(x\) is irrational. Ideally, we’d like our interpretation to be consistent with our understanding that \(e^3= e\cdot e\cdot e.\) This is precisely why we define the natural exponential as an infinite series. It can be shown (though we will not show it here) that if \(m\) and \(n\) are integers with \(n\gt0\) then
\begin{equation} \exp\left(\frac{m}{n}\right)= \left(\sqrt[n]{e}\right)^m=1+\left(\frac{m}{n}\right)+\frac{\left(\frac{m}{n}\right)^2}{2!}+\frac{\left(\frac{m}{n}\right)^3}{3!}+\frac{\left(\frac{m}{n}\right)^4}{4!}+\ldots \tag{8.10} \end{equation}
If there is only one solution to IVP (8.2) (there is) then \(y(x) = e^x\) and \(y(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots\) must be the same solution. Defining \(\exp(x)\) as
\begin{equation} \exp(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\ldots,\tag{8.11} \end{equation}
rather than \(\exp(x)=e^x\) addresses all of these issues. In particular, if \(x\) is irrational, say \(x=\pi\) then \(e^\pi\) means
\begin{equation} e^\pi= 1+\pi+\frac{\pi^2}{2!}+\frac{\pi^3}{3!}+\frac{\pi^4}{4!}+\ldots.\tag{8.12} \end{equation}
Recall that we mentioned in Comment 4.3.36 that we would find two different ways to assign meaning to \(x^\alpha \) if \(\alpha{}\) is irrational. Equation (8.12) is an example of the first way to do this in the special case when \(x=e\text{.}\)
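A partial sum of equation (8.12) already pins down \(e^\pi\) quite well. The following is a quick sketch of ours; the cutoff of \(60\) terms is an arbitrary choice:

```python
from math import pi, factorial, exp

# Partial sum of e^pi = 1 + pi + pi^2/2! + pi^3/3! + ... (equation (8.12))
s = sum(pi**k / factorial(k) for k in range(60))
print(s)        # the series value
print(exp(pi))  # library value, for comparison
```

The two printed values agree to many decimal places, so the series really does assign a concrete number to the otherwise mysterious symbol \(e^\pi\text{.}\)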
Of course we are still free to think of the natural exponential as \(e^x\text{,}\) regardless of which definition we use, and we encourage you to do that. It can be very helpful.

Problem 8.3.5.

For this problem use the approximation
\begin{equation} e^x\approx 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \frac{x^4}{4!}+ \frac{x^5}{5!}+ \frac{x^6}{6!}+ \frac{x^7}{7!}+ \frac{x^8}{8!}\tag{8.13} \end{equation}

(a)

Use equation (8.13) to show that \(\inverse{e} \approx 0.36788\text{.}\) Compare this with the numerical value of \(\frac1e\approx\frac{1}{2.71828}\) you get from a calculator.
Hint.
In equation (8.13) take \(x=-1.\)

(b)

Use equation (8.13) to show that \(\sqrt{e} \approx 1.64872\text{.}\) Compare this with the numerical value of \(\sqrt{e}\approx \sqrt{2.71828}\) you get from a calculator.
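If you have computing technology handy, both parts of this problem can be sketched as follows. The helper name `exp8` is ours, not the text's:

```python
from math import factorial

def exp8(x):
    # Degree-8 partial sum of the exponential series, equation (8.13)
    return sum(x**k / factorial(k) for k in range(9))

print(round(exp8(-1), 5))   # part (a): e^(-1)
print(round(exp8(0.5), 5))  # part (b): sqrt(e)
```

Taking \(x=-1\) and \(x=1/2\) reproduces the stated approximations \(0.36788\) and \(1.64872\text{.}\)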
The function \(y(x)=e^x\) is called the natural exponential function and \(e\) is its base just as \(2\) is the base of the exponential function \(y(x)=2^x\) and \(3\) is the base of the exponential function \(y(x)=3^x\text{.}\)
Of course, we only know that \(2\lt e\lt3\) so far. How might we find a better approximation to \(e\text{?}\)

Problem 8.3.6.

Since we know that \(\exp(x)= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \cdots \) solves IVP (8.2), it should be clear that we can approximate \(e = e^1\) by computing the sum of, say, the first fifty or so terms of the series:
\begin{equation*} e\approx 1+1+\frac{1^2}{2!}+\frac{1^3}{3!}+ \cdots + \frac{1^{50}}{50!}. \end{equation*}
Compute this approximation using your favorite computing technology to confirm that \(e\approx 2.718\text{.}\) (If you don’t have any computing technology available just compute the sum of the first six terms.)
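In Python, for instance, the sum can be sketched as:

```python
from math import factorial

# Terms through 1^50/50!, matching the sum displayed above
e_approx = sum(1 / factorial(k) for k in range(51))
print(e_approx)  # ≈ 2.718...
```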
The upshot of Problem 8.3.6 is that if we replace \(2\) with \(e\approx 2.718\) in equation (8.8), or \(3\) with \(e\approx 2.718\) in equation (8.9), the resulting constant should be closer to \(1\) than \(0.7\) or \(1.1\text{,}\) respectively.
Checking this gives
\begin{equation*} \dfdx{y}{x}= e^x\left[\frac{2.718^{\dx{x}}-1}{\dx{x}}\right]_{\dx{x}=0.0000001}\approx e^x(1.00000004943). \end{equation*}
We did not get exactly \(1\) because
  1. We used an approximation of \(e\text{,}\) not \(e\) itself, and
  2. \(0.0000001\) is a very small, finite number, not a differential.
But the evidence is compelling that \(\exp(x)=e^x\) is the solution of the IVP (8.2). Henceforth then, we will reserve the letter \(e\) to designate the base of the natural exponential function.
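With the full floating-point value of \(e\text{,}\) the same difference quotient lands even closer to \(1\text{.}\) A quick sketch:

```python
from math import e

dx = 1e-7
c = (e**dx - 1) / dx  # the constant factor; should be very nearly 1
print(c)
```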
You might ask, “Why not just figure out what \(e\) actually is, and use that? Why use the letter \(e\text{?}\)” The answer is that \(e\) is an irrational number much like \(\pi\text{.}\) Among other things this means that its decimal expansion never ends, so we use \(e\) for the same reason we use \(\pi\text{.}\)
We now have a new differentiation rule to remember. This one is particularly easy to remember: The natural exponential function is its own derivative,
\begin{equation*} \dfdx{(e^x)}{x}=e^x. \end{equation*}
But just because we have a new rule, does not mean that our old differentiation rules are ignored. As always, all of the rules work together.

Example 8.3.7.

Find the derivative of \(y=e^{x}\sin(3x).\)
Using the Product Rule 4.3.9, we have
\begin{align*} \dx{\left(e^{x}\sin(3x)\right)}\amp =e^{x}\dx{(\sin(3x))} + \sin(3x)\dx{(e^{x})}\\ \amp =e^{x}\cos(3x)\dx{(3x)} + \sin(3x)e^{x}\dx{x}\\ \amp =e^{x}\cos(3x)3\dx{x} + \sin(3x)e^{x}\dx{x}\\ \dfdx{\left(e^{x}\sin(3x)\right)}{x}\amp =e^{x}\left[3\cos(3x) + \sin(3x)\right] \end{align*}
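As a sanity check on the Example, we can compare the formula we found against a central-difference approximation of the derivative. The test point \(x=0.7\) is an arbitrary choice of ours:

```python
from math import exp, sin, cos

def f(x):
    return exp(x) * sin(3 * x)

def fprime(x):
    # The derivative found in Example 8.3.7
    return exp(x) * (3 * cos(3 * x) + sin(3 * x))

x, h = 0.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
print(abs(numeric - fprime(x)))  # very small
```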

Drill 8.3.8.

Compute \(\dx{y}\) for each of the following, and use this to find the IVP that each one solves. (Use a substitution to make each one easier on your eyes.)
  1. \(\displaystyle y=e^{2x}\)
  2. \(\displaystyle y=e^{\pi x}\)
  3. \(\displaystyle y=e^{\sqrt{2}x}\)
  4. \(y=e^{rx}\) where \(r\) is an unknown constant.

Drill 8.3.9.

For each of the following find \(\dfdx{y}{x}\text{:}\)
  1. \(\displaystyle y=\sin(e^x)\)
  2. \(\displaystyle y=e^{\sin(x)}\)
  3. \(\displaystyle y=\cos(e^x)\)
  4. \(\displaystyle y=e^{\cos(x)}\)
  5. \(\displaystyle y=e^{5x}\sin(\pi x)\)
  6. \(\displaystyle y=e^{x}\cos(3x)\)
  7. \(\displaystyle y= \dfrac{x}{e^x}\)
  8. \(\displaystyle y=\frac{e^x-1}{e^x+1}\)
  9. \(\displaystyle y= \dfrac{e^x}{4x^2+3x-5}\)
  10. \(\displaystyle y=e^{x^2+\sin(x)}\)
  11. \(\displaystyle y= \frac{3\sqrt{x}+1}{e^x}\)
  12. \(\displaystyle y= \tan(e^x)\)

Drill 8.3.10.

Show that the equation of the line tangent to each function below at the given point is the given line.
  1. Function: \(y= \dfrac{x}{e^x}\text{,}\)
    Point: \((0,0)\)
    Tangent line: \(y = x\)
  2. Function: \(y= \dfrac{x}{e^x}\text{,}\)
    Point: \(\left(1,1/e\right)\)
    Tangent line: \(y = 1/e\)
  3. Function: \(y= \dfrac{e^x}{4x^2+3x-5}\text{,}\)
    Point: \(\left(0,-1/5\right)\)
    Tangent line: \(y = -\frac{8}{25}x-\frac{1}{5}\)
  4. Function: \(y= \dfrac{e^x}{4x^2+3x-5}\text{,}\)
    Point: \((1,e/2)\)
    Tangent line: \(y -\dfrac{e}{2}= -\dfrac{9e}{4}(x-1)\)

Problem 8.3.11.

(a)

Show that when we use Newton’s Method 7.2.6 to approximate the coordinates of the intersection point of the curves \(y = -x\) and \(y = e^x\text{,}\) we get the iteration formula
\begin{equation*} r_{n+1}=\frac{r_n-1}{1+e^{-r_n}} \end{equation*}

(b)

Starting with \(r_0=0\) compute \(r_1, r_2\text{,}\) and \(r_3\text{.}\) Compare your approximation with a solution obtained from whatever computing technology you prefer.
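If you'd like to check your hand computation, the iteration can be sketched in Python like so:

```python
from math import exp

r = 0.0  # r_0
for n in range(1, 4):
    # The iteration formula from part (a)
    r = (r - 1) / (1 + exp(-r))
    print(n, r)
```

The iterates settle down quickly toward the solution of \(e^x=-x\text{.}\)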

Problem 8.3.12. Find the Pattern.

(a)

Compute \(\dfdx{y}{t}\) for each of the following:
  1. \(\displaystyle y=te^t\)
  2. \(\displaystyle y=(t+1)e^t\)
  3. \(\displaystyle y=(t^2+t+1)e^t\)
  4. \(\displaystyle y=(t^3+t^2+t+1)e^t\)

(b)

From your work in part 8.3.12.a what do you expect \(\dfdx{y}{t}\) to be if
\begin{equation*} y=\left(t^n+t^{n-1}+\cdots+t+1\right)e^t. \end{equation*}
Show that your guess is correct or find the correct solution if you guessed wrong.

Drill 8.3.13.

For each of the following, assume that \(x=x(t)\text{,}\) \(y=y(t)\text{,}\) and \(z=z(t)\text{.}\) Find an equation relating \(\dfdx{x}{t}\text{,}\) \(\dfdx{y}{t}\text{,}\) and \(\dfdx{z}{t}\text{.}\)
  1. \(\displaystyle e^y=x^2+3z-2\)
  2. \(\displaystyle \sin(xy)=(e^z)^y\)
  3. \(\displaystyle \tan(x+z)=e^ye^x\)
  4. \(\displaystyle e^{xyz}=3\)
  5. \(\displaystyle x^2+y^2+z^2=2\)
  6. \(\displaystyle e^{x^2+y^2+z^2}=2\)

Problem 8.3.14. The Exponential Sum Property.

Since we’ve encouraged you to think of the expression \(e^t\) as the funny number \(e\) raised to the power \(t\text{,}\) it seems obvious that the rules of exponents apply so that \(e^{t+a} = e^t\cdot e^a\text{.}\)
But nothing we’ve said so far actually makes this a true statement. Definition 8.2.2 names the function and states one of its properties (that it is its own derivative), but that is all. This problem shows that we can use the definition to conclude that \(e^{t+a} = e^t\cdot e^a\) must also be true.

(a)

Show that \(y=e^{t+a}\) satisfies \(\dfdx{y}{t}=y\) for any constant \(a\text{.}\)

(b)

Show that \(y=Ae^t\) also satisfies \(\dfdx{y}{t}=y\) for any constant \(A\text{.}\)

(c)

Part 8.3.14.a and part 8.3.14.b imply that \(e^{t+a}=Ae^t\text{.}\) Use this knowledge to show that
\begin{equation*} e^{t+a}=e^t\cdot e^a. \end{equation*}
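While the problem asks for a deduction, the identity can at least be checked numerically with partial sums of the series. The sample values \(t=0.3\text{,}\) \(a=1.2\) and the 30-term cutoff are arbitrary choices of ours:

```python
from math import factorial

def S(x, n=30):
    # n-term partial sum of the exponential series
    return sum(x**k / factorial(k) for k in range(n))

t, a = 0.3, 1.2
print(S(t + a), S(t) * S(a))  # the two values agree to many decimal places
```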

Problem 8.3.15.

(a)

For \(y=e^x\text{,}\) \(y=\sin(x)\text{,}\) and \(y=\cos(x)\) compute
  1. \(\displaystyle \dfdx{y}{x}\)
  2. \(\displaystyle \dfdxn{y}{x}{2}\)
  3. \(\displaystyle \dfdxn{y}{x}{3}\)
  4. \(\displaystyle \dfdxn{y}{x}{4}\)

(b)

Does it seem significant that \(\dfdxn{y}{x}{4}=y\) for all three functions? What does your intuition say?