
Let's say I want to use Maclaurin series to get the series expansion $S(x)$ for $f(x) = e^x$ where $S(x) = c_0x^0 + c_1x^1 + c_2x^2 + c_3x^3 + ...$

$f(0) = S(0) = c_0 = e^0 = 1$ so that's fine.

$f'(0) = S'(0) = c_1 = e^0 = 1$ again, fine.

$f''(0) = S''(0) = 2c_2 = e^0 = 1$ so $c_2 = 1/2$, fine.

$f'''(0) = S'''(0) = 6c_3 = e^0 = 1$ so $c_3 = 1/6$, fine.

$f''''(0) = S''''(0) = 24c_4 = e^0 = 1$ so $c_4 = 1/24$, fine.

And so on, so we conclude that $c_k = \frac{1}{k!}$ so then:

$$S(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}$$
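For what it's worth, a quick numerical check (the helper name `maclaurin_exp` is just for illustration) suggests the partial sums really do home in on $e^x$, even away from the center $0$:

```python
import math

def maclaurin_exp(x, terms):
    """Partial sum of the Maclaurin series: sum of x^k / k! for k < terms."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# Compare a 30-term partial sum against e^x at a point far from the center 0.
approx = maclaurin_exp(3.0, 30)
exact = math.exp(3.0)
```

At $x=3$, thirty terms already agree with $e^3$ to well beyond nine decimal places, which is exactly the behavior the rest of this question is probing.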

But now what I don't understand is what allows us to go "Furthermore, $S(x) = f(x)$ for all $x$"! What makes it valid for us to plug in any number other than $x=0$? I see this equation used as a straight-up equivalent of $e^x$ even though we used $x=0$ and nothing else.

So I thought I would try it with a neighborhood of $1$ instead to see what happens:

$S(x) = c_0(x-1)^0 + c_1(x-1)^1 + c_2(x-1)^2 + c_3(x-1)^3 + ...$

$f(1) = S(1) = c_0 = e^1 = e$ so that's fine.

$f'(1) = S'(1) = c_1 = e^1 = e$ again, fine.

$f''(1) = S''(1) = 2c_2 = e^1 = e$ so $c_2 = e/2$, fine.

$f'''(1) = S'''(1) = 6c_3 = e^1 = e$ so $c_3 = e/6$, fine.

$f''''(1) = S''''(1) = 24c_4 = e^1 = e$ so $c_4 = e/24$, fine.

Looks pretty similar:

$$S(x) = \sum_{k=0}^{\infty} \frac{e(x-1)^k}{k!}$$

and, in fact, for a neighborhood of $x=a$:

$$S(x) = \sum_{k=0}^{\infty} \frac{e^a(x-a)^k}{k!}$$
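Numerically, the different-looking centers do seem to produce the same values (the helper `taylor_exp_at` is just illustrative):

```python
import math

def taylor_exp_at(x, a, terms):
    """Partial sum of the series centered at a: sum of e^a (x-a)^k / k!."""
    return sum(math.exp(a) * (x - a)**k / math.factorial(k) for k in range(terms))

# The expansions around 0 and around 1 should agree at any common point.
from_center_0 = taylor_exp_at(2.5, 0.0, 40)
from_center_1 = taylor_exp_at(2.5, 1.0, 40)
```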

Now we have $e$ inside the series expansion for $e^x$, which seems a little circular?

I guess my question is this: we created the function $S$ to have the same-valued $n$th derivatives as $f(x)$, but only at $x=0$, or $x=1$, or $x=a$, etc. Yet the series representations look different depending on which neighborhood I pick, and in some cases the expansion seems to include the very number we're trying to describe.

How do we know which is "right", or that it is even "right" to use for all $x$? I know the usual response is that the series is equivalent to the function when $f(x)$ is analytic, but that doesn't help me at all, because "analytic" is defined to mean that the Taylor series around $x_0$ converges to the function in a neighborhood of $x_0$ for all $x_0$ in the function's domain. But how do I know this?

Yet again feels circular... just working through these two examples makes me wonder, how do I know these series converge to the function itself? How do I know they're equivalent representations? Am I supposed to avoid the self-referencing?

I'm looking for some context behind how to make sense of these two Taylor series and how I am supposed to know that $e$ is analytic or that I can use $S(x)$ for any $x$ I want even if I only computed it for the neighborhood around $x=0$.

user525966
  • In one complex variable you will meet an astounding result about regions of convergence for Taylor series. But as far as I remember it was pretty mysterious up until complex analysis and meromorphic functions. – mathreadler Feb 26 '18 at 20:57
  • The main reason is that it's the very definition of $\exp z$, after it's been proved this power series has an infinite radius of convergence, by the ratio test. – Bernard Feb 26 '18 at 21:22

5 Answers


If $x\in\mathbb R$, you want to know why it is true that
$$e^x=\sum_{n=0}^\infty\frac{x^n}{n!}.\tag1$$
Well, that's because, by Taylor's theorem, you have
$$(\forall N\in\mathbb{N}):e^x-\sum_{n=0}^N\frac{x^n}{n!}=\frac{e^y}{(N+1)!}x^{N+1}$$
for some $y$ between $0$ and $x$; this comes from the Lagrange form of the remainder. Since $e^y\leqslant e^x$ when $x>0$ and $e^y\leqslant 1$ otherwise, we get
$$\left|e^x-\sum_{n=0}^N\frac{x^n}{n!}\right|\leqslant\begin{cases}\frac{e^x}{(N+1)!}|x|^{N+1}&\text{ if }x>0\\\frac1{(N+1)!}|x|^{N+1}&\text{ otherwise,}\end{cases}$$
and therefore
$$\lim_{N\to\infty}\left|e^x-\sum_{n=0}^N\frac{x^n}{n!}\right|=0.$$
In other words, $(1)$ holds.
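This remainder estimate can be sketched numerically (a rough check, using a hypothetical helper `partial_sum`): at each $N$ the true error stays below the Lagrange-style bound, and both collapse toward zero.

```python
import math

def partial_sum(x, N):
    """Taylor polynomial of degree N for e^x around 0."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

x = 2.0
within_bound = True
for N in range(1, 16):
    actual = abs(math.exp(x) - partial_sum(x, N))
    # Lagrange bound for x > 0: e^x * |x|^(N+1) / (N+1)!
    bound = math.exp(x) * abs(x)**(N + 1) / math.factorial(N + 1)
    if actual > bound + 1e-12:  # small slack for floating-point roundoff
        within_bound = False
final_error = abs(math.exp(x) - partial_sum(x, 20))
```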

  • Sorry I do not quite understand where $\frac{e^y}{(N+1)!}x^{N+1}$ comes from. Is this the $R(x)$ remainder function? How do you know how to write it? – user525966 Feb 26 '18 at 21:26
  • @user525966 It comes from the Lagrange form of the remainder of Taylor's theorem. I've edited my answer adding a link for that. Is it clear now? – José Carlos Santos Feb 26 '18 at 21:31
  • Kind of, just trying to understand why it's stated that way, it's like it's just one additional term above the original Taylor polynomial but there's also this extra term with it, and for some reason $(n+1)!$ – user525966 Feb 26 '18 at 21:56

Once you learn complex analysis, these things become clear.

Here is a way to argue that these series are convergent to $e^x$. Define $$S(x) = \sum_{k=0}^{\infty} \frac{e^a(x-a)^k}{k!}$$

The ratio (or root) test gives that the radius of convergence is $\infty$. Therefore, $S(x)$ is a function which is differentiable everywhere.

A simple computation shows that $S'(x)=S(x)$ and that $S(a)=e^a$. The next lemma solves the problem.

Lemma If $f(x)$ is a function which is differentiable everywhere, and $f'(x)=f(x)$ then $f(x)=Ce^x$ for some constant $C$.

Proof: Let $g(x)=\frac{f(x)}{e^x}$. Then $g'(x)=\frac{f'(x)e^x-f(x)e^x}{e^{2x}}=0$, and therefore $g(x)=C$ for some constant $C$.

Now, the Lemma gives that $S(x)=Ce^x$ for some $C$ and $S(a)=e^a$ gives $C=1$.

P.S. A more general way of proving this type of result is via the Lagrange estimate of the remainder. This answer exploits the special properties of $e^x$.
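A rough numerical check of the lemma's conclusion (the helper `S` below truncates the series; the name and defaults are mine): the ratio $S(x)/e^x$ stays pinned at $1$ across different $x$, consistent with $C=1$.

```python
import math

def S(x, a=1.0, terms=50):
    """Truncation of sum of e^a (x-a)^k / k! -- the series centered at a."""
    return sum(math.exp(a) * (x - a)**k / math.factorial(k) for k in range(terms))

# The lemma forces S(x) = C e^x, and S(1) = e pins down C = 1.
ratios = [S(x) / math.exp(x) for x in (-2.0, 0.0, 1.0, 3.0)]
```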

N. S.

It can be proved via the Lagrange remainder, which represents the difference between the function and its Taylor polynomial:

$$R_n(x)=f(x)-P_n(x)$$

Showing that $R_n(x)\to 0$ as $n\to\infty$ for each fixed $x$ establishes the identity.

See here for a related OP.

user

I think you need to get familiar with the key definitions and theorems of calculus. The fact that a function can be represented by its Taylor series under certain circumstances is covered by Taylor's theorem, one form of which is the following:

Taylor's Theorem: Let $n, p$ be positive integers such that $1\leq p\leq n$ and $a, h$ be real numbers with $h>0$. If $f:[a, a+h] \to\mathbb{R} $ is a function such that its $n$th derivative $f^{(n)}$ exists on $(a, a+h) $ and its $(n-1)$th derivative $f^{(n-1)}$ is continuous on $[a, a+h] $ then there is some number $\theta \in(0, 1) $ such that $$f(a+h) =f(a) +hf'(a) +\frac{h^2}{2!}f''(a)+\dots+\frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a)+R_{n}\tag{1} $$ where $$R_{n} = \frac{(1 - \theta)^{n - p}h^{n}f^{(n)}(a + \theta h)}{p(n - 1)!}\tag{2}$$ The theorem holds trivially if $h=0$, and if $h$ is negative we just need to consider intervals of the form $[a+h, a] $.

Next from your question I guess that the definition of $e^{x} $ being used is that $e^x$ is its own derivative and takes value $1$ at $0$. Without going into the details of this definition let's understand that the definition implies that it possesses derivatives of all orders and every derivative is $e^x$.

Now it is time to apply Taylor's theorem to $f(x) =e^x$. We choose $a=0$, $p=n$, replace the symbol $h$ by $x$, and note that $f^{(n)} (0)=1$ for all $n$. We then obtain $$e^x=f(x) =1+x+\frac{x^2}{2!}+\dots+\frac{x^{n-1}}{(n-1)!}+R_n\tag{3}$$ where $R_n=x^ne^{\theta x} /n! $ for some $\theta\in(0,1)$. It is easy to show that for any fixed $x$ we have $\lim_{n\to\infty} R_n=0$, and thus taking the limit as $n\to\infty$ in equation $(3)$ we get the identity $$e^x=1+x+\frac{x^2}{2!}+\dots=\sum_{n=0}^{\infty} \frac{x^n} {n!} \tag{4}$$ To summarize: one can figure out the Taylor series of a function by calculating its derivatives at a certain point, but whether the function is represented by its Taylor series depends crucially on the behavior of the remainder $R_n$ as $n\to\infty$.
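The claim that $R_n\to 0$ rests on the fact that $x^n/n!\to 0$ for every fixed $x$: each successive term is the previous one times $x/n$, which is eventually far below $1$. A small numerical sketch of this (the sample values are chosen only for illustration):

```python
import math

# For fixed x the terms x^n / n! shrink to 0, because each step
# multiplies the previous term by x/n, which eventually drops below 1.
x = 10.0
sampled_terms = [x**n / math.factorial(n) for n in range(0, 60, 10)]
tail_term = x**60 / math.factorial(60)
```

Even though the terms first grow (here $10^{10}/10!$ is in the thousands), the factorial wins in the end.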

The choice $a=0$ is arbitrary, and using a generic $a$ we get the identity (as before) $$e^x=e^a\sum_{n=0}^{\infty}\frac{(x-a)^n}{n!}\tag{5}$$ Using $(4)$ we then get the fundamental identity $$e^x=e^ae^{x-a} $$ By the way, this can also be established directly from the definition of $e^x$ (being its own derivative and taking the value $1$ at $0$). So everything works out fine.

You should also study the proof of Taylor’s theorem to understand the above arguments completely. The statement of a theorem gives us a certain guarantee but it is the proof of the theorem which makes us believe that the guarantee provided is genuine.


There is another approach to prove identity $(4)$ for all real values of $x$ which just uses the definition of $e^x$ and avoids the slightly complicated Taylor's theorem. We first show that the solution to the differential equation $$f'(x) =f(x), f(0)=1\tag{6}$$ is unique and then show that $$S(x) =\sum_{n=0}^{\infty}\frac {x^n} {n!}\tag{7} $$ is a solution to $(6)$. The proof requires some not so difficult results from the theory of infinite series.

First the uniqueness of the solution to $(6)$ is established. Suppose we have two solutions $f, g$; then $F(x) =f(x) - g(x) $ satisfies $F'(x) =F(x), F(0)=0$. We show that $F(x) =0$ for all $x$. If for some $a$ we have $F(a) \neq 0$ then we consider $$\phi(x) =F(a+x) F(a-x) $$ and clearly $$\phi'(x) =F'(a+x) F(a-x) - F(a+x) F'(a-x) $$ which is equal to $$F(a+x) F(a-x) - F(a+x) F(a-x) =0$$ and therefore $\phi(x) $ is constant, so that $\phi(x) =\phi(0)=F(a)F(a)>0$. But $\phi(a) =F(2a)F(0)=0$ and we get a contradiction. Thus $F(x) =0$ for all $x$ and equation $(6)$ does not possess two distinct solutions.

Now we need to show that $S(x)$ defined in $(7)$ satisfies $(6)$. First we can use the ratio test to conclude that the series in $(7)$ is convergent for all $x$, so the definition $(7)$ makes sense. Clearly $S(0)=1$ and our real challenge is to show that $S'(x) =S(x) $. We can use the Cauchy product formula to multiply two series and conclude (via the binomial theorem) that $$S(a) S(b) =S(a+b) \tag{8}$$ for all values of $a, b$. Next we establish the fundamental limit $$\lim_{x\to 0}\frac {S(x) - 1}{x}=1\tag{9}$$ We have for $0<|x|<1$ $$\left|\frac{S(x) - 1}{x}-1\right|=\left|\sum_{n=2}^{\infty}\frac{x^{n-1}}{n!}\right|$$ and this does not exceed $$\frac{|x|} {2!}+\frac{|x|^2}{3!}+\dots$$ The above expression clearly does not exceed $$\frac {|x|} {2}+\frac{|x|^2}{4}+\frac{|x|^3}{8}+\dots=\frac{|x|}{2-|x|}$$ and this tends to $0$ as $x\to 0$, so that $(9)$ is established.

We now have $$S'(x) =\lim_{h\to 0}\frac{S(x+h)-S(x)}{h}=\lim_{h\to 0}\frac {S(x) S(h) - S(x)} {h} \\ =\lim_{h\to 0}S(x)\cdot\frac{S(h)-1}{h}=S(x)$$ using equations $(8),(9)$. Thus we have proved that $S(x) $ is the unique solution to $(6)$ and by definition of $e^x$ it equals $e^x$.
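The two ingredients $(8)$ and $(9)$ can be spot-checked numerically on truncated series (the helper `S` here is a finite partial sum, not the full series, and the sample points are arbitrary):

```python
import math

def S(x, terms=60):
    """Truncation of sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# Functional equation (8): S(a) S(b) = S(a + b)
lhs = S(1.3) * S(2.2)
rhs = S(3.5)

# S'(x) = S(x), probed with a forward difference quotient
h = 1e-6
x0 = 0.7
diff_quotient = (S(x0 + h) - S(x0)) / h
```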


Just to flesh out the previous comments related to complex analysis:

Define a function $f : \mathbb{C} \to \mathbb{C}$ by $f(x + iy) = e^x \cos y + i e^x \sin y$. Then $f$ satisfies the Cauchy-Riemann equations, and the partial derivatives of $\operatorname{Re} f(x+iy)$ and $\operatorname{Im} f(x+iy)$ with respect to $x$ and $y$ are continuous; it follows that $f$ is complex differentiable at every point of $\mathbb{C}$. From there, a basic but striking result of complex analysis in one variable states:

Let $f : U \to \mathbb{C}$ be differentiable at every point of an open subset $U \subseteq \mathbb{C}$. Then for every $z_0 \in U$ and $\epsilon > 0$ such that $B_\epsilon(z_0) \subseteq U$, the Taylor series of $f$ with center $z_0$ converges to $f(z)$ for each $z \in B_\epsilon(z_0)$. (Here $B_\epsilon(z_0)$ denotes the open ball with center $z_0$ and radius $\epsilon$.)

Now, applying this to our $f$ implies that the Maclaurin series for $f$ converges to $f(z)$ at every point $z \in \mathbb{C}$. Then, restricting this to the real line gives that the Maclaurin series for $f$ converges to $f(x) = e^x$ for all $x \in \mathbb{R}$.

(As to how you'd come up with the equation for $f$ in the first place: you can guess a formula for $e^{iy}$ from substituting $iy$ into the candidate Maclaurin series for $e^x$, and then compare to the candidate Maclaurin series for $\cos x$ and $\sin x$. Then you might guess that an analytic continuation of $e^x$ to the complex plane should satisfy the functional equation $e^{z+w} = e^z \cdot e^w$ which the real function satisfies, so $e^{x+iy} = e^x e^{iy}$.)
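At a sample complex point, a truncated series (the helper `S` below uses a running term to avoid huge factorials; names are mine) can be compared against $e^x\cos y + i e^x\sin y$:

```python
import math

def S(z, terms=60):
    """Truncated exponential series at a complex point, via a running term."""
    total = 0 + 0j
    term = 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # next term is previous times z/(n+1)
    return total

# Compare against f(x + iy) = e^x cos y + i e^x sin y at a sample point.
z = 1.0 + 2.0j
expected = math.exp(1.0) * complex(math.cos(2.0), math.sin(2.0))
value = S(z)
```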