
In the following Wikipedia page about the chain rule, when dealing with the limit $\lim_{x \to a} \frac{f(g(x)) - f(g(a))}{g(x) - g(a)} \cdot \frac{g(x) - g(a)}{x - a}$ a new function $Q(y)$ is introduced to deal with the issue of division by zero when $g(x)=g(a)$. My question is: why is it necessary to introduce $Q(y)$? If the limit of each of the factors in the above product is defined in its own right, doesn't that mean that the limit of the product is also defined? After all, division by zero is an issue frequently circumvented when taking limits.

Edit: What if the original limit was defined as: $$\begin{align} \frac{d}{dx}\left[ f(g(x)) \right] &= \lim_{h \to 0} \frac{f(g(x+h))-f(g(x))}{h} \\ &= \lim_{h \to 0} \frac{f(g(x+h))-f(g(x))}{g(x+h) - g(x)} \cdot \frac{g(x+h)-g(x)}{h} \end{align}$$

In this case, does the division by zero arise from the fact that $g(x+h)$ may equal $g(x)$ an infinite number of times as $h \to 0$?

    As explained in the Wikipedia article, the expression would be undefined if $g(x) -g(a)$ becomes zero arbitrarily close to $a$. – Martin R Apr 12 '21 at 18:05
  • "If the limit of each of the factors in the above product is defined in its own right" - this is wrong: if $g(x) = g(a)$, then the limit of the first factor may not be defined. – Anon Apr 12 '21 at 18:07
  • Related: https://math.stackexchange.com/questions/2490533/why-is-the-correct-proof-of-the-chain-rule-correct-what-is-actually-happening – Hans Lundmark Apr 12 '21 at 19:06

2 Answers


The idea of evading division by $0$ in the use of limits is based on the plan of never actually hitting a value of dividing by $0$. It's like sneaking up to a hole in the ground and peering over the edge, instead of falling in.

The problem described on the page you linked to is this: when you are trying to sneak up on a particular hole, you might find yourself falling in a different hole as you approach. In fact there might be an endless series of holes in between you and the hole you actually wanted to sneak up on. Defining the $Q$ function was an attempt to get around that problem.
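A standard example (not mentioned in the answer, but it makes the "endless series of holes" concrete) is
$$g(x) = \begin{cases} x^2 \sin(1/x), & x \neq 0 \\ 0, & x = 0 \end{cases}$$
with $a = 0$. Here $g$ is differentiable at $0$ with $g'(0) = 0$, yet $g(1/(n\pi)) = 0 = g(0)$ for every positive integer $n$, so the factor $\frac{f(g(x)) - f(g(0))}{g(x) - g(0)}$ is undefined at points arbitrarily close to $0$: an infinite series of holes between you and the one you want.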

RobertTheTutor

The goal is to compute the limit$$\lim_{x\to a}\frac{f(g(x))-f(g(a))}{x-a},\tag1$$and this is done by replacing$$\frac{f(g(x))-f(g(a))}{x-a}\tag2$$with$$\frac{f(g(x))-f(g(a))}{g(x)-g(a)}\cdot\frac{g(x)-g(a)}{x-a}\tag3.$$But it's not as simple as that, because, for certain functions $g$, $(3)$ is undefined at infinitely many points near $a$ (namely, wherever $g(x)=g(a)$). At those points $(3)$ is not equal to $(2)$, so even if the limit of $(3)$ at $a$ exists, it does not follow that the limit of $(2)$ at $a$ exists too. So, something must be done about that, and the trick is to replace the first factor of $(3)$ with $f'(g(a))$ when $g(x)=g(a)$, and to prove that this modification does not prevent us from showing that $(1)$ is equal to $f'(g(a))\,g'(a)$.
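For reference, the function from the Wikipedia proof can be written out explicitly (paraphrasing the article):
$$Q(y) = \begin{cases} \dfrac{f(y)-f(g(a))}{y-g(a)}, & y \neq g(a) \\ f'(g(a)), & y = g(a). \end{cases}$$
Then for every $x \neq a$,
$$\frac{f(g(x))-f(g(a))}{x-a} = Q(g(x))\cdot\frac{g(x)-g(a)}{x-a},$$
an identity that holds even when $g(x)=g(a)$, since both sides are then $0$. Because $Q$ is continuous at $g(a)$ and $g$ is continuous at $a$, we get $Q(g(x)) \to f'(g(a))$ as $x \to a$, and the product of limits argument now goes through with no division-by-zero holes.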