
Let $d(x) = f(x)g(x)$. Then:

$$d'(x) = \lim_{h \to 0} \frac{d(x+h) - d(x)}{h}$$

$$d'(x) = \lim_{h \to 0} \frac{f(x+h)g(x+h) - f(x)g(x)}{h}$$

$$d'(x) = \lim_{h \to 0} \frac{f(x+h)g(x+h) - f(x+h)g(x) + f(x+h)g(x) - f(x)g(x)}{h}$$

$$d'(x) = \lim_{h \to 0} \frac{f(x+h)(g(x+h) - g(x)) + g(x)(f(x+h) - f(x))}{h}$$

$$d'(x) = \lim_{h \to 0} \frac{f(x+h)(g(x+h) - g(x))}{h} + \lim_{h \to 0}\frac{g(x)(f(x+h) - f(x))}{h}$$

$$d'(x) = \lim_{h \to 0} f(x+h)\frac{g(x+h) - g(x)}{h} + \lim_{h \to 0}g(x)\frac{f(x+h) - f(x)}{h}$$

Since $f$ is differentiable it is also continuous, so $\lim_{h \to 0} f(x+h) = f(x)$, and therefore:

$$d'(x) = f(x)g'(x) + g(x)f'(x)$$
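
As a quick sanity check (with $f = \sin$ and $g(x) = x^2$ as illustrative choices of my own, not anything special), the final identity can be verified numerically against a difference quotient:

```python
import math

def numeric_derivative(func, x, h=1e-6):
    # symmetric difference quotient approximating func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

f = math.sin
g = lambda t: t * t
f_prime = math.cos          # known derivative of sin
g_prime = lambda t: 2 * t   # known derivative of t^2

x = 1.3
lhs = numeric_derivative(lambda t: f(t) * g(t), x)   # d'(x) computed directly
rhs = f(x) * g_prime(x) + g(x) * f_prime(x)          # product-rule formula
print(abs(lhs - rhs) < 1e-6)  # prints True
```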

It seems to be right but I only got here because I knew there was some slick "closed-form" ahead of time and I was just trying to figure out how to get there.

When this rule was invented (many years ago, historically speaking), how did they get to the end state without necessarily knowing how it was going to end?

It makes sense to ask "What is the derivative of the product of two functions?" but then it's very easy to get stuck at this step:

$$d'(x) = \lim_{h \to 0} \frac{f(x+h)g(x+h) - f(x)g(x)}{h}$$

Before realizing you can add/subtract the same quantity and do some clever rearranging. How did anyone figure this out? Was there evidence this was a good way to go or did someone just trial-and-error it until they discovered it?

user525966
  • You should remove the "is this proof valid?" part of your question because as it stands the proof is completely valid and the standard way it is proved in elementary textbooks. – Gaurang Tandon Feb 12 '18 at 05:12
  • @GaurangTandon Did that now – user525966 Feb 12 '18 at 05:22
  • "How did anyone figure this out?" - well, the "anyone" happened to be Isaac Newton and Gottfried Wilhelm Leibniz... –  Feb 12 '18 at 05:40
  • 1
    It's no secret to anyone who has done tons of math that $$ab-cd=ab-bc+bc-cd=b(a-c)+c(b-d)$$ – Gerry Myerson Feb 12 '18 at 06:11
  • Without going to some history books to look this up (which shouldn't be difficult, but I need to work on something else right now), my guess is that the product rule form arose by the following geometric and algebraic observations --- diagrams of an $L$ by $W$ rectangle and an $L+\Delta L$ by $W + \Delta W$ rectangle; approximating things like (big number 1 + small number 1)(big number 2 + small number 2), which arises in numerical work and also in algebraic work when "numbers" are "variables". – Dave L. Renfro Feb 12 '18 at 12:43
  • I recommend migrating this question to hsm.stackexchange, the History of Science and Mathematics site. – Michael Weiss Feb 12 '18 at 16:42
  • Isn't it true that one of those early guys briefly asserted that the derivative of product is product of derivatives? But of course soon found out it was wrong. – GEdgar Feb 13 '18 at 11:26
  • @GEdgar, this is correct and was discussed in detail in Margaret Baron's book. – Mikhail Katz Feb 13 '18 at 17:07

1 Answer


The proof you presented does look bulky, but if you use the notation of differentials, as Leibniz did, it becomes far more readable. Even Leibniz did not discover the "Leibniz rule" immediately: for a short time he toyed with the idea that the differential of a product might be the product of the differentials, but he quickly arrived at the correct result as follows: $(u+du)(v+dv)-uv=u\,dv+v\,du+(du\,dv)$, where the term in parentheses is negligible compared to the earlier terms because it is "second order" small, and what remains is $u\,dv+v\,du$. Leibniz was working with a generalized relation of equality "up to" a negligible term, and there was no logical fallacy in his reasoning; see this 2017 publication in the Journal of General Philosophy of Science for details.
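
The "second order" claim can be made concrete with a small numerical experiment (my own illustration, with $u(x)=x^3$ and $v(x)=x^2+1$ as arbitrary sample functions): as the increment $h$ shrinks, the cross term $du\,dv$ vanishes much faster than the surviving terms $u\,dv+v\,du$.

```python
# Arbitrary sample functions for illustration
u = lambda x: x**3
v = lambda x: x**2 + 1

x = 2.0
for h in (1e-1, 1e-2, 1e-3):
    du = u(x + h) - u(x)
    dv = v(x + h) - v(x)
    surviving = u(x) * dv + v(x) * du  # Leibniz's u dv + v du
    cross = du * dv                    # the discarded du dv term
    # the ratio shrinks roughly in proportion to h
    print(f"h={h:g}  cross/surviving={cross / surviving:.2e}")
```

Each tenfold reduction of $h$ cuts the ratio by roughly a factor of ten, which is exactly what "second order small" means here.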

Mikhail Katz