3

I have to find the nth derivative of $$ f(x)=\frac{1}{(x-a)(x+a)} $$ where $a \neq 0$. So first guess I have to represent it in a more differentiable form.

The solution goes like this: $$\frac{1}{(x-a)(x+a)}=\frac{A}{(x-a)}+\frac{B}{(x+a)}, \qquad 1=A(x+a)+B(x-a)$$

Let $x=a$ then $A=\frac{1}{2a}$, and when $x=-a$, $B=\frac{-1}{2a}$.

Now, I get the first part, but how can we let $x$ equal $a$ or $-a$? Are we allowed to do such a thing? What's the logic behind it?

Thanks in advance!

Nikola
  • 1,558

5 Answers

5

I like @GNU Supporter's point in the comments. I would go like this:

$$\frac{1}{(x-a)(x+a)}=\frac{A}{x-a}+\frac{B}{x+a}$$

$$\frac{1}{(x-a)(x+a)}=\frac{A(x+a)}{(x-a)(x+a)}+\frac{B(x-a)}{(x-a)(x+a)}$$

Now multiply both sides by $(x-a)(x+a)$, under the condition that $x$ is neither $a$ nor $-a$:

$$1 = A(x+a) + B(x-a)$$

Since this holds for all admissible $x$, comparing coefficients gives $(A+B)x = 0$ and $1=(A-B)a$.

So $A=-B$ and $1=2Aa$, hence

$$A=\frac{1}{2a} \quad\text{and}\quad B=-\frac{1}{2a}.$$
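With the decomposition $f(x)=\frac{1}{2a}\left(\frac{1}{x-a}-\frac{1}{x+a}\right)$ in hand, the $n$th derivative of the original problem follows from $\frac{d^n}{dx^n}\frac{1}{x-c}=\frac{(-1)^n\, n!}{(x-c)^{n+1}}$. Here is a quick sympy sketch checking that closed form against direct differentiation (the symbol and function names are my own, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', nonzero=True)
f = 1 / ((x - a) * (x + a))

def nth_derivative_closed_form(n):
    # f^(n)(x) = (-1)^n n!/(2a) * (1/(x-a)^(n+1) - 1/(x+a)^(n+1)),
    # obtained by differentiating each partial fraction n times
    return (-1)**n * sp.factorial(n) / (2*a) * (1/(x - a)**(n + 1) - 1/(x + a)**(n + 1))

# Compare with repeated symbolic differentiation for the first few n
for n in range(5):
    assert sp.simplify(f.diff(x, n) - nth_derivative_closed_form(n)) == 0
```

The $n=0$ case reduces back to the decomposition itself, which is a useful consistency check.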

croraf
  • 495
3

The partial fraction decomposition is true for all (defined) values, so as long as the value you put in lies in the 'domain', the identity must hold. Hence, we use a value that eliminates one term to find the value of the other coefficient.

  • Aha, I see. So when $x$ equals $a$ (since $a$ is part of it's domain) we eliminate the other term and find out what $A$ is. The same procedure works for finding out what $B$ is. – Nikola Dec 09 '17 at 13:31
  • 3
    Wait a second. In which "domain" ($\Bbb{R}\setminus\{\pm a\}$) are you putting in "values" ($\pm a$)? – GNUSupporter 8964民主女神 地下教會 Dec 09 '17 at 13:32
  • 3
    I think there may be an interesting, subtle point here. The domain of $\frac{1}{x^2-a^2}$ doesn't include $\pm a$, but the range of acceptable $x$ values in $1=A(x+a) + B(x-a)$ can be extended continuously to include $\pm a$ and those values used to solve for $A$ and $B$ easily. – Malcolm Dec 09 '17 at 13:38
  • @Malcolm We don't need limits, which can't even be defined on polynomial rings over finite fields like $\Bbb{F}_{p^n}$. – GNUSupporter 8964民主女神 地下教會 Dec 09 '17 at 15:44
  • @GNUSupporter I realize that we don't need limits to solve for $A$ and $B$. Although I'm not exactly sure how you're finding $n^{\text{th}}$ derivatives without them. My point was that the technique the OP uses is widely taught and widely used and gives the correct answer, but even over $\Bbb{R}$ has an issue not usually addressed. My point wasn't to say that there aren't other methods to find $A$ and $B$. – Malcolm Dec 10 '17 at 00:00
  • @Malcolm I see your point. Excuse me for having my head filled with algebra. One can visualise continuous extension better---that's why your comment gets upvotes. I loved such calculations as a teenager. In fact, I didn't use any new calculation skills, but introduced new names and theorems to justify such techniques and to extend them to a more general setting. One can define an (algebraic) derivative for polynomials. Dummit and Foote use it to simplify arguments for checking the separability of polynomials in Abstract Algebra. This construction lacks geometric insight, but it is logically correct. – GNUSupporter 8964民主女神 地下教會 Dec 10 '17 at 01:29
  • @GNUSupporter Yes, and the algebraic derivative for polynomials is interesting and I even find those fun. I imagine the algebraic derivative can be extended to an algebraic derivative of rational functions - no limits taken. :-) Thanks. – Malcolm Dec 10 '17 at 01:43
  • @Malcolm You're welcome, though I stopped studying abstract algebra. Apart from the nature of this problem, what pushed me to post this algebraic stuff is the vague language used in this answer. In the 1st sentence, "domain" and "value" can't be $\pm a$. In the 2nd sentence, "value" suddenly becomes $\pm a$, and the transition "Hence" isn't explained. The question writer "understands" it, says "$a$ is in it's domain", and accepts this answer. – GNUSupporter 8964民主女神 地下教會 Dec 10 '17 at 02:05
  • The cancellation of $x+a$ (which requires $x\ne -a$) in fractions, followed by the substitution $x=-a$, reminds me of Berkeley's criticism of infinitesimal calculus in his Analyst, section XIV: "All which seems a most inconsistent way of arguing, and such as would not be allowed of in Divinity." That's one of the reasons that I recalled the very definitions to clarify the stuff. – GNUSupporter 8964民主女神 地下教會 Dec 10 '17 at 02:13
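For what it's worth, the point Malcolm raises in the comments can be checked symbolically: once the denominators are cleared, $1=A(x+a)+B(x-a)$ is a polynomial identity in $x$, so it holds at every $x$, including $\pm a$. A minimal sympy sketch (assuming sympy is available; the variable names are mine):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', nonzero=True)

# The coefficients found in the question
A = 1 / (2*a)
B = -1 / (2*a)

# 1 = A(x+a) + B(x-a) is an equality of polynomials in x, so it is valid
# for every value of x -- even x = a and x = -a, where the original
# rational function 1/((x-a)(x+a)) is undefined.
identity = sp.expand(A * (x + a) + B * (x - a))
assert identity == 1
```

Because the identity is polynomial, substituting $x=\pm a$ afterwards is legitimate, even though those points are outside the domain of the original fraction.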
2

It is just a practical way to find $A$ and $B$, and in general it works when the denominator is a product of factors with simple roots. Actually, $A$ is a limit value. For example, to find $A$ you multiply both sides of the equality $$\frac{1}{(x-a)(x+a)}=\frac{A}{x-a}+\frac{B}{x+a}$$

by $x-a\,$ you get

$$\frac{1}{x+a}=A+\frac{B(x-a)}{x+a}.$$ Now, taking the limit as $x$ goes to $a$, you find $A=\frac{1}{2a}$. To find $B$ you do the same thing, but this time multiply by $x+a$.
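This limit ("cover-up") computation can be reproduced in sympy (a sketch with my own symbol names, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)
f = 1 / ((x - a) * (x + a))

# Heaviside cover-up as a limit:
#   A = lim_{x -> a}  (x - a) f(x),   B = lim_{x -> -a} (x + a) f(x)
A = sp.limit((x - a) * f, x, a)
B = sp.limit((x + a) * f, x, -a)

assert sp.simplify(A - 1/(2*a)) == 0
assert sp.simplify(B + 1/(2*a)) == 0
```

The limits exist precisely because multiplying by the vanishing factor removes the singularity, which is why this shortcut works for simple roots.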

daulomb
  • 3,955
1

Original problem

In fact, the problem itself has nothing to do with calculus, not even with limits: it's simply an algebra question. I wonder why we don't think from the definitions of the algebraic structures in which these fractions and polynomials live. To use limits, we are unnecessarily assuming the order structure "$x<y$".

\begin{align} \frac{1}{(x-a)(x+a)}&=\frac{A}{(x-a)}+\frac{B}{(x+a)}=\frac{A(x+a)+B(x-a)}{(x-a)(x+a)} \label{eq1}\tag{1}\\ 1&=A(x+a)+B(x-a) \label{eq2}\tag{2} \end{align}

Note that \eqref{eq1} takes place in the field of fractions of the polynomial ring $\Bbb{R}[X]$, namely $\Bbb{R}(X)$. From the very definition of the fraction field $$\Bbb{R}(X)=\left\lbrace \frac{p(X)}{q(X)} : p(X),q(X) \in \Bbb{R}[X] \text{ with } q(X)\ne0 \right\rbrace/\sim,$$ where $\sim$ denotes the equivalence relation $\frac{p_1(X)}{q_1(X)} \sim \frac{p_2(X)}{q_2(X)} \iff p_1(X)q_2(X)=p_2(X)q_1(X)$, feel free to cancel out the denominators $(x-a)(x+a)\ne0$ in the integral domain $\Bbb{R}[X]$.

So we are left with \eqref{eq2} as an equality in $\Bbb{R}[X]$, which allows us to do the substitutions mentioned in the question and in other answers.

Extended problem (in response to a reply to my comment)

  1. What if one has $$\frac{1}{(x-a_1)(x-a_2)\cdots(x-a_n)}=\frac{A_1}{x-a_1}+\cdots+\frac{A_n}{x-a_n}?$$ Answer: a special case of the next question
  2. What if one has $$\frac{r(x)}{(x-a_1)(x-a_2)\cdots(x-a_n)}=\frac{A_1}{x-a_1}+\cdots+\frac{A_n}{x-a_n}$$ for a polynomial $r(x) \in R[X]$ of degree at most $n-1$? (What conditions are needed on $R$? To be seen at the bottom.)
  1. Multiply both sides by the denominator to get an equality in the integral domain $R[X]$. $$\begin{aligned} r(x) =& A_1(x-a_2)\cdots(x-a_n)+\cdots+A_i(x-a_1)\cdots(x-a_{i-1})(x-a_{i+1})\cdots(x-a_n) \\ &+\cdots+A_n(x-a_1)\cdots(x-a_{n-1}) \\ =& A_1(x-a_2)\cdots(x-a_n)+(x-a_1)H(x) \quad\text{for some } H(x) \in R[X] \end{aligned}$$ In the first step, the RHS is a sum of $n$ terms $A_i \in R$ multiplied by $(x-a_1)\cdots(x-a_n)$ without $x-a_i$. (Here $a_i \in R$.)

Substitute $x=a_1$ to kill $H(x)$ in slow motion, so that we understand what assumptions are needed on $R$. $$ \require{cancel} r(a_1) = A_1(a_1-a_2)\cdots(a_1-a_n)+\cancelto{0}{(a_1-a_1)H(a_1)} \label{eq3} \tag{3} $$ Up to this stage, we have only used that $R$ is an integral domain, in which the cancellation law holds. To make $A_1$ the subject of \eqref{eq3}, we divide both sides by $(a_1-a_2)\cdots(a_1-a_n)$, and such an operation requires closure under division. As a consequence, $R$ needs to be a field, just because of this last step. $$\bbox[2px, border: 1px solid black]{A_1=\frac{r(a_1)}{(a_1-a_2)\cdots(a_1-a_n)}}$$ The other coefficients $A_i$ can be determined with a similar argument.

(More rigorously, we are applying the Factor Theorem, which holds over any commutative ring: $$r(x)-A_1(x-a_2)\cdots(x-a_n)=(x-a_1)H(x).$$ The RHS is divisible by $x-a_1$, and so is the LHS. The Factor Theorem then allows us to substitute $x=a_1$ on the LHS and equate it to $0$.)
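As a sanity check of the boxed formula $A_i=\frac{r(a_i)}{\prod_{j\ne i}(a_i-a_j)}$ over the field $\Bbb{Q}$, here is a small sympy sketch; the roots and the numerator are hypothetical examples of my own choosing:

```python
import sympy as sp

x = sp.symbols('x')
roots = [1, 2, 3]        # distinct simple roots a_1, ..., a_n (my example)
r = x**2 + 1             # numerator of degree at most n - 1 (my example)

# A_i = r(a_i) / prod_{j != i} (a_i - a_j), as derived above
A = [r.subs(x, ai) / sp.prod([ai - aj for aj in roots if aj != ai])
     for ai in roots]

# Recombine the partial fractions and compare with the original function
recombined = sum(Ai / (x - ai) for Ai, ai in zip(A, roots))
original = r / sp.prod([x - ai for ai in roots])
assert sp.simplify(recombined - original) == 0
```

Working over $\Bbb{Q}$ here matters: the division by $\prod_{j\ne i}(a_i-a_j)$ is exactly the step that forces the coefficient ring to be a field.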

1

$$\frac{1}{(x-a)(x+a)}=\frac{A}{x-a}+\frac{B}{x+a}$$ implies

$$\frac{1}{x+a}=A+\frac{B(x-a)}{x+a}.$$ $$\frac{1}{x-a}=B+\frac{A(x+a)}{x-a}.$$

Taking $x=a$ in the first equality and $x=-a$ in the second, we get $$A=\frac{1}{2a}\quad\text{and}\quad B=-\frac{1}{2a}.$$

Guy Fsone
  • 23,903