5

I have two functions $$f(x)=\frac{e^{2x}-(e^x\cos x)^2}{x^2}$$ $$g(x)=\left(\frac{e^x\sin x}{x}\right)^2$$ which I have proved algebraically to be identical. However, computing with them as $x\to 0$ gives different results: $g(x)$ approaches 1, while $f(x)$ seems to follow no pattern at all.

My question is: why does this happen, and which function is "more correct"? I believe $g(x)$ is closer to the true value, since from a graph you can see the limit is 1, but I would like to know why this is. My guess is computational limits? Thanks.

These are the results I get from my computations using MATLAB (the relative error assumes $g(x)$ is the true value):

 x                         f(x)                      g(x)                      relative error (%)
 1.000000000000000e-01     1.217336840213945e+00     1.217336840213926e+00     1.550416515391986e-12
 1.000000000000000e-02     1.020167333771749e+00     1.020167333768841e+00     2.849975646786547e-10
 1.000000000000000e-03     1.002001667416152e+00     1.002001667333378e+00     8.260909802481186e-09
 1.000000000000000e-04     1.000200011702645e+00     1.000200016667333e+00    -4.963695172280794e-07
 9.999999999999999e-06     1.000020066754814e+00     1.000020000166667e+00     6.658681569301560e-06
 1.000000000000000e-06     1.000088900582341e+00     1.000002000001667e+00     8.690040687310665e-03
 1.000000000000000e-07     1.043609643147647e+00     1.000000200000016e+00     4.360943442574345e+00
 1.000000000000000e-08     2.220446049250313e+00     1.000000020000000e+00     1.220446004841393e+02
 1.000000000000000e-09    -2.220446049250313e+02     1.000000002000000e+00    -2.230446044809420e+04
 1.000000000000000e-10                         0     1.000000000200000e+00    -1.000000000000000e+02
 1.000000000000000e-11                         0     1.000000000020000e+00    -1.000000000000000e+02
 1.000000000000000e-12    -2.220446049250313e+08     1.000000000002000e+00    -2.220446059245872e+10
 1.000000000000000e-13     2.220446049250313e+10     1.000000000000200e+00     2.220446049149869e+12
 1.000000000000000e-14                         0     1.000000000000020e+00    -1.000000000000000e+02
BorisOZ
  • 53

3 Answers

8

This appears to be an instance of catastrophic cancellation.

Your example shows how a numerical computation can be rearranged to avoid it.
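To see the cancellation concretely, here is a quick sketch (written in Python rather than the question's MATLAB, but both use IEEE binary64 doubles, so the qualitative behaviour is the same): $f$ subtracts two nearly equal quantities before dividing by the tiny $x^2$, so rounding error in either operand swamps the true difference, while $g$ never cancels.

```python
import math

def f(x):
    # Subtracts two numbers that agree to ~16 digits, then divides by x^2:
    # the rounding error in the operands is amplified by 1/x^2.
    return (math.exp(2 * x) - (math.exp(x) * math.cos(x)) ** 2) / x ** 2

def g(x):
    # Algebraically identical, but no subtraction of nearly equal terms.
    return (math.exp(x) * math.sin(x) / x) ** 2

for x in (1e-2, 1e-5, 1e-8):
    print(f"x={x:.0e}  f(x)={f(x):.15f}  g(x)={g(x):.15f}")
```

For moderate $x$ the two agree to many digits; by $x = 10^{-8}$ only $g$ still returns something near 1.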

Ruslan
  • 6,775
MPW
  • 43,638
1

This is called underflow.
The computer calculates $e^{2x}$ accurate to sixteen decimal places, so it might be out by 0.00000000000001. The same goes for $e^x\cos x$.
Then you divide by $x^2$. When $x=0.0000001$, that error divided by $x^2$ is around 1. When the computer calculates $\sin x$, instead of $0.00000009999999$, it stores it as $10^{-7}×0.9999999999999999$, so it is accurate to 24 places.
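The sizes involved can be checked directly. A small sketch (Python 3.9+, whose `math.ulp` reports the spacing of adjacent doubles around a value, i.e. the scale of the worst-case absolute rounding error; MATLAB's `eps(x)` gives the same quantity):

```python
import math

x = 1e-7

# exp(2x) is close to 1, where adjacent doubles are ~2.2e-16 apart,
# so an absolute rounding error of that size is unavoidable.
print(math.ulp(math.exp(2 * x)))

# Dividing that absolute error by x^2 = 1e-14 leaves an error of
# order 1e-2 in f(x) -- which is where the question's table degrades.
print(math.ulp(math.exp(2 * x)) / x ** 2)

# sin(x) is close to 1e-7, where adjacent doubles are ~1.3e-23 apart,
# so the stored value is accurate to roughly 23-24 decimal places.
print(math.ulp(math.sin(x)))
```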

Empy2
  • 50,853
  • 1
    That 0.00000000000001 is a relative error, not an absolute error. Similarly, "24 places" is misleading, because you are using it to mean "an absolute error of 10^(-24)" rather than its standard usage of "a relative error of 10^(-24)". Also, this has nothing to do with underflow, which occurs for numbers of magnitude $\approx 10^{-300}$. – Federico Poloni Apr 06 '20 at 15:12
  • 2
    It's not underflow, it's loss of significance. – Ruslan Apr 06 '20 at 15:23
0

To expand on MPW's answer: let us focus on one specific operation in your first computation for $f(x)$: at some point you have to compute $e^{2x}$ and store its result in an IEEE double-precision (binary64) variable. In general, what you store will not be $e^{2x}$ but $e^{2x}(1+\varepsilon)$, for some $\varepsilon$ with $|\varepsilon| \leq 2.2 \cdot 10^{-16}$ (machine precision in double arithmetic). There are other (independent) sources of error in your computation, coming from literally every operation and intermediate result, but you can see that this one already produces a perturbation of $$ \varepsilon \frac{e^{2x}}{x^2} $$ on your computed value of $f(x)$. And $\frac{e^{2x}}{x^2} \to \infty$ when $x \to 0$.

You may be lucky and have $\varepsilon = 0$ for some special choices of $x$ (for instance $x=0$), but in the generic case you can't expect better than $|\varepsilon| \approx 10^{-16}$.
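The growth of that perturbation can be tabulated in a few lines (a Python sketch of the same IEEE-double arithmetic, taking $\varepsilon$ at its worst-case magnitude $2^{-52}$):

```python
import math

# Amplification bound from the answer: a relative rounding error eps in
# the stored value of exp(2x) perturbs the computed f(x) by roughly
# eps * exp(2x) / x^2, which diverges as x -> 0.
eps = 2.0 ** -52  # machine precision for IEEE double arithmetic (~2.2e-16)

for x in (1e-4, 1e-6, 1e-8, 1e-10):
    bound = eps * math.exp(2 * x) / x ** 2
    print(f"x={x:.0e}  perturbation bound ~ {bound:.2e}")
```

At $x = 10^{-8}$ the bound is already about $2.2$, which matches the size of the garbage values in the question's table at that point.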