
DawsonF[30.] returns 0. The correct value is 0.016676... At least it prints a warning message:

General::munfl: Exp[-900.] is too small to represent as a normalized machine number; precision may be lost.

I am on Mathematica 11.3.

The problem is that DawsonF[x] is being computed as Exp[-x^2] Erfi[x] (times a constant factor), which in this case is a product of a very small quantity and a very large one, resulting in underflow/overflow. This is a VERY bad algorithm. The point of having a DawsonF in the first place is to bypass this multiplication and return the result without underflow or overflow (see the corresponding section in the Numerical Recipes book, for example).

I know I can use N[DawsonF[30], 20] to obtain an accurate result, but this can be slower, and there is no reason why DawsonF could not work at machine precision.
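
For reference, writing $F$ for the Dawson function, the identity being exploited and the standard large-$x$ expansion are

$$F(x) = e^{-x^2}\int_0^x e^{t^2}\,dt = \frac{\sqrt{\pi}}{2}\,e^{-x^2}\operatorname{erfi}(x), \qquad F(x) \sim \frac{1}{2x} + \frac{1}{4x^3} + \cdots \quad (x \to \infty).$$

At $x = 30$ the erfi form multiplies $e^{-900} \approx 10^{-391}$ (which underflows a machine double) by $\operatorname{erfi}(30) \approx 10^{389}$ (which overflows), whereas the asymptotic series gives $1/60 + 1/108000 + \cdots \approx 0.0166759$ with no extreme intermediate quantities at all.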

I will submit a bug report to Wolfram, but I wanted to post this here to get some feedback first. If the community agrees, please tag this as a bug.

Are there other examples like this in Mathematica of special functions that just don't work in machine precision?

Update: I submitted a bug report and they replied (see my answer). See J.M.'s answer for a workaround.

– a06e

4 Answers


Before DawsonF[] became built-in in Mathematica, I used the following method for (small to moderately-sized) real arguments:

dawson = With[{eps = $MachineEpsilon, e2 = $MachineEpsilon^2},
              Compile[{{z, _Real}},
                      Module[{a, b, c, d, f, h, w},
                             a = 2. z^2;
                             f = c = b = a + 1.; (* leading denominator b0 = 2z^2 + 1 *)
                             a = w = -2. a; d = 0.; (* partial numerators -4z^2, -8z^2, ... *)
                             If[c == 0., c = e2]; (* Lentz guard against zero divisors *)
                             While[b += 2.; (* partial denominators 2z^2 + 3, 2z^2 + 5, ... *)
                                   d = b + a d; If[d == 0., d = e2]; d = 1/d;
                                   c = b + a/c; If[c == 0., c = e2];
                                   a += w; f *= (h = c d); (* accumulate the convergent *)
                                   Abs[h - 1] > eps]; (* stop when the correction factor is ~1 *)
                             z/f], RuntimeAttributes -> {Listable}]];

This is based on using the Lentz-Thompson-Barnett algorithm to evaluate a continued-fraction (CF) representation of Dawson's integral, written out below. It is not the most efficient method, since a power series will outperform the CF near the origin, and an asymptotic series will do best for really large arguments. It is quite compact and respectably performant in its intended argument range, however.
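
For reference, reading the recurrences off the Lentz iteration above (this transcription is mine, not part of the original answer), the continued fraction being evaluated is

$$F(z) = \cfrac{z}{1 + 2z^2 - \cfrac{4z^2}{3 + 2z^2 - \cfrac{8z^2}{5 + 2z^2 - \cfrac{12z^2}{7 + 2z^2 - \cdots}}}},$$

with the loop exiting once a further convergent would change the running product f by less than $MachineEpsilon.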

Here is a plot of it compared against the built-in:

Plot[dawson[x] - DawsonF[x], {x, -20, 20}, Frame -> True, PlotRange -> All]

[Plot: absolute error dawson[x] - DawsonF[x] for -20 <= x <= 20]

– J. M.'s missing motivation

I submitted a bug report and this is the relevant part of their reply:

Starting with Version 11.3, underflow is no longer trapped in machine arithmetic and Mathematica does not switch automatically to arbitrary precision. This provides a more efficient way to handle numerical calculations and brings Mathematica much more in line with the IEEE 754 standard for how floating-point numbers are to be handled (https://en.wikipedia.org/wiki/IEEE_754).

This explains the origin of the bug. Apparently the bad machine-precision algorithm was there all along, but in previous versions it fell back to arbitrary precision and thus went unnoticed (though it probably impacted performance).
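
A quick way to see the behavior change (the pre-11.3 output described here is my understanding of the old trapping behavior, so treat it as illustrative):

Exp[-900.]

In 11.3 and later this emits General::munfl and returns the machine number 0.; in earlier versions the underflow was trapped and an arbitrary-precision result of about 1.36*10^-391 was returned instead.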

Hope it gets fixed soon.

See @J.M.'s answer for an algorithm that works at machine precision.

– a06e

Based on @EddieXiao's answer to Numerical underflow for a scaled error function, you could define your own DawsonF with:

dawsonF[x_] := -(I/2) E^(-x^2) Sqrt[π] + I HermiteH[-1, I x]
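
Why this works (my gloss, not part of the original answer): HermiteH[-1, z] equals $\frac{\sqrt{\pi}}{2} e^{z^2} \operatorname{erfc}(z)$, an identity FunctionExpand can confirm, and $\operatorname{erf}(i x) = i \operatorname{erfi}(x)$, so

$$i\,H_{-1}(i x) = \frac{i\sqrt{\pi}}{2}\,e^{-x^2}\operatorname{erfc}(i x) = \frac{i\sqrt{\pi}}{2}\,e^{-x^2} + \frac{\sqrt{\pi}}{2}\,e^{-x^2}\operatorname{erfi}(x).$$

Subtracting $\frac{i\sqrt{\pi}}{2} e^{-x^2}$ leaves exactly $\frac{\sqrt{\pi}}{2} e^{-x^2} \operatorname{erfi}(x) = F(x)$; the value comes out of HermiteH's internal algorithm rather than from the catastrophic Exp[-x^2] Erfi[x] product, and Chop removes the tiny imaginary roundoff that survives.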

For example:

Chop @ dawsonF[30.] //InputForm

General::munfl: Exp[-900.] is too small to represent as a normalized machine number; precision may be lost.

0.01667594140105917

vs.

DawsonF[30`19]

0.01667594140105918

This is faster than the built-in machine precision code for DawsonF, e.g.:

dawsonF[N @ Range[-10, 10]]; //AbsoluteTiming
DawsonF[N @ Range[-10, 10]]; //AbsoluteTiming

{0.005917, Null}

{0.008466, Null}

On the other hand, @JM's compiled code is about 4 times faster:

dawson[N @ Range[30, 40]] //AbsoluteTiming
Chop @ dawsonF[N @ Range[30, 40]] //AbsoluteTiming

{0.000078, {0.0166759, 0.0161374, 0.0156326, 0.0151585, 0.0147123, 0.0142916, 0.0138943, 0.0135185, 0.0131625, 0.0128247, 0.0125039}}

{0.000323, {0.0166759, 0.0161374, 0.0156326, 0.0151585, 0.0147123, 0.0142916, 0.0138943, 0.0135185, 0.0131625, 0.0128247, 0.0125039}}

Of course, @JM's compiled code is expecting real numbers, and so complex input doesn't use the compiled code:

dawson[3. + 2. I] //AbsoluteTiming
dawsonF[3. + 2. I] //AbsoluteTiming

CompiledFunction::cfsa: Argument 3. +2. I at position 1 should be a machine-size real number.

{0.001332, 0.110514 - 0.0771238 I}

{0.001298, 0.110514 - 0.0771238 I}

The non-compiled code is about the same speed as dawsonF. Finally, as @JM mentions, his compiled code is not meant to work for all possible arguments, e.g.:

dawson[3 + 5 I]
dawsonF[3. + 5. I]
N[DawsonF[3 + 5 I], 20]

CompiledFunction::cfsa: Argument 3+5 I at position 1 should be a machine-size real number.

14.7334 + 6.88742 I

-7.78086*10^6 + 1.21475*10^6 I

-7.7808580812920342136*10^6 + 1.2147471245770455984*10^6 I

– Carl Woll

How about these:

DawsonF[30`20]
DawsonF[30.0000000000000000000]

0.016675941401059176

Both inputs are arbitrary-precision numbers and both return the correct value, so I think that it might not be a bug.


Update

One can use SetPrecision:

DawsonF[SetPrecision[30, #]] & /@ {5, 10, 20}

{0.017, 0.01667594, 0.016675941401059176}
  • That works, but in cases where one wishes to stick to machine precision, we have the OP's problem. – J. M.'s missing motivation Sep 26 '18 at 04:48
  • @J.M. It is somewhat okay. Above, 20 can be changed to 10, or even 5. However it cannot be MachinePrecision $(\approx 15.95)$, which I cannot explain. – Αλέξανδρος Ζεγγ Sep 26 '18 at 04:49
  • As the OP mentioned, that is because the internal implementation is using the naive method to evaluate the Dawson integral (that is, computing Erfi[] and then multiplying by a Gaussian factor), which can be confirmed through spelunking. – J. M.'s missing motivation Sep 26 '18 at 04:57
  • However it cannot be MachinePrecision ... Because SetPrecision[..] means that the computation uses arbitrary precision, which relies on different algorithms (even if the precision is quite low). – a06e Sep 26 '18 at 12:47
  • @becko: Exactly. Calculations done with machine precision do not track errors, so you never really have any guarantee about the outcome. Arbitrary precision arithmetic, on the other hand, does track errors and ensures correctness of the requested digits. That's why asking for just 5 digits of precision can still give you a more accurate result than using machine numbers. – Sjoerd Smit Sep 26 '18 at 20:58
  • @SjoerdSmit so you never really have any guarantee about the outcome. That's not entirely accurate. For example, IEEE 754 guarantees the accuracy of elementary operations (addition, subtraction, etc.). The accuracy of machine-precision implementations of special functions such as sin, exp, etc. is also well understood. You can have quite strong guarantees about the outcome if the numeric algorithms involved are sound. Machine precision has been very successful (e.g. Higham, N. J., Accuracy and Stability of Numerical Algorithms (2002)), and arbitrary precision is too slow to replace it in many cases. – a06e Sep 26 '18 at 21:04
  • Ok, point taken. Still, the point remains that the errors are not explicitly tracked so you have to rely on the fact that each step in the calculation chain works as it should, or otherwise the whole thing falls apart. In general, that means that when you do complex compositions of operations, you have to give some thought about what's happening because it's not going to sort the errors out for you. Taking exponents of large negative numbers is one of these pitfalls, as we can see in this example. – Sjoerd Smit Sep 26 '18 at 21:09
  • @SjoerdSmit Note that in addition to error tracking, you get more bits representing even low-precision numbers than MachinePrecision numbers. Consider FullForm[1`5 + 2^-64] and FullForm[1`5 + 2^-65], whereas $MachineEpsilon equals 2^-52. – Michael E2 Sep 27 '18 at 10:59
  • @becko I would separate rounding error from underflow/overflow. Here the problem is underflow. For machine doubles, underflow begins at $MinMachineNumber, approx. 2*10^-308, and underflows to zero below $MinMachineNumber*$MachineEpsilon, approx. 5*10^-324. For arbitrary-precision numbers, underflow happens below $MinNumber, approx. 10^-n with n approx. 10^15. – Michael E2 Sep 27 '18 at 10:59