5

In contrast to earlier versions, Mathematica 11.3 and 12.0 generate lots of underflow errors from my code, e.g.

"General::munfl: Exp[-11250.] is too small to represent as a normalized machine number; precision may be lost."  

A minimal working example:

sigma = 0.01;
gaussian = Function[x, 1/(Sqrt[2 Pi] sigma) Exp[-1/2 (x/sigma)^2]];
gaussianP[x_] = D[gaussian[x], x];  (* derivative, built once with Set *)
gaussianP[1.5]  (* 150 sigma out in the tail: triggers General::munfl *)

The background of this error is discussed in:

  • New General::munfl error and loss of precision
  • How to flush machine underflows to zero and prevent conversion to arbitrary precision?

In short: Mathematica still detects machine-number underflows, but it no longer switches to arbitrary-precision mode. Instead, it produces a result with fewer digits, if any.
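To see where the machine threshold sits, and what an underflowed result looks like on a given installation, one can inspect the following (the exact output of the last line will of course depend on the Mathematica version and the hardware's floating point):

Log[$MinMachineNumber]  (* about -708.4 for IEEE-754 doubles; exponents below this make Exp underflow *)
res = Quiet[Exp[-11250.], General::munfl];
{res, MachineNumberQ[res], Precision[res]}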

The solution proposed in question 170416 does not solve my problem, since Exp[x] expressions are converted to E^x upon evaluation, cf. https://reference.wolfram.com/language/ref/Exp.html. It is therefore not sufficient to modify only Exp[].
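This is easy to check:

Exp[x] // FullForm  (* Power[E, x]: the Exp head does not survive evaluation *)
gaussianP[x]        (* the derivative is already expressed with E^(...), not Exp[...] *)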

Question 1: How can I restore the behavior of previous versions?

Considering the solution in 170416, how can I tell Mathematica to treat E^MachineNumber in the same way as Exp[MachineNumber]?

There are already attempts to rewrite E^x expressions in terms of Exp[x]:

  • Format Exp[] in output
  • Function to Expand exponentials

How can these be used in my case?

Question 2: How can I handle underflow manually?

I feel uncomfortable with Mathematica producing results of unknown precision. Instead, I would like to define, locally, a threshold of my own below which the output of Exp[] is set to zero at MachinePrecision, along the lines of the sketch below.
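A rough sketch of the kind of cutoff I have in mind (safeExp is just a name made up for illustration, and the threshold is only one possible choice):

safeExp[x_?MachineNumberQ] := If[x < Log[$MinMachineNumber], 0., Exp[x]]  (* flush deep underflow to machine zero *)
safeExp[x_] := Exp[x]

safeExp[-11250.]  (* 0. at machine precision, no General::munfl *)

The open problem is how to get this behavior for the E^(...) expressions that actually appear in my code.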

I would be grateful for any help.

  • In general, any error or warning can be handled manually with Check – m_goldberg May 05 '19 at 20:14
  • related: https://mathematica.stackexchange.com/q/197580/26598 – Roman May 05 '19 at 20:28
  • I think that Mathematica now underflows in accordance with IEEE 754. Can anyone confirm? – Michael E2 May 05 '19 at 22:13
  • The point of my comment is that Mathematica does not produce "results of unknown precision," but one that conforms to a defined standard. However, I have not found any documentation that explicitly confirms this. This tutorial describes the new underflow behavior in a way that seems consistent with IEEE 754. – Michael E2 May 06 '19 at 11:55
  • @MichaelE2 much of that tutorial is agnostic about the floating-point system used, and describes the constants that may be employed to characterize the system. But the part that describes the new underflow behavior seems specific to IEEE 754, since denormalized numbers are a special feature of that particular standard. Perhaps Mathematica no longer supports other floating-point systems. – John Doty May 06 '19 at 13:00
  • @JohnDoty I think the lack of an explicit statement may be because machine arithmetic means the floating-point arithmetic of the individual user's machine, which WRI cannot guarantee will be IEEE-754 compliant. See under "Background & Context" of the docs for MachinePrecision (which I just stumbled across). – Michael E2 May 06 '19 at 13:22

2 Answers

4

The old behavior was inconsistent with the principle that machine numbers have the semantics of your hardware's floating point. The new behavior conforms to this principle. Thus, if you want controlled precision in your calculations, you must use controlled precision as input, or explicitly set the precision at strategic places.
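For the example in the question, one way to put this into practice (a sketch, not the only option) is to keep sigma exact, so that no machine number enters the symbolic computation, and to numericize only at the end with an explicitly requested precision:

sigma = 1/100;  (* exact, instead of the machine number 0.01 *)
gaussian = Function[x, 1/(Sqrt[2 Pi] sigma) Exp[-1/2 (x/sigma)^2]];
gaussianP[x_] = D[gaussian[x], x];
N[gaussianP[3/2], 20]  (* a tiny result of order 10^-4881, carried at 20 controlled digits *)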

John Doty
  • BTW, it's still inconsistent with respect to overflow, though. Compare 10.^-400 and 1/10.^400. – Michael E2 May 06 '19 at 11:49
  • @MichaelE2 That's because there is no "obvious" machine number to overflow to. NaN might be possible, but overflowing to a bignum tends to be more useful (maybe you knew this, but it perhaps is not obvious to others). – Daniel Lichtblau May 06 '19 at 16:58
  • @DanielLichtblau IEEE specifies overflow should round to ± infinity or the maximum machine number, depending on rounding mode. Round-to-even would have it be (floating-point) infinity. But (I suppose) Mathematica's Infinity is not equivalent to machine floating-point infinity. Something like an equivalent MachineInfinity or DirectedInfinity[1.] would have to be introduced and incorporated into WL to handle machine overflow, I guess. (It'd be nice if packed arrays could contain such floating-point numbers.) – Michael E2 May 06 '19 at 18:30
  • Also I agree that overflowing to bignums is useful. I thought that about underflowing, too. – Michael E2 May 06 '19 at 18:31
  • @MichaelE2 Right, DirectedInfinity[1] is not equivalent to machine float infinity. Possibly a float infinity for Mathematica would be useful though. As for underflowing to a bignum, the pains outweigh the gains. – Daniel Lichtblau May 06 '19 at 20:47
4

One idea is to use Check to find out when General::munfl happens, and then switch to a more complicated evaluation process. The more complicated evaluation process would modify the evaluations of numeric functions so that they rationalize their input first, and then numericize the output. Here is a function that does this:

SetAttributes[trappedEvaluation, HoldFirst]

trappedEvaluation[expr_] := Check[
    expr,
    (* underflow detected: re-evaluate with the Real arguments of numeric
       functions rationalized to exact numbers, then numericize the result *)
    numericize @ Block[{Real},
        f_?nfQ[a___, r_Real, b___] ^:= Rationalize[Unevaluated @ f[a, r, b], 0];
        expr
    ],
    General::munfl
]

(* tests whether a held head is a built-in numeric function *)
nfQ = System`Dump`HeldNumericFunctionQ;

(* evaluate to 16 significant digits, and return a machine number only if
   converting back to machine precision does not underflow again *)
numericize[e_] := With[{res = N[e, 16]},
    Quiet @ Check[
        N[res],
        res,
        General::munfl
    ]
]

Your example:

trappedEvaluation[gaussianP[1.5]]

General::munfl: Exp[-11250.] is too small to represent as a normalized machine number; precision may be lost.

-9.206189566469410*10^-4881
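The value comes back as an arbitrary-precision number (numericize requests 16 digits), which is worth keeping in mind before combining it with machine numbers in later steps:

val = trappedEvaluation[gaussianP[1.5]];
Precision[val]  (* 16., not MachinePrecision *)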

Carl Woll
  • I think you forgot a SetAttributes[trappedEvaluation, HoldAll] or something like it. – Michael E2 May 06 '19 at 20:03
  • @MichaelE2 Good point! thanks – Carl Woll May 06 '19 at 20:05
  • Very cool, this helped me a lot! Is it also possible to use the rationalize-numericize trick when storing an intermediate result in a symbol, something like val = trappedEvaluation[gaussianP[1.5]]; trappedEvaluation[2. val]? – Quantum Sharpener May 08 '19 at 14:37
  • Let's assume I want to calculate res = trappedEvaluation[2. gaussianP[1.5]] in two steps: 1. val = trappedEvaluation[gaussianP[1.5]] (val is of arbitrary precision), 2. res = 2. val. This results in 0. (machine precision!) for res, together with an underflow error. How can I prevent that? – Quantum Sharpener May 14 '19 at 21:44
  • @CarlWoll Thank you; I've found this very helpful. However, when I evaluate trappedEvaluation[N[Exp[-800]]], it still returns zero, whereas if I define f[x_?NumberQ] = N[Exp[-x]] and then run trappedEvaluation[f[800]], it returns the correct value. Could you please explain what goes wrong in the first case? – Lauren Pearce Mar 12 '21 at 00:09