
This came up while looking at How to speed up calculation of this equation (FindRoot).

Is there some sense to why FullSimplify gives zero here?

ClearAll[y]
FullSimplify[ Exp[-(-5. + y)^2]]

0.

Obviously incorrect: at y -> 5 the true value is 1, not 0. I don't think FullSimplify should be chopping numerically small values in any case.
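
For comparison, a minimal sketch (outputs not reproduced here; as far as I can tell these do not collapse, see also the comments below) with an exact center and a smaller machine-precision one:

ClearAll[y]
(* exact center: the Gaussian stays symbolic *)
FullSimplify[Exp[-(-5 + y)^2]]
(* smaller machine-precision center: also does not return 0., per the comments *)
FullSimplify[Exp[-(-4. + y)^2]]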

george2079
  • Odd... note that FullSimplify[ Exp[-(-4. + y)^2]] (and other numbers) do not return zero. – bill s Mar 01 '16 at 14:55
  • See this Table[FullSimplify[ Exp[-(-N[n/1000] + y)^2]], {n, 4325, 4330}] – Artes Mar 01 '16 at 15:10
  • I am sure that this is a duplicate. Now if only I could find it ... – Szabolcs Mar 01 '16 at 15:14
  • The point where this breaks is Sqrt[Log[2^27]] (4.326080659802649..., can be positive or negative), which would point towards some sort of internal precision bug, although it's a quite odd one. – kirma Mar 01 '16 at 15:17
  • @kirma Take a look at the duplicate. Ilian explains what is going on. – Szabolcs Mar 01 '16 at 15:19
  • Even odder, it goes away if the float is larger, like 27.+.. I agree this is a dup (honestly I did search, though); I suppose I'll just delete. – george2079 Mar 01 '16 at 15:19
  • @Szabolcs I'm not entirely certain how that bug explains it to, ehm, sufficient precision. Something odd is going on and it's related to handling of machine-precision numbers, but that's it... – kirma Mar 01 '16 at 15:22
  • I guess the moral (from Ilian's explanation) is that one really shouldn't mix machine-precision numbers with symbolic processing ... With arbitrary-precision floats we don't get the problem, possibly because now Mathematica is able to recognize the precision loss: FullSimplify[Exp[-(-5.`10 + y)^2]] (see the sketch after these comments). – Szabolcs Mar 01 '16 at 15:22
  • @Szabolcs I think it's actually limited to machine-precision numbers only, arbitrary precision reals are handled correctly at any precision. I'd say it's a bug. – kirma Mar 01 '16 at 15:29
  • This may be a numerological claim, but I just want to point out that the double-precision floats used in machine-precision reals have 53 mantissa bits. 2*27 is 54, and that's the point where the simplification stops working. Maybe some internal transformation of the equation manages to construct a machine-precision real like 1+x where 0<x<$MachineEpsilon, and when this is rounded to 1, the whole expression turns to zero (see the $MachineEpsilon sketch after these comments). The precision-tracking version, on the other hand, avoids this interpretation. – kirma Mar 01 '16 at 15:50
  • ... and the reason why FullSimplify chooses this probably convoluted "simplification" is that 0. has a very low LeafCount. Bam, there we go. – kirma Mar 01 '16 at 15:55
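
A minimal sketch of Szabolcs's workaround above: give the constant tracked (arbitrary) precision instead of machine precision, so the precision loss can be recognized; the result should no longer simplify to 0., though the exact output form may vary by version.

ClearAll[y]
(* 10 digits of tracked precision instead of machine precision *)
FullSimplify[Exp[-(-5.`10 + y)^2]]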
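
And a small check related to kirma's rounding speculation (a sketch of machine-real rounding only, not of what FullSimplify does internally): $MachineEpsilon is the spacing of machine reals just above 1., and half of it is silently lost when added to 1.

(* $MachineEpsilon is 2^-52 for IEEE double machine reals *)
$MachineEpsilon
(* half of it disappears when added to 1. *)
1. + $MachineEpsilon/2 == 1.
(* expected: True *)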

0 Answers