When I evaluate
x = 1.0000000000000000001 (* Precision of 20 digits *)
Mathematica returns
1.000000000000000000 (* Precision of 19 digits *)
When I evaluate
y = N[1.0000000000000000001,100]
it still returns
1.000000000000000000
despite the fact that we knew the original number to a precision of 20 digits. So it seems it really does lose precision the moment it evaluates the number 1.0000000000000000001. Trace also shows the calculation as if I had just entered the number with a precision of 19 digits.
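As a quick check (this is my own session, not part of the original question, and the output is approximate), Precision reports what the kernel actually certified for the number:

```
Precision[1.0000000000000000001]
(* just over 19: precision is roughly Log10[Abs[x]/uncertainty],
   and 19 decimal places on a number near 1 give ~19, not 20 *)
```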
When I evaluate
SetPrecision[y,30]
it returns
1.00000000000000000010000000000
though...
I read through the documentation on arbitrary-precision calculations and still don't get what's going on. If Mathematica stores the numbers internally with a higher precision, then why doesn't N give me the result up to the highest precision the number has? And if it doesn't store any extra precision, then why does the missing digit suddenly reappear when evaluating SetPrecision?
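Here is a short session of my own (not from the original post; the printed forms are approximate) that makes the mechanism visible, on the assumption that the kernel stores the number in binary with a few guard bits beyond its certified precision:

```
x = 1.0000000000000000001;

InputForm[x]
(* prints every stored digit plus a precision mark, something like
   1.0000000000000000001`19. -- the trailing 1 never left *)

N[x, 100]
(* N can lower the precision of an inexact number but never raise it,
   so this still shows only the certified digits *)

SetPrecision[x, 30]
(* re-certifies the stored guard bits as significant, which is why
   the ...01 reappears, followed by binary-conversion padding *)
```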
InputForm and RealDigits both indicate that the last digit is present:

In[14]:= InputForm[x = 1.0000000000000000001]
In[15]:= RealDigits[x]
Out[14]//InputForm= 1.0000000000000000001`19.
Out[15]= {{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, 1}

– Daniel Lichtblau Apr 05 '22 at 13:54

Numerical Precision – Bob Hanlon Apr 05 '22 at 13:59