
When I evaluate

x = 1.0000000000000000001 (* Precision of 20 digits *)

Mathematica returns

1.000000000000000000 (* Precision of 19 digits *)

When I evaluate

y = N[1.0000000000000000001,100]

it still returns

1.000000000000000000

even though we knew the original number to a precision of 20 digits. So it seems the precision really is lost the moment the number 1.0000000000000000001 is evaluated; Trace also shows the calculation as if I had entered the number with a precision of 19 digits. But when I evaluate

SetPrecision[y,30]

it returns

1.00000000000000000010000000000

so the missing digit was apparently there all along...

I read through the documentation on arbitrary-precision calculations and still don't get what's going on. If Mathematica stores the number internally with higher precision, why doesn't N give me the result up to the highest precision the number has? And if it doesn't store the extra precision, why does the missing digit suddenly reappear when I evaluate SetPrecision?
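
For reference, inspecting the precision directly shows the same behavior. This is only a sketch of what I checked; the exact values reported may vary by version:

Precision[x]                   (* the precision assigned when the number was parsed *)
Precision[y]                   (* N[..., 100] did not raise it: N can lower precision, never raise it *)
Precision[SetPrecision[y, 30]] (* 30; SetPrecision pads the stored binary value with zero bits *)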

Gert

1 Answer

1.0000000000000000001
1.000000000000000000

It's just the displayed precision: Mathematica shows fewer digits than the number actually stores.

If you have learned C or C++, it's like:

printf("%.3f", 1.000001); /* prints 1.000, but the stored double still holds 1.000001 */
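
In Mathematica you can see the stored digits behind the shortened display with InputForm (a minimal sketch; InputForm is a standard output form):

x = 1.0000000000000000001;
InputForm[x] (* shows the stored digits together with a `precision mark *)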

If you mean:

1.0000000000000000001==1.0000000000000000000

That's because Equal compares approximate numbers only up to a tolerance: by default it ignores roughly the last couple of digits. You need to use SetPrecision, or mark the precision explicitly, like

1.0000000000000000001`22==1.0000000000000000000`22

This is covered under Possible Issues in the Equal documentation (https://reference.wolfram.com/language/ref/Equal.html).
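
A quick sketch of the contrast (the tolerance is the one described on that page, roughly the last two decimal digits):

1.0000000000000000001 == 1.0000000000000000000         (* True: the difference falls within Equal's tolerance *)
1.0000000000000000001`22 == 1.0000000000000000000`22   (* False: at 22 digits the final 1 is significant *)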

As for SetPrecision[y, 30]: it raises Precision[y] from the roughly 20 digits the input carried up to 30. Put another way, you can set a value's precision as high as you like (Infinity makes it exact), but no approximate number has infinite precision by default.
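
A minimal sketch of that, using the question's own numbers (the extra digits come from zero-padding the stored binary value, not from new information):

y = N[1.0000000000000000001, 100];
Precision[y]  (* still about 20: N could not raise it *)
z = SetPrecision[y, 30];
Precision[z]  (* 30 *)
z             (* 1.00000000000000000010000000000, as in the question *)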

(If my English is poor, it's because I'm not a native English speaker.)