
I'm reading the following table (from https://www.tutorialspoint.com/cprogramming/c_data_types.htm )

[Table of C data types from the linked page; the float row lists a storage size of 4 bytes, a value range of 1.2E-38 to 3.4E+38, and a precision of 6 decimal places.]

Why is the precision of a float only 6 decimal places, when floats lie in the interval [1.2E-38, 3.4E38]? I would think that means I can have 38 decimals of precision.

Where is my error?
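For reference, the numbers in that row of the table can be printed directly from the standard `<float.h>` macros; a minimal sketch:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* the range of float vs. its decimal precision, as defined by <float.h> */
    printf("FLT_MIN = %e\n", FLT_MIN); /* smallest positive normal float, ~1.2e-38 */
    printf("FLT_MAX = %e\n", FLT_MAX); /* largest finite float, ~3.4e+38 */
    printf("FLT_DIG = %d\n", FLT_DIG); /* decimal digits of precision, 6 */
    return 0;
}
```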

yemino

1 Answer


Prof. Bangerth's comment is completely correct. To add more detail, we can refer to the IEEE 754 standard, which defines single-precision floats as

| sign bit | exponent bits | mantissa | 
| 1 bit    |    8 bits     |  23 bits |

The sign bit represents the sign of the number, $+$ or $-$.

The exponent bits use a biased encoding (with bias $127$): the stored field ranges from $0$ to $255$, and the effective exponent $E$ ranges from $-126$ to $127$ (the all-zero and all-one fields are reserved for zeros/subnormals and for infinities/NaNs).

Lastly, the mantissa has an implicit leading 1 (setting aside intricacies like subnormal, or denormal, numbers), so a number in this representation has the form $\pm 2^{E} \times 1.\text{mantissa}$. For example:

$3.1415 \approx{}$ 0 10000000 10010010000111001010110

or equivalently, $+ 2^{(10000000)_2-127} \times (\color{red}1.10010010000111001010110)_2$, where the red-coloured $1$ is implicitly assumed to be there.

Now, if you transform the floating-point representation back to decimal, you will notice that it is actually equal to $3.14149996185302734375$, which is not equal to $3.1415$. This is because the mantissa has only so much space (23 bits in the case of floats) and we have to round. This rounding may introduce a relative error of at most about $2^{-23}\approx 10^{-7}$.
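As a quick check, here is a small C sketch (the field extraction and variable names are my own) that reinterprets the bits of `3.1415f` and prints the stored value with extra digits:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 3.1415f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the float's bit pattern */

    printf("sign     : %u\n", (unsigned)(bits >> 31));
    printf("exponent : %u biased, %d unbiased\n",
           (unsigned)((bits >> 23) & 0xFF),
           (int)((bits >> 23) & 0xFF) - 127);  /* subtract the bias of 127 */
    printf("mantissa : 0x%06X\n", (unsigned)(bits & 0x7FFFFF));

    printf("stored   : %.20f\n", f);           /* prints 3.14149996185302734375 on IEEE 754 hardware */
    return 0;
}
```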

Depending on how you define precision, this means that you have roughly 6 to 7 significant decimal digits of precision: the exponent gives the wide range quoted in the table, while the 23-bit mantissa limits the precision.
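To see how such a huge range coexists with only ~7 significant digits, here is a small sketch (the test values are chosen purely for illustration):

```c
#include <stdio.h>

int main(void) {
    /* 2^24 = 16777216 is the first integer whose successor a float cannot represent */
    float big = 16777216.0f;
    printf("%.1f\n", big + 1.0f);   /* prints 16777216.0: adding 1 is lost to rounding */

    /* the value is tiny compared to FLT_MAX, yet only ~7 significant digits survive */
    float x = 123456789.0f;
    printf("%.1f\n", x);            /* prints 123456792.0 on typical IEEE 754 hardware */
    return 0;
}
```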

I wrote this in a rush, I may have made some mistakes. Please be critical of what I am saying here and refer to other sources. And if I said anything wrong, please let me know so I can fix it.

Abdullah Ali Sivas