Is there a rationale why SetPrecision works on integers except for 0?
SetPrecision[1,5]
1.0000
SetPrecision[0,5]
0
If you have a value $x$ with an absolute uncertainty $dx$, the precision of $x$ is by definition:
$$\text{Precision}(x) = - \log_{10}(dx/x)$$
That is why for $x = 0$ the precision is fixed at infinity: the relative uncertainty $dx/x$ is undefined no matter what $dx$ is, so you cannot change the precision by changing $dx$, and SetPrecision simply leaves $0$ exact.
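A quick check (a sketch, assuming a standard Mathematica kernel) makes the difference visible: the nonzero integer becomes an arbitrary-precision number with precision 5, while the zero stays exact with infinite precision.

Precision[SetPrecision[1, 5]]
5.

Precision[SetPrecision[0, 5]]
Infinity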
If you want to assign an absolute uncertainty to zero, you can set accuracy, which is defined as
$$\text{Accuracy}(x) =- \log_{10}(dx)$$
This can be done using the SetAccuracy command:
SetAccuracy[0, 5]
0.*10^-5
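The result really does carry an absolute uncertainty of $10^{-5}$, and significance arithmetic propagates it to further calculations. For example (again a sketch, assuming a standard kernel), adding the exact integer 1 to this zero yields a number of precision about 5:

Accuracy[SetAccuracy[0, 5]]
5.

1 + SetAccuracy[0, 5]
1.0000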