ybeltukov showed empirically that $g(x)$ is numerically more precise than $f(x)$, despite the two expressions being formally mathematically equal. Why is this the case?
In your original question, you guessed that

> If you could explain why, that would really help me out, I have a feeling it's $f(x)$ as there isn't a denominator but I'm not 100% certain.
To explain why $g(x)$ is more accurate, recall Manuel --Moe-- G's comment:
> ...the main culprit for loss of precision is adding (or subtracting) a very large number to a very small number, as opposed to adding two numbers of comparable size. Division and multiplication have near the same effect.
In the case of $f(x)$, for large $x$ the factor $\sqrt{x+1}-\sqrt{x}$ is the difference of two nearly equal numbers, so most of their significant digits cancel in the subtraction; multiplying the tiny, inaccurate difference by the large factor $x$ then amplifies the error. This catastrophic cancellation for large $x$ is consistent with the first figure in ybeltukov's answer.
In contrast, $g(x)$ is the ratio of two large numbers whose sizes aren't that much different (or at least, become more different at a slower rate than in the case of $f(x)$).
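The same mechanism is easy to demonstrate outside Mathematica, since machine-precision numbers are just IEEE-754 doubles. Here is a quick sketch in Python (an illustration, not part of the original answer); at $x = 10^{14}$ the subtraction in $f(x)$ already destroys most of the significant digits:

```python
from math import sqrt

def f(x):
    # x * (sqrt(x+1) - sqrt(x)): the subtraction cancels almost all digits
    return x * (sqrt(x + 1) - sqrt(x))

def g(x):
    # x / (sqrt(x+1) + sqrt(x)): algebraically the same value, no cancellation
    return x / (sqrt(x + 1) + sqrt(x))

x = 1.0e14  # true value of both expressions is ~5.0e6 here
print(f(x))  # only the leading digits survive the cancellation
print(g(x))  # accurate to machine precision
```

Both expressions should equal $x/(\sqrt{x+1}+\sqrt{x}) \approx 5\times 10^6$ at this $x$, but $f$ is already wrong in the third significant digit while $g$ agrees to nearly all 16 digits of a double.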
More formal explanation
First, examine $f(x)$. The first term $x$ grows like $x$, whereas the second term $\sqrt{x+1}-\sqrt{x}$ decays like $1/(2\sqrt{x})$:
Series[Sqrt[x + 1] - Sqrt[x], {x, Infinity, 2}]
(*Sqrt[1/x]/2 - 1/8 (1/x)^(3/2)*)
This means that the relative sizes of the terms diverge as $x^{3/2}$.
Now examine $g(x)$. The numerator $x$ again grows like $x$, whereas the denominator $\sqrt{x+1}+\sqrt{x}$ grows like $2\sqrt{x}$:
Series[Sqrt[x + 1] + Sqrt[x], {x, Infinity, 2}]
(*2 Sqrt[x] + Sqrt[1/x]/2 - 1/8 (1/x)^(3/2)*)
So here the ratio of the two quantities diverges only as $x^{1/2}$, slower than the $x^{3/2}$ rate for $f(x)$, and consequently $g(x)$ remains accurate out to much larger $x$.
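As a cross-check of this scaling argument, here is a Python sketch (again an illustration with IEEE-754 doubles, which behave like Mathematica's machine-precision numbers), using the standard-library `decimal` module to compute a high-precision reference value:

```python
from math import sqrt
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50-digit reference arithmetic

def f(x):
    return x * (sqrt(x + 1) - sqrt(x))

def g(x):
    return x / (sqrt(x + 1) + sqrt(x))

def rel_err(approx, x):
    # high-precision reference value of x / (sqrt(x+1) + sqrt(x))
    ref = Decimal(x) / ((Decimal(x) + 1).sqrt() + Decimal(x).sqrt())
    return abs(Decimal(approx) - ref) / ref

for x in [10**8, 10**10, 10**12, 10**14]:
    print(x, float(rel_err(f(x), x)), float(rel_err(g(x), x)))
```

The relative error of $f$ grows steadily with $x$ (reaching percent level by $x=10^{14}$), while the relative error of $g$ stays pinned near machine epsilon throughout.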