A method of representing numbers with a fixed number of significant digits scaled by an exponent of some base. Such numbers take the form
$\text{(significant digits)} \times \text{base}^{\text{exponent}}$.
Typically, numbers are represented with respect to base = 2 (binary).
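For instance, Python's `math.frexp` exposes exactly this base-2 decomposition of a double:

```python
import math

# frexp splits a float into significand * 2**exponent,
# with the significand normalized into [0.5, 1).
m, e = math.frexp(6.5)
print(m, e)  # 0.8125 3, since 6.5 = 0.8125 * 2**3
```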
Questions tagged [floating-point]
159 questions
37 votes · 2 answers
When should log1p and expm1 be used?
I have a simple question that is really hard to Google (besides the canonical What Every Computer Scientist Should Know About Floating-Point Arithmetic paper).
When should functions such as log1p or expm1 be used instead of log and exp? When should…
Tim
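A minimal sketch of the usual advice: for |x| much smaller than 1, forming 1 + x rounds away the contribution of x entirely, so the naive forms lose all accuracy while `log1p`/`expm1` stay accurate to machine precision:

```python
import math

x = 1e-16

# 1.0 + 1e-16 rounds to exactly 1.0 (it is below half an ulp of 1.0),
# so the naive forms return exactly zero:
print(math.log(1.0 + x))   # 0.0
print(math.exp(x) - 1.0)   # 0.0

# log1p and expm1 evaluate log(1+x) and exp(x)-1 without forming 1+x:
print(math.log1p(x))       # ~1e-16
print(math.expm1(x))       # ~1e-16
```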
21 votes · 4 answers
How to determine the amount of FLOPs my computer is capable of
I would like to determine the theoretical number of FLOPs (Floating Point Operations) that my computer can do. Can someone please help me with this? (I would like to compare my computer to some supercomputers just to get an idea of the difference…
Ol' Reliable
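A complementary empirical sketch (not the theoretical peak the question asks for): time a known number of floating-point operations. Pure Python carries heavy interpreter overhead, so this badly underestimates the hardware; it only illustrates the measurement idea:

```python
import time

n = 10**6
t0 = time.perf_counter()
acc = 0.0
for i in range(n):
    acc += 1.0          # n floating-point additions
elapsed = time.perf_counter() - t0

# Achieved (not peak) rate, dominated here by interpreter overhead:
print(f"{n / elapsed / 1e9:.4f} GFLOPs achieved in interpreted Python")
```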
16 votes · 7 answers
Robust computation of the mean of two numbers in floating-point?
Let x, y be two floating-point numbers. What's the right way to compute their mean?
The naive way (x+y)/2 can overflow when x and y are too large. I think 0.5 * x + 0.5 * y may be better, but it involves two multiplications (which maybe is…
a06e
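A quick demonstration of the failure mode and one common fix (halve first, then add), sketched for doubles near the overflow threshold:

```python
import math
import sys

big = sys.float_info.max        # ~1.798e308, the largest finite double

naive = (big + big) / 2         # big + big overflows to inf first
robust = big / 2 + big / 2      # dividing by 2 is exact, and the sum is big

print(naive)   # inf
print(robust)  # 1.7976931348623157e+308
```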
11 votes · 3 answers
Relative comparison of floating point numbers
I have a numerical function f(x, y) returning a double floating point number that implements some formula and I want to check that it is correct against analytic expressions for all combination of the parameters x and y that I am interested in. What…
Ondřej Čertík
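For scalar relative comparisons, the standard-library `math.isclose` already implements the usual recipe (relative tolerance plus an absolute floor for values near zero):

```python
import math

a = 1.0 + 1e-9
b = 1.0

print(math.isclose(a, b, rel_tol=1e-6))   # True: within relative tolerance
print(math.isclose(a, b, rel_tol=1e-12))  # False: difference too large

# Near zero a pure relative test degenerates, so combine an absolute floor:
print(math.isclose(1e-300, 0.0, abs_tol=1e-12))  # True
```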
8 votes · 6 answers
Testing equality of two floats: Realistic example
When does it typically make sense in programming to be testing the equality of two floating point numbers?
i.e.
a == b
where both a & b are floats.
My naive impression is that one would always test the difference against some tolerance epsilon.…
curious_cat
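One realistic split, sketched below: exact `==` is fine when a value was assigned or copied rather than computed, while arithmetic results generally need a tolerance:

```python
# Exact equality is appropriate when one float came verbatim from the other,
# e.g. checking a cached value against a re-read of the same literal:
cached = 0.1
reloaded = 0.1
print(cached == reloaded)   # True: both are the same rounded double

# But computed results generally differ in the last bits:
print(0.1 + 0.2 == 0.3)                 # False
print(abs((0.1 + 0.2) - 0.3) < 1e-12)   # True: tolerance test passes
```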
8 votes · 1 answer
What's the right way to compare vectors in floating-point?
I know that I should use a tolerance for comparing floating point numbers. But for comparing vectors, I can think of 3 possible solutions corresponding to different distance metrics:
Compare the components of each vector individually: the vectors…
japreiss
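Two of the candidate metrics can be sketched in a few lines (plain lists of floats; the choice between them is exactly what the question is about):

```python
import math

def close_componentwise(u, v, rel=1e-9):
    # Every component must pass its own relative test.
    return all(math.isclose(a, b, rel_tol=rel) for a, b in zip(u, v))

def close_euclidean(u, v, rel=1e-9):
    # The Euclidean distance must be small relative to ||u||.
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    norm = math.sqrt(sum(a * a for a in u))
    return diff <= rel * norm

u = [1.0, 1e-20]
v = [1.0, 0.0]
print(close_componentwise(u, v))  # False: 1e-20 vs 0.0 fails a relative test
print(close_euclidean(u, v))      # True: the difference is tiny next to ||u||
```

The example shows why the metrics disagree: a componentwise relative test is merciless on components that are tiny compared to the vector as a whole.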
6 votes · 1 answer
Cancellation problem in floating-point numbers
http://en.wikipedia.org/wiki/Floating_point#Addition_and_subtraction gives an example of the cancellation problem in floating-point numbers.
I don't understand why it is written:
The best representation of this difference is e = -1;…
user565739
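The effect behind the Wikipedia example can be reproduced in one line: subtracting nearly equal numbers discards the leading digits that agree, leaving only rounding error:

```python
x = 1e-16

# 1.0 + 1e-16 rounds to exactly 1.0 (below half an ulp of 1.0), so the
# subtraction returns 0.0 instead of the true difference 1e-16:
result = (1.0 + x) - 1.0
print(result)  # 0.0
```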
5 votes · 2 answers
Computing a ratio of exponential functions without overflow issues
I'm interested in computing pointwise values of the function $u(x) = \sinh(k-kx)/\sinh(k)$ for $x \in (0,1)$, where $k = 10^{4}$. A direct computation of course results in overflow issues due to the $\exp(k)$ factor. However, $u(x)$ only takes on…
Justin Dong
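A common rewrite for this kind of ratio (a sketch, not necessarily the accepted answer): factor out the dominant exponential so only decaying exponentials remain, using $\sinh(k(1-x))/\sinh(k) = e^{-kx}\,(1-e^{-2k(1-x)})/(1-e^{-2k})$:

```python
import math

def u(x, k):
    # sinh(k - k*x) / sinh(k) with only decaying exponentials;
    # -expm1(-t) computes 1 - exp(-t) accurately for small t.
    return math.exp(-k * x) * (-math.expm1(-2 * k * (1 - x))) / (-math.expm1(-2 * k))

# Matches direct evaluation where sinh does not overflow:
k, x = 2.0, 0.3
print(u(x, k), math.sinh(k - k * x) / math.sinh(k))

# For k = 1e4 the direct formula overflows, but the rewrite is fine:
print(u(0.5, 1e4))
```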
4 votes · 1 answer
Numerical accuracy of expression involving norm squared
I am computing the following quantity:
$$
\text{lhs} := ||a+b||^2 = ||a||^2 + 2a^\top b + ||b||^2 =: \text{rhs}
$$
for $a=c-d$, where $a,b,c,d$ are $n$-vectors.
Is there a rule of thumb for when I should have my program compute $\text{lhs}$ vs…
jjjjjj
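The numerical issue can be sketched with contrived vectors where $a \approx -b$: the direct form computes the small answer from small components, while the expanded form computes it as a difference of huge terms and loses it in rounding:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Contrived so that a + b = [1.0, 1.0] exactly, i.e. lhs is exactly 2.0:
a = [1e8, -1e8]
b = [-1e8 + 1.0, 1e8 + 1.0]

s = [x + y for x, y in zip(a, b)]
lhs = dot(s, s)                               # direct: ||a + b||^2
rhs = dot(a, a) + 2 * dot(a, b) + dot(b, b)   # expanded form

# rhs is assembled from terms of size ~1e16, so its rounding error
# can be comparable to the true answer 2.0 itself:
print(lhs, rhs)
```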
3 votes · 1 answer
How frequently does scientific code use the comparison NaN == NaN?
Reason for asking: from time to time, compilers and software floating-point library implementations have bugs w.r.t. comparisons with NaN. For instance, NaN == NaN incorrectly returns true,…
pmor
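For reference, the correct IEEE-754 behavior that such a buggy implementation would violate: NaN compares unequal to everything, itself included, which is why the standard idiom asks `isnan` instead of using `==`:

```python
import math

nan = float("nan")

print(nan == nan)       # False: NaN is unequal even to itself
print(nan != nan)       # True
print(math.isnan(nan))  # True: the portable way to detect NaN
```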
3 votes · 2 answers
Hack for using hardware to take square roots of 128 bit numbers
I need to take integer square roots $\lfloor \sqrt{n}\rfloor$ of (lots of) 128 bit numbers $n$. Calling gmp seems to take surprisingly long (though I can't tell for sure, since gmp routines are not showing up in the profiler information).
Is there a…
H A Helfgott
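A sketch of the idea the question gestures at, in Python (whose integers are arbitrary-precision): seed with the hardware double-precision sqrt, which is accurate to ~53 bits, then correct in exact integer arithmetic. `math.isqrt` serves as the reference:

```python
import math

def isqrt128(n):
    """Floor square root of n (n < 2**128), seeded by hardware sqrt."""
    if n < 2:
        return n
    s = int(math.sqrt(n))        # hardware sqrt: good to ~53 of 64 bits
    for _ in range(2):           # Newton steps in exact integer arithmetic
        s = (s + n // s) // 2
    while s * s > n:             # final fix-up: a step or two at most
        s -= 1
    while (s + 1) * (s + 1) <= n:
        s += 1
    return s

n = (1 << 127) + 12345
print(isqrt128(n) == math.isqrt(n))  # True
```

The fix-up loops make the result provably exact regardless of the seed quality; the good seed just keeps them short.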
3 votes · 1 answer
Associativity in floating point arithmetic failing by two values
Cross-posting from math.stackexchange, since there might be people here familiar with this topic.
Assume working in floating point arithmetic with finite precision, bounded exponent and rounding to nearest.
Let $x,y$ be positive. It is not hard to…
EEE
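The classic one-line illustration of the underlying non-associativity (rounding after each operation makes the grouping matter):

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds 0.1 + 0.2 first
right = a + (b + c)  # rounds 0.2 + 0.3 first

print(left == right)  # False
print(left, right)
```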
3 votes · 1 answer
IEEE-754 NaNs and missing data
I would like -if possible at all- to represent and handle missing data (in the statistical sense) within the standard IEEE-754 format. Seeing that for both SNaNs and QNaNs various bit representations are possible, I wonder if they all can arise from…
Quartz
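The bit patterns in question can be inspected directly; a sketch using `struct`: any NaN has all eleven exponent bits set and a nonzero 52-bit mantissa, and it is those payload bits one might repurpose for "missing data" (at the cost of relying on payload propagation, which IEEE-754 does not fully guarantee across operations and platforms):

```python
import struct

def double_bits(x):
    # Raw 64-bit pattern of a double, for inspecting NaN payloads.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

bits = double_bits(float("nan"))
exponent = bits & 0x7FF0000000000000
mantissa = bits & 0x000FFFFFFFFFFFFF

print(hex(bits))
print(exponent == 0x7FF0000000000000, mantissa != 0)  # True True for any NaN
```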
2 votes · 2 answers
Is there a Moore's law for floating-point precision, and what would it imply?
Moore's law states that the number of transistors on an integrated circuit grows exponentially, roughly doubling every 20 months. This affects the amount of memory available and the speed of computation, which roughly double at the same…
shuhalo
2 votes · 0 answers
How To Calculate Theoretical CPU FLOPS?
I actually found the formula for peak theoretical performance:
Node performance in GFlops = (CPU speed in GHz) x (number of CPU cores) x (CPU instructions per cycle) x (number of CPUs per node)
CPU speed and CPU cores are easy, but how can I know the…
Frankie
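Plugging hypothetical numbers into the formula quoted above (all hardware figures here are made up for illustration: a 2-socket node at 3.0 GHz, 8 cores per CPU, 16 double-precision FLOPs per core per cycle, e.g. two 256-bit FMA units):

```python
cpu_ghz = 3.0          # CPU speed in GHz
cores_per_cpu = 8      # number of CPU cores
flops_per_cycle = 16   # FLOPs per core per cycle (the hard-to-find factor)
cpus_per_node = 2      # number of CPUs per node

node_gflops = cpu_ghz * cores_per_cpu * flops_per_cycle * cpus_per_node
print(node_gflops)  # 768.0
```

The FLOPs-per-cycle factor is the one the asker is missing; it depends on the microarchitecture's vector width and FMA count and is usually found in the vendor's optimization manual.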