10

I see very few non-floating-point computing libraries/packages around. Given the well-known inaccuracies of floating-point representation, this raises the question of why there aren't at least some fields where the exactness of fixed-point representation would be worth the intricacies of working with it.

Are there any MAJOR difficulties in using, say, a fixed-point eigenvalue solver? How slow or fast, and how accurate or inaccurate, would it be?

Related: this and this

Milind R
  • 607
  • 5
  • 16
  • Milind R, thank you for your question. I think your question is interesting, but probably inappropriate for the site. I urge you to look at the site FAQ for guidance. When I look at your question, I get the impression that it is the beginning of a rant, although I think the elements of a site-appropriate question are present. It is worth asking if there are many applications of integer arithmetic and fixed-point arithmetic in computational science, and asking for a comparison of those arithmetics to floating point. I encourage editing your post. – Geoff Oxberry Mar 01 '13 at 16:45
  • Yes it was born of a rant, but I phrased it as seeking a justification for the status quo. My question, as you can surmise, is about why we cannot have a major shift towards integer and fixed-point math in intensive numerics. Can you please edit it on my behalf? I really tried, but I don't know how my question isn't appropriate. – Milind R Mar 01 '13 at 17:00
  • 5
    I think there is an objective technical answer to this: if you run almost any scientific computation (say, a linear solve), the number of bits required for exact storage grows exponentially in time. Thus, strong support for inexactness is required for useful work. – Geoffrey Irving Mar 01 '13 at 17:06
  • @MilindR: The computational geometry community has been interested in real-number computations that are highly performant and exact at the same time. I guess that all practical issues relevant to you can be observed in this area of research. An example you could search for is the library LEDA. – shuhalo Mar 01 '13 at 17:09
  • @GeoffreyIrving What about zeros in triangular matrices? Can't they be stored as anything other than inexact error prone floating point? – Milind R Mar 01 '13 at 17:19
  • @GeoffOxberry: You can see the direction this question is taking. If you still think there's zero hope of a useful objective Q&A thread with concrete answer, please delete it. – Milind R Mar 01 '13 at 17:21
  • @GeoffOxberry: And how come my question was closed by only one mod? No offense, but I always have seen questions voted to be closed by multiple mods. – Milind R Mar 01 '13 at 17:26
  • @MilindR: On any Stack Exchange site, a mod vote automatically brings a question to the close vote threshold; otherwise, 5 users would have to vote to close. Therefore, what happened here can (and does) happen on any other site. If I could vote to close this question while keeping it open, I would; I typically avoid voting to close unless other people have voted also. Here, I want to encourage what I think is an idea that needs to be addressed and also head off a flame war (which is even more work to mod, and why I chose to vote to close when I normally elect not to). – Geoff Oxberry Mar 01 '13 at 18:13
  • @all : Mea Culpa – Milind R Mar 01 '13 at 23:54

4 Answers

5

The use of fixed-point arithmetic can be appropriate under certain circumstances. For scientific computing (at least in the sense that most people think of it) it is generally not appropriate, because of the need to express the large dynamic ranges encountered. You mention eigenvalue problems as an example, but very often in science one is interested in the smallest eigenvalues of a matrix (say, in computing the ground state of a quantum system). The accuracy of small eigenvalues will generally deteriorate badly relative to the large eigenvalues if you use fixed point. If your matrix contains entries that vary by large ratios, the small eigenvalues might be completely unrepresentable in the working precision. This is a problem with the representation of numbers; these arguments hold regardless of how you do the intermediate computations. You could possibly work out a scaling to apply to the computed results, but now you've just invented floating point. It is easy to construct matrices whose elements are well behaved but whose eigenvalues are exceedingly poorly behaved (like Wilkinson matrices, or even matrices with entirely integer entries). These examples are not as pathological as they might seem, and many problems at the cutting edge of science involve very poorly behaved matrices, so using fixed point in this context is a Bad Idea(TM).
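
To make the dynamic-range point concrete, here is a minimal sketch, assuming a hypothetical Q16.16 fixed-point format (16 fractional bits; any fixed scale shows the same behavior):

```python
import numpy as np

FRAC_BITS = 16                  # assumed working format: 16 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Round a real number to the nearest value representable in the fixed format."""
    return round(float(x) * SCALE) / SCALE

# Eigenvalues spanning ten orders of magnitude -- nothing exotic in practice.
A = np.diag([1.0e4, 1.0e-6])
for lam in np.linalg.eigvalsh(A):
    print(f"float64: {lam:<12g}   fixed point: {to_fixed(lam):g}")

# The 1e4 eigenvalue survives, but 1e-6 rounds to 0 in the fixed format:
# the small eigenvalue is lost before any computation even begins.
```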

You might argue that you know the magnitude of the results and do not want to waste bits on an exponent, so let's talk about the intermediates. Using fixed point will generally exacerbate the effects of catastrophic cancellation and roundoff unless you go to great pains to work in higher precision. The performance penalty would be huge, and I would conjecture that a floating-point representation with the same mantissa width would be both faster and more accurate.
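
Here is a minimal sketch of that effect, again assuming a Q16.16 working format with a multiply that truncates back to the working width (as a fixed-point implementation without widened intermediates would):

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def fx(x):
    """Encode a real number as a Q16.16 integer."""
    return int(round(x * SCALE))

def fx_mul(a, b):
    """Multiply two Q16.16 values, truncating back to Q16.16."""
    return (a * b) >> FRAC_BITS

xs = [0.003] * 1000
ys = [0.002] * 1000

acc = 0
for x, y in zip(xs, ys):
    acc += fx_mul(fx(x), fx(y))                           # every product truncates to 0

print("fixed point:", acc / SCALE)                        # 0.0
print("float64:    ", sum(x * y for x, y in zip(xs, ys))) # about 6.0e-03
```

A double-width accumulator rescues this particular case, but then every operation in the algorithm needs the same kind of hand analysis, which is exactly the performance and effort penalty described above.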

One area where fixed point can shine is in certain areas of geometric computing. Especially if you need exact arithmetic or know the dynamic range of all the numbers beforehand, fixed point lets you take advantage of all of the bits in your representation. For example, suppose you want to compute the intersection of two lines whose endpoints have been normalized to sit in the unit square. In this case, the intersection point can be represented with more bits of precision than with an equivalent floating-point number (which wastes bits on the exponent). Now, it is almost certainly the case that the intermediate numbers in this calculation need to be computed in higher precision, or at least handled very carefully (for instance, when dividing the product of two numbers by a third, the product needs roughly twice the bits before the division). In this respect, fixed point is advantageous more from the representation standpoint than from a computational standpoint, and I would go so far as to say this is generally true whenever you can establish definite upper and lower bounds on the dynamic range of your algorithm's outputs. This happens rarely.
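
A sketch of that unit-square scenario, assuming endpoints snapped to a 2^16 integer grid (the grid resolution is an arbitrary choice); note that the determinants in the middle need roughly twice the bits of the inputs:

```python
from fractions import Fraction

GRID = 1 << 16                          # assumed grid for coordinates in [0, 1]

def snap(x):
    """Quantize a coordinate in [0, 1] to the integer grid."""
    return round(x * GRID)

def intersect(p1, p2, p3, p4):
    """Exact intersection of the lines p1p2 and p3p4 (integer endpoints)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)  # ~2x the input bits
    det12 = x1 * y2 - y1 * x2
    det34 = x3 * y4 - y3 * x4
    px = Fraction(det12 * (x3 - x4) - (x1 - x2) * det34, denom)
    py = Fraction(det12 * (y3 - y4) - (y1 - y2) * det34, denom)
    return px, py

a = ([snap(v) for v in (0.1, 0.1)], [snap(v) for v in (0.9, 0.9)])
b = ([snap(v) for v in (0.1, 0.9)], [snap(v) for v in (0.9, 0.1)])
print(intersect(*a, *b))    # (32768, 32768), i.e. exactly (0.5, 0.5) on the grid
```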

I used to think that floating-point representations were crude or inaccurate (why waste bits on an exponent?!). But over time I've come to realize that it really is one of the best possible representations for real numbers. Quantities in nature show up on log scales, so real data ends up spanning a large range of exponents. Also, achieving the highest possible relative accuracy requires working on a log scale, which makes tracking an exponent the natural thing to do. The only other contender for a "natural" representation is the symmetric level index. However, addition and subtraction are much slower in that representation, and it lacks the hardware support of IEEE 754. A tremendous amount of thought was put into the floating-point standards by a pillar of numerical linear algebra (William Kahan). I would think he knows what the "right" representation of numbers is.
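
A quick numeric way to see the log-scale point (the 2^-16 step stands in for an assumed Q*.16 fixed-point format):

```python
import numpy as np

FIXED_STEP = 2.0 ** -16     # absolute spacing of an assumed Q*.16 fixed-point format
for x in (1e-6, 1.0, 1e6):
    print(f"x = {x:8.0e}   float64 relative spacing = {np.spacing(x) / x:.1e}"
          f"   Q*.16 relative spacing = {FIXED_STEP / x:.1e}")

# float64 stays near 2e-16 at every magnitude; the fixed-point column goes
# from 1.5e-11 at x = 1e6 to about 15 (no correct digits at all) at x = 1e-6.
```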

Victor Liu
  • 4,480
  • 18
  • 28
4

As an example of why exact arithmetic/fixed point arithmetic is so rarely used, consider this:

  • In the finite element method, as in almost every other method used in scientific computing, we arrive at linear or nonlinear systems that are only approximations to the real world. For example, in the FEM, the linear system we solve is only an approximation of the original partial differential equation (which may itself be only an approximation of the real world). So why put enormous effort into solving exactly something that is only an approximation?

  • Most of the algorithms we use today are iterative in nature: Newton's method, Conjugate Gradients, etc. We terminate these iterations whenever we are satisfied that the current iterate is a sufficiently accurate approximation to the solution of the problem. In other words, we terminate before we have the exact solution (the sketch after this list makes this concrete). As before, why use exact arithmetic for an iterative scheme when we know that we're only computing approximations?
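
As a minimal sketch of the second point (with an arbitrary tolerance of 1e-12), here is Newton's method for computing sqrt(2). Everything beyond the tolerance is thrown away by design, so carrying each step in exact arithmetic would be wasted effort:

```python
def newton_sqrt2(tol=1e-12, max_iter=50):
    """Newton's method for f(x) = x^2 - 2, stopped at a chosen tolerance."""
    x = 1.0
    for _ in range(max_iter):
        x_new = 0.5 * (x + 2.0 / x)     # Newton update
        if abs(x_new - x) < tol:        # stop once the approximation is good enough
            return x_new
        x = x_new
    return x

print(newton_sqrt2())   # 1.4142135623..., accurate to about 1e-12 by construction
```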

Wolfgang Bangerth
  • 55,373
  • 59
  • 119
  • It's frustrating to admit, but yeah, your answer basically crucifies large scale use of exact computation. I guess I won't be seeing the back of float any time soon. – Milind R Mar 05 '13 at 04:05
  • @MilindR: I'm not quite sure what you're aiming at. You seem to have a hammer and are frustrated that nobody has a nail or thinks that a hammer is a useful tool. But it's not because we don't like you -- we've thought about these issues for a long time and simply decided that the screwdriver we have is the proper tool. I find nothing frustrating about it (unless you have a hammer) as it's just a pragmatic approach -- why use exact arithmetic when we only do approximations? – Wolfgang Bangerth Mar 06 '13 at 13:33
  • It's frustrating because a perfectly normal problem could be so badly conditioned that it's effectively insoluble. And also because the ideal of arbitrary precision looked so promising compared to the inexact nature of floating point, right from storing the value to outputting it. – Milind R Mar 07 '13 at 01:06
  • The problem is that rounding errors are exceedingly hard to analyse. I realized this the day I started learning numerical analysis and numerical linear algebra. So a system that completely avoids the problem, making conditioning a non-issue, should be taking the world by storm, right? That was my thinking. Of course I understand the limitations, but they seemed more like irritants than dealbreakers. Kind of like the increased difficulty of scaling down transistors in processors: yes, it's difficult to analyse, but Intel still does it. – Milind R Mar 07 '13 at 01:09
  • 2
    If a problem is so ill-conditioned that it's difficult to solve, then its solution is not stable to perturbations. That's a problem with the original problem, not the floating point representation. Yes, maybe you can get a solution to the problem using exact representation. But the solution is not stable and so is likely not going to have anything to do with what you're really looking for. You're barking up the wrong tree if you think that the representation of numbers is the problem. – Wolfgang Bangerth Mar 07 '13 at 03:43
3

If you look at this library for correct rounding, CRlibm, you will see in the documentation that, in general, the algorithms must be proven accurate (with reasoned proofs). Why? The stability and speed of convergence of an algorithm do not have a "one-size-fits-all" answer. In short, there is "no free lunch": you have to do the work to prove that your reasoning is correct. This is due to the behavior of the functions being modeled, not the underlying hardware (whether you use integer or floating-point units; yes, both have "gotchas", like overflow/underflow, denormal numbers, etc.). Even if the result you are looking for converges to an integer, the algorithm used to find that result is not necessarily stable.
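
A classic illustration of that last point is Jean-Michel Muller's recurrence, whose exact limit is the integer 6 but which any rounding error, floating point or fixed point alike, drives to 100 instead. A short sketch comparing exact rational arithmetic with float64:

```python
from fractions import Fraction

def muller(n, exact=False):
    """u_{k+1} = 111 - 1130/u_k + 3000/(u_k * u_{k-1}), with u_0 = 2, u_1 = -4."""
    cast = Fraction if exact else float
    a, b = cast(2), cast(-4)
    for _ in range(n - 1):
        a, b = b, 111 - 1130 / b + 3000 / (a * b)
    return b

print(float(muller(30, exact=True)))   # ~6.006 (exact rationals, heading to 6)
print(muller(30, exact=False))         # ~100.0 (float64 rounding error takes over)
```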

Eigen is a C++ library that provides a variety of algorithms for matrix decompositions and linear solves, each with different properties. This page contains a table discussing the speed vs. accuracy trade-offs of the various decompositions. I suspect the Eigen library can do what you want. :-)

mda
  • 141
  • 3
  • Thanks, very informative, and a nice link. But doesn't the use of fixed point, along with a limited amount of rounding, result in more accurate outputs? Since the representation itself is exact to begin with, unlike floating point? – Milind R Mar 03 '13 at 04:37
  • 1
    I suggest that you attack your problem from another point of view. In introduction to logic, you learn that there are three parts to a problem's solution: definitions, reasoning, and conclusion/result. You are probably (as most of us) very used to working mostly on the "definitions" step of problem solving -- usually you can "define away" your problem; however, if you become frustrated, occasionally you have encountered a more difficult type of problem that requires more work in the "reasoning" part. – mda Mar 03 '13 at 05:10
  • I only vaguely understand you... I can't see where I can "define away" this problem, the reasoning is essential. – Milind R Mar 07 '13 at 01:11
  • Several years later, I actually understand you :-) – Milind R Jul 06 '18 at 12:29
2

For some nice examples of where high-precision arithmetic has been useful in mathematics, take a look at the book Mathematics by Experiment by Jonathan Borwein and David Bailey. There's also this sequel, which I haven't read.

David Ketcheson
  • 16,522
  • 4
  • 54
  • 105