Sounds like you know what's up with IEEE numbers. So maybe you can tell me: do they satisfy axioms like ((x / y) * y) / y = x / y, for instance? What other axioms do they satisfy? Is there a finite basis for the identities that n-bit IEEE numbers satisfy for all n? In short, what do IEEE numbers look like through the lens of universal algebra?

Jim Propp

On Wednesday, April 15, 2015, William Ackerman <wba@alum.mit.edu> wrote:

"Pure" IEEE floating point addition is most emphatically commutative. Same for multiplication. What I mean is the fundamental operation as a manipulation of 64-tuples of bits. It is also completely deterministic, notwithstanding the huge amount of mistaken folklore to the contrary.
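Propp's candidate identity can at least be probed empirically. Here is a minimal sketch (the function name `probe` is my own, not anything from the thread): it draws random doubles, checks that + and * commute bit-for-bit as Ackerman says, and searches for counterexamples to ((x / y) * y) / y = x / y. A search like this proves nothing about all 64-bit doubles, of course; it only hunts for counterexamples.

```python
# Empirical probe of two candidate "axioms" for IEEE doubles:
#   1. commutativity of + and * (guaranteed by the rounding rule:
#      the infinitely precise result is symmetric in the operands),
#   2. Propp's identity ((x/y)*y)/y == x/y (this code merely searches
#      for counterexamples; it settles nothing).
import random

def probe(trials=100_000, seed=2015):
    rng = random.Random(seed)          # fixed seed: the probe itself is deterministic
    commut_fail = ident_fail = 0
    for _ in range(trials):
        x = rng.uniform(-1e6, 1e6)
        y = rng.uniform(-1e6, 1e6)
        if y == 0.0:
            continue
        if x + y != y + x or x * y != y * x:
            commut_fail += 1
        q = x / y
        if (q * y) / y != q:
            ident_fail += 1
    return commut_fail, ident_fail

commut_fail, ident_fail = probe()
```

Since Python floats are IEEE 64-bit doubles, `commut_fail` comes back 0, matching Ackerman's point about commutativity.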
The algorithm is simple to describe: compute the *infinitely precise* result, and round it (round-to-nearest, ties-to-even -- binary bankers' rounding!) to the 64-bit double format, which carries 53 significand bits. Simplicity itself! There are also hairy rules about denorms and such. But it's still deterministic. You are designing hardware and you find that specification difficult to support? Get out of the kitchen.
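The ties-to-even rule is easy to watch in action. The odd integer 2^53 + 1 sits exactly halfway between the two nearest representable doubles, 2^53 and 2^53 + 2, and the tie goes to the neighbor whose significand is even:

```python
# Round-to-nearest, ties-to-even ("bankers' rounding") on exact halfway cases.
# Between 2**53 and 2**54 the spacing of doubles is 2, so every odd integer
# in that range is an exact tie between its two neighbors.
halfway_down = float(2**53 + 1)   # tie between 2**53 and 2**53 + 2 -> even significand wins
halfway_up   = float(2**53 + 3)   # tie between 2**53 + 2 and 2**53 + 4 -> even significand wins

assert halfway_down == float(2**53)        # rounded down to the even neighbor
assert halfway_up   == float(2**53 + 4)    # rounded up to the even neighbor
```

Note the two ties go in opposite directions: the rule is not "round half up" but "round half to whichever neighbor has an even last significand bit," which avoids a systematic upward drift.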
All modern hardware (that is, anything from the last many years) does this.
Now, hardware often supports a more precise significand internally: 64 bits instead of 53, both for internal reasons and to allow "hidden" operations that make library functions more precise. Such operations are often called "multiply-accumulate" or "fused multiply-add". The actual registers are typically 80 bits. Nothing wrong with that. Compilers often "help" to make your code "more accurate" by using these operations, so that intermediate results of a long computation are kept in registers, at 80 bits. Plenty wrong with that: it makes your code nondeterministic. The method outlined by Tom Karzes will fix that, by storing all intermediate results in (64 bit!) memory words.
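The difference between rounding once (as a fused multiply-add does) and rounding twice (as separate multiply and add do) is easy to exhibit. Python floats are plain 64-bit doubles with no hidden extended precision, so the sketch below emulates the single-rounding result exactly with `fractions.Fraction`; this is an illustration of the arithmetic, not the hardware instruction itself.

```python
# Single rounding vs. double rounding for a*b + c.
# Exact product: (1 + 2**-30) * (1 - 2**-30) = 1 - 2**-60.
from fractions import Fraction

a = 1.0 + 2.0**-30
b = 1.0 - 2.0**-30
c = -1.0

# Two roundings: a*b rounds to exactly 1.0 (the error 2**-60 is far
# below half an ulp of 1.0), so the subsequent add gives 0.0.
naive = a * b + c

# One rounding: compute the exact value, round once at the end,
# as a fused multiply-add would.  The tiny residual survives.
fused = float(Fraction(a) * Fraction(b) + Fraction(c))
```

Here `naive` is 0.0 while `fused` is -2^-60: same operands, different bit patterns, which is exactly why a compiler silently keeping intermediates at extended precision makes results depend on register allocation rather than on your source code.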
I have given talks on IEEE floating point, and how it is deterministic and not satanic at all, many times. I sometimes refer to the two camps as "fundamentalists" and "secular humanists". I am blue in the face.
[older stuff deleted]