That's a very good point. I have experienced that problem before, where seemingly irrelevant changes resulted in slight differences in floating-point results. Eventually, I decided that determinism was more important than a little unpredictable extra precision, so I started compiling that particular app with the following gcc option:

  -ffloat-store
      Do not store floating-point variables in registers, and inhibit
      other options that might change whether a floating-point value is
      taken from a register or memory.

      This option prevents undesirable excess precision on machines such
      as the 68000 where the floating registers (of the 68881) keep more
      precision than a double is supposed to have.  Similarly for the
      x86 architecture.  For most programs, the excess precision does
      only good, but a few programs rely on the precise definition of
      IEEE floating point.  Use -ffloat-store for such programs, after
      modifying them to store all pertinent intermediate computations
      into variables.

That solved the problem, but I found it unsettling. Floating point can be extremely annoying.

Tom

Henry Baker writes:
At 04:23 PM 4/14/2015, Mike Stay wrote:
Is addition commutative?
Probably not, depending upon which computer you use.
Leaving aside issues of NaNs, signaling, and exponents, some computers have larger mantissas for "accumulation", so that very long iterated sums still "work". E.g., Intel x86 "extended precision" floats with a 64-bit mantissa.
http://en.wikipedia.org/wiki/Extended_precision
Since you're now adding numbers of two different mantissa sizes, they are highly unlikely to be commutative.
"satanism" is an anagram for "mantissa"; now do you see why floating point is so hard?
http://en.wiktionary.org/wiki/mantissa
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun