Beware of Fast-Math
Alternative number representations (fixed point, rationals, posits)
- Several comments advocate fixed-point and rational arithmetic (Forth, Scheme, Lisp) as safer for many real-world quantities (money, many engineering problems).
- Rationals work well until you need trig/sqrt/irrationals; then you need polynomial/series methods or CORDIC.
- Disagreement over “floats are just fixed-point in log space”: some argue scaled integers can be faster and adequate across many domains.
- Interest in IEEE-adjacent work on alternatives such as posits; a current draft standard was mentioned, though it does not yet include full posit support, and only early hardware prototypes exist.
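The rational-arithmetic trade-off above can be sketched with Python's stdlib `fractions` module: sums stay exact indefinitely, but the first irrational operation (here a square root) forces a lossy fall-back to binary floating point.

```python
from fractions import Fraction
import math

# Rational arithmetic is exact: no rounding at any step.
tenth = Fraction(1, 10)
assert sum([tenth] * 10) == 1        # exactly one

# The same sum in binary floating point drifts off by one ulp.
assert sum([0.1] * 10) != 1.0

# But the first irrational operation forces a lossy conversion:
root = math.sqrt(Fraction(2))        # coerced to float, no longer exact
print(root)
```

This is where the thread's point about polynomial/series methods or CORDIC comes in: to stay in rationals past a `sqrt` or trig call, you must approximate the function yourself to a chosen tolerance.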
Rust’s “algebraic”/relaxed floating operations
- Rust is adding localized “algebraic” float operations that set LLVM flags for reassociation, FMA, reciprocal-multiply, no signed zero, etc.
- These are meant to allow optimizations “as if” real arithmetic holds, but are explicitly allowed to be less precise per operation.
- Naming is contentious: “algebraic” vs “real_*”, “approximate_*”, or “relaxed_*”.
- They do not guarantee determinism across platforms or builds; behavior may vary with compiler optimizations and hardware.
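Why "algebraic" operations forfeit cross-build determinism can be shown in two lines of Python: IEEE 754 addition is not associative, so an optimizer that is free to regroup operands may legitimately produce a different answer.

```python
# IEEE 754 addition is not associative: regrouping changes the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left)      # 0.6000000000000001
print(right)     # 0.6
assert left != right
```

Both results are "correct" under relaxed semantics; strict IEEE semantics pins down one specific grouping, which is exactly what the algebraic operations give up.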
Fast-math, optimization levels, and IEEE 754
- Fast-math bundles many assumptions (no NaNs/infinities/subnormals, associativity, distributivity, etc.); feeding in values that violate those assumptions is undefined behavior.
- Contrast with -O2/-O3: those are supposed to preserve correctness; -Ofast (includes -ffast-math) is the “dangerous” one.
- Some see IEEE 754 as overly restrictive and hindering auto-vectorization; others argue the standard is essential for determinism and safety, and languages should expose intent (order matters vs not).
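One concrete hazard of the no-NaN assumption, sketched here in Python for illustration: the idiomatic self-comparison NaN test depends on exactly the IEEE semantics that a flag like `-ffinite-math-only` lets a C compiler assume away (folding `x != x` to false).

```python
nan = float("nan")

# IEEE 754: NaN compares unequal to everything, including itself,
# and all ordered comparisons with NaN are false.
assert nan != nan
assert not (nan < 0.0) and not (nan >= 0.0)

# The standard self-comparison NaN check. Under finite-math-only
# assumptions a compiler may fold `x != x` to false, silently
# breaking checks like this in C/C++ code.
def is_nan(x: float) -> bool:
    return x != x

assert is_nan(nan)
assert not is_nan(1.0)
```

Python itself always uses strict IEEE comparisons, so the demo shows the semantics fast-math discards rather than fast-math itself.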
Precision, reproducibility, and domain-specific needs
- Some scientific/physics workloads tolerate numerical noise far larger than rounding error; practitioners in those areas report big speedups from fast-math.
- Others (CAD, robotics, semiconductor optics) say last-bit precision and strict IEEE behavior critically matter.
- Reproducibility is a major concern (e.g., audio pipelines, ranking/scoring algorithms, cross-version consistency). Fast-math can change results between builds or platforms.
- FTZ/DAZ: criticized because they’re controlled via thread-global FP state; a shared library built with unsafe math can silently change behavior in unrelated code.
- Tools/practices: Kahan summation, Goldberg's paper, Herbie for accuracy-oriented rewrites, feenableexcept/trapping NaNs, and proposals for languages that track precision (dependent types, Ada-style numeric specs).
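A sketch of the compensated summation mentioned above — here Neumaier's variant of Kahan's algorithm, which also handles the case where a term exceeds the running total; Python's `math.fsum` is shown for comparison as a correctly rounded reference.

```python
import math

def neumaier_sum(xs):
    """Kahan-style compensated summation (Neumaier's variant):
    carries the low-order bits lost by each addition in `comp`."""
    total = 0.0
    comp = 0.0                       # running compensation term
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x  # low bits of x were lost
        else:
            comp += (x - t) + total  # low bits of total were lost
        total = t
    return total + comp

data = [1.0, 1e100, 1.0, -1e100]     # true sum is 2.0
print(sum(data))                     # naive left-to-right: 0.0
print(neumaier_sum(data))            # 2.0
print(math.fsum(data))               # correctly rounded: 2.0
```

Note that compensated summation relies on strict IEEE semantics: under fast-math the compiler may algebraically simplify `(total - t) + x` to zero and delete the compensation entirely — which is precisely the reproducibility trap the thread warns about.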
Money and floating point
- Strong camp: never use binary floats for currency; prefer integers as cents or fixed-point/decimal; easier reasoning and exact sums.
- Counter-camp: many trading systems use double successfully; with a 53-bit significand you can represent typical money ranges to sub-cent precision, and rounding can be managed.
- Distinction drawn between accounting (needs predictability and “obvious” cents-level correctness) vs modeling/forecasting/trading (can tolerate tiny FP error).
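A Python sketch of the first camp's advice: integer cents and `decimal.Decimal` keep sums exact and make the rounding policy explicit. The prices and the 8.25% tax rate below are invented for illustration, not taken from the thread.

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Binary floats: ten 10-cent items don't make a dollar.
assert sum([0.1] * 10) != 1.0

# Integer cents: sums are exact and trivially auditable.
prices_cents = [1999, 550, 305]      # $19.99, $5.50, $3.05
assert sum(prices_cents) == 2854     # $28.54, exactly

# decimal.Decimal: exact decimal digits plus an explicit rounding
# policy (banker's rounding here), applied where the business says so.
subtotal = Decimal("28.54")
tax = (subtotal * Decimal("0.0825")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(tax)                           # 2.35
```

The counter-camp's point survives this sketch: doubles can carry the same values to sub-cent precision; the disagreement is about where rounding is managed and how obvious the correctness argument needs to be.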