The practical problem is that putting off division typically blows up the sizes of the eventual numerators & denominators, so that the total number of bit operations actually increases. I keep running into this problem in many guises, most recently while trying to compute more-or-less exact intersections of facets in STL (3D printing) files.

We need better methods for initially approximating data with rational numbers (or some other exact field) in such a way that subsequent operations don't explode in bit complexity. We already know that initially approximating data with, say, single-precision numbers can require triple, quadruple, or higher precision in intermediate results. Perhaps there is a way to _gradually increase the precision_ to keep the computation sane, but I'm not aware of any elegant way to do this. (A small sketch of the blow-up appears below the quoted reference.)

At 04:28 PM 3/13/2014, Warren D Smith wrote:
Volker Strassen: Vermeidung von Divisionen [Avoiding divisions], Journal für die reine und angewandte Mathematik 264 (1973) 184-202. https://eudml.org/doc/151394
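
To make the blow-up concrete, here is a minimal sketch in Python (standard library only). The 24-bit inputs standing in for single-precision data and the pairwise-sum tournament are illustrative assumptions, not the actual STL facet computation; the point is just that postponing every division/gcd doubles the denominator's bit length at each level, while reducing as you go keeps it bounded.

from fractions import Fraction
import random

random.seed(0)

# Rationals standing in for single-precision inputs:
# 24-bit numerators over a fixed denominator 2^24.
vals = [Fraction(random.getrandbits(24), 1 << 24) for _ in range(16)]

unreduced = [(v.numerator, v.denominator) for v in vals]  # postpone all gcds
reduced = list(vals)                                      # Fraction reduces eagerly

# Tournament of pairwise sums: a/b + c/d = (a*d + c*b)/(b*d).
while len(unreduced) > 1:
    unreduced = [(a*d + c*b, b*d)
                 for (a, b), (c, d) in zip(unreduced[::2], unreduced[1::2])]
    reduced = [p + q for p, q in zip(reduced[::2], reduced[1::2])]
    (_, d), r = unreduced[0], reduced[0]
    print(f"{len(unreduced):2d} sums left |"
          f" unreduced denominator: {d.bit_length():4d} bits |"
          f" reduced: {r.denominator.bit_length()} bits")

After four levels the unreduced denominators are exactly (2^24)^16, i.e. 385 bits and still doubling, while the reduced denominators always divide 2^24 and never exceed 25 bits. Of course, in problems like the facet intersections the gcds don't cooperate so nicely, which is exactly the difficulty.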