Hal wrote:
Does anyone know if arbitrary precision math works with formula type fractals in Fractint?
Jonathan wrote:
The arbitrary precision math has been implemented for only four fractal types: mandel, julia, manzpower, and julzpower.
===========================================

I have run up against the loss of precision of my Intel floating point processor on zooms in Fractint in several different images in the last week or two. Jim Muth has also mentioned reaching the limits of precision as well.

Running up against the precision limitation of current Intel floating point hardware seems to be happening more often -- most likely because the increased speed of computers lets users zoom more quickly to the depths that run into precision limits.

In an effort to ease this problem somewhat, I have the following proposal:

Could the same type of logic that detects when to switch from integer to floating point math be used to switch from using doubles to long doubles, as an alternative to (or possibly in addition to) switching to arbitrary precision math? <---<<

In particular, I'm guessing that using long doubles would involve doing at least the following:
- duplicating the existing floating point code (implemented with doubles),
- replacing all the double variables and arrays with long doubles,
- adding the logic to switch between the routines using the two data types,
- adding reading and writing of a third parameter for MATHTOLERANCE, similar to the existing support for MATHTOLERANCE in parameter files and GIF89a images, and
- adding code to support the new data type throughout Fractint.

Am I correct here? <---<<

I would be willing to participate in the effort involved to implement something like this (assuming it is practical) -- for the fractal type formula (even as involved as I suspect it may be), since many of Fractint's built-in fractals can be implemented as formulas. To give you an idea of my experience in this area, I have used long integers in calculating large prime numbers using the MS Visual C 6.0 Visual Studio compiler.
My current knowledge about Fractint's code is limited to what is described in Fractsrc.doc and Fractint.doc, and a quick look at the formula type code in parser.c.

Note that implementing long doubles for the fractal type 'formula' could remove some of the need to implement long doubles for each of Fractint's separate fractal types, since many of the built-in types can be calculated as formula files.

Oops! I just read this in the 20.0 fractint.doc:

Fractint Version 20.0 Page 154
--------------------------------
"Big as this zoom magnification is, your PC can do better using something called arbitrary precision math. Instead of using ***64 bit double*** precision to represent numbers, your computer software allocates as much memory as needed to create a data type supporting as many decimals of precision as you want."

Does this mean that Fractint already uses long doubles in its implementation of fractal type 'formula?' <---<<

However, I also note that FRACTSRC.DOC has:

"parser.c uses the MathTypes:
D_MATH: uses ***double precision*** routines, such as dStkMul, and FPUsincos
This is used if we have a FPU.
M_MATH . . .
L_MATH . . ."

This seems to conflict with the "64 bit double precision" in FRACTINT.DOC. I'm inclined to believe FRACTSRC.DOC...

It looks like someone tried to make use of long doubles (but I don't know for what) but had to back off (I think I remember seeing somewhere that it was due to some compiler's libraries not correctly implementing some operations using long doubles):

BIGPORT.H
-----------------
/* long double isn't supported */
/* impliment LDBL as double */
typedef double LDBL;

- Hal Lane
#########################
# hallane@earthlink.net #
#########################
Hal wrote:
I have run up against the loss of precision of my Intel floating point
...
Could the same type of logic that detects when to switch from integer to floating point math be used to switch from using doubles to long doubles as an alternative to (or possibly in addition to) switching to arbitrary precision math?
This is not a new idea. In one very intense weekend about ten years ago I converted double to long double across the board in Fractint and more or less got it to work. I shared this with the other authors and one fractal artist. The result more or less worked, but a lot (including formula type) was broken.

By way of background, the old Microsoft DOS compiler v. 9 used for Fractint has an 80-bit long double. This is perfect for our purposes because it is the data type actually used in 8087 hardware, so there is essentially no speed penalty. But there were way too many complications:

1. All the zoom logic has to support long double.
2. All the assembler interfaces break. This includes formula type.
3. Data stored in GIFs becomes non-portable -- 80-bit long double is not an IEEE standard, and not even a standard on the Intel platform (try sizeof(long double) on Linux; it's not 10).

You gain three whole orders of magnitude (or so), and then you are flat out of precision all over again. So we give you a virtual image the size of the orbit of Jupiter, and then you complain that you want a virtual image larger than the solar system <grin!> You can be sure that if we did this, folks would pound the zoom key a few times and resume the run-out-of-precision lament. Though I admit nobody has ever complained about the 10^1600 arbitrary precision limit (about ten visible universes enfolded in each other, one after the other) <even bigger grin!>

So while on the one hand you are quite right, on the other virtually the entire source code has to be massaged to gain three orders of magnitude. The effort would probably be better spent extending the scope of arbitrary precision, or getting arbitrary precision to work with SOI, which might speed up truly deep zooms. These days just a small amount of Fractint maintenance goes on, and while there are sometimes significant improvements in that small amount, large rewrites are ruled out with the current team's resources.
To give you an idea of my experience in this area, I have used long integers in calculating large prime numbers using the MS Visual C 6.0 Visual Studio compiler. My current knowledge about Fractint's code is limited to what is described in Fractsrc.doc and Fractint.doc and a quick look at the formula type code in parser.c.
Ahhh -- to the best of my knowledge, that compiler does not support long double. Microsoft took it out with Visual C. And as I said, the Linux long double is something else.
Does this mean that Fractint already uses long doubles in its implementation of fractal type 'formula?'
Not really ... math that stays in the math coprocessor is calculated at 80 bits. But all the in-memory variables are only 64 bits. So your data's life at 80 bits is brutish and short.
It looks like someone tried to make use of long doubles (but I don't know for what) but had to back off (I think I remember seeing somewhere that it was due to some compiler's libraries not correctly implementing some operations using long doubles):
BIGPORT.H
-----------------
/* long double isn't supported */
/* impliment LDBL as double */
typedef double LDBL;
You were misled. Wes Loewer uses long double only to support exponent handling in his arbitrary precision library. The infuriating thing is that the Linux long double does not allocate more space for the exponent, but only the mantissa, so under Linux the arbitrary precision library doesn't work as well. The old Microsoft 80-bit long double -- now dead except for us Fractint guys -- divides the 16 extra bits between the mantissa and exponent, so not only is the precision higher, but the dynamic range is higher too.

I am very wise and know a whole lot (a humongous amount!) from working on Fractint all these years. I also know how to set the timing and adjust the points of a car, develop film (color or black and white), and build electronic gadgets out of parts like transistors and tubes <grin!> All just about as useful as the Microsoft 80-bit long double in MSC 9 using the medium memory model! <sigh!>

Not to worry, other than writing these occasional notes on the Fractint list I spend little time fussing about the past! Gotta get back to supporting my wife's dual-CPU Dell laptop's connectivity to my wireless network ...

Tim
Hal,
I have run up against the loss of precision of my Intel floating point processor on zooms in Fractint in several different images in the last week or two. Jim Muth has also mentioned reaching the limits of precision as well.
This running up against the precision limitation of the current Intel floating point hardware seems to be happening more often -- most likely because the increased speed of computers allows users to more quickly zoom to the depths that get in trouble with precision limitations.
In an effort to ease this problem somewhat I have the following proposal:
Could the same type of logic that detects when to switch from integer to floating point math be used to switch from using doubles to long doubles as an alternative to (or possibly in addition to) switching to arbitrary precision math? <---<<
Here is part of the problem, from the MSDN Visual Studio 2005 documentation site:

"Previous 16-bit versions of Microsoft C/C++ and Microsoft Visual C++ supported the long double, 80-bit precision data type. In Win32 programming, however, the long double data type maps to the double, 64-bit precision data type. The Microsoft run-time library provides long double versions of the math functions only for backward compatibility. The long double function prototypes are identical to the prototypes for their double counterparts, except that the long double data type replaces the double data type. The long double versions of these functions should not be used in new code."
In particular, I'm guessing that using long doubles would involve doing at least the following:
- duplicating existing floating point code (implemented with doubles) and
- replacing all the double variables and arrays with long doubles and
- adding the logic to switch between the routines using the two data types.
- adding reading and writing a 3rd parameter for MATHTOLERANCE similar to the existing ones supporting MATHTOLERANCE in parameter files and GIF89a images.
- code to support the new data type throughout Fractint.
Am I correct here? <---<<
It could be done. But the small gain in precision would come at a great cost in code size. There are currently four different math schemes in Fractint, and not all of them are used by all fractal types: integer, fixed point, double, and arbitrary precision. There is actually one more, used exclusively by the formula parser: the 80-bit long double used by the FPU.

So most of the calculations done by the formula parser are done at 80 bits (up until the very end, when the pixel is plotted). This is because the formula parser is coded in assembly language using FPU op codes, and the 80-bit precision gets lost when we have to return to the C code to display the result.

If you are running out of precision with the formula parser, the only thing that would help would be coding the parser with arbitrary precision math functions. Not an easy task. Also, since most of the parser calculations are done in the FPU, the switch to arbitrary precision math will be like running into a brick wall from 60 mph.

The eventual goal is to move away from the 16-bit version of MSC. This will stop the DOS version of Fractint in its tracks, however. Although Winfract has been updated to the current code base, there is still too much broken for it to be ready for prime time. And, of course, the move to Win32 eliminates the long double type, as stated above.
I would be willing to participate in the effort involved to implement something like this (assuming it is practical) -- for the fractal type formula (even as involved as I suspect it may be), since many of Fractint's built-in fractals can be implemented as formulas.
Any help would be greatly appreciated. Starting smaller would be better. I have incorporated several features that touched many areas of the code and it takes forever to work out all the bugs.
To give you an idea of my experience in this area, I have used long integers in calculating large prime numbers using the MS Visual C 6.0 Visual Studio compiler. My current knowledge about Fractint's code is limited to what is described in Fractsrc.doc and Fractint.doc and a quick look at the formula type code in parser.c.
Note that implementing long doubles for the fractal type 'formula' could remove some of the need to implement long doubles for each of the separate fractal types in Fractint since many of Fractint's built-in types can be calculated as formula files.
See above.
Oops! I just read this in the 20.0 fractint.doc:
Fractint Version 20.0 Page 154
--------------------------------
"Big as this zoom magnification is, your PC can do better using something called arbitrary precision math. Instead of using ***64 bit double*** precision to represent numbers, your computer software allocates as much memory as needed to create a data type supporting as many decimals of precision as you want."
Does this mean that Fractint already uses long doubles in its implementation of fractal type 'formula?' <---<<
The assembly language version of the formula parser does.
However, I also note that FRACTSRC.DOC has: "parser.c uses the MathTypes:
D_MATH: uses ***double precision*** routines, such as dStkMul,
and FPUsincos
This is used if we have a FPU.
M_MATH . . .
L_MATH . . ."
This seems to conflict with the "64 bit double precision"
in FRACTINT.DOC. I'm inclined to believe FRACTSRC.DOC...
No. The C version of the formula parser uses the type double (64-bit).
It looks like someone tried to make use of long doubles (but I
don't know for what) but had to back off (I think I remember
seeing somewhere that it was due to some compiler's libraries
not correctly implementing some operations using long doubles):
Yes, this is true. I don't recall why. Tim Wegner has worked on this a bit, maybe he can recall what the problems were.

Jonathan
participants (3)
- Hal Lane
- Jonathan Osuch
- Tim Wegner