Hi Rupert,
Thanks for responding. Did you try the
program? It’s free.
Of course there’s a need for
arbitrary precision, but not in the iteration loop. However, the speed seems
almost independent of depth of zoom. I am currently running a zoom over 10^10000.
You said:
So how would I do it, if I had the required product of programming
skill and free time? A recursive algorithm, like SOI. You'd need to pass the
coordinates for a rectangle and incomplete arbitrary precision iteration data
for a (hopefully) central point.
Complete iteration of central point until bailout / maxit / or possibly
periodicity check positive. (I suspect periodicity checking would be more of a
hindrance.)
Calculate other key points in turn, recursing when the approximation
breaks down.
Are you able to express this as an
algorithm that I could code to? I just can’t visualise what the code
would look like. I appreciate any help you can give.
Here are some comments that may be useful:
Perturbation
for rendering the Mandelbrot set has been around for a while. I would have
written a journal about it before, because it's very awesome, but right from the start
there was a fundamental problem: reliability. A recent discovery by Pauldelbrot
on fractalforums.com indicates that perturbation can now be used to render the
Mandelbrot set reliably. Is the project approaching completion?
"Correctness" now appears to be achieved.
Roughly a year ago, Kevin Martin published a relatively short document about
the Mandelbrot set, containing some equations that staggered everyone. His idea
was to apply the principle of perturbation
to rendering the Mandelbrot set and to combine it with something he called
series approximation. Perturbation allows the iteration count of a pixel to be
derived from a different, fully calculated pixel "nearby" (to be
called a reference pixel). In practice this means that it's possible to
calculate just one single pixel in an image, and derive the rest using perturbation.
At great depths with millions of iterations, this saves an enormous amount of
render time, which is the main result.
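To make that "deriving" step concrete, here is a minimal Python sketch of the perturbation iteration. It assumes a shallow zoom, so plain doubles suffice even for the reference orbit (at real depths the reference would be iterated in arbitrary precision); the function names and the test point are mine, not Martin's.

```python
def reference_orbit(c, max_iter):
    """Fully iterate the reference pixel, storing every Z_n."""
    orbit, z = [0j], 0j
    for _ in range(max_iter):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2:
            break
    return orbit

def perturbed_escape(orbit, dc, max_iter):
    """Derive a nearby pixel from the stored reference orbit.

    Expanding z_n = Z_n + d_n in z_{n+1} = z_n^2 + c gives
    d_{n+1} = 2*Z_n*d_n + d_n^2 + dc, so only the small difference
    d_n is iterated, using cheap float math.
    """
    d = 0j
    for n in range(min(max_iter, len(orbit) - 1)):
        d = 2 * orbit[n] * d + d * d + dc
        if abs(orbit[n + 1] + d) > 2:
            return n + 1          # escaped at this iteration
    return None                   # not resolved within the reference orbit

def direct_escape(c, max_iter):
    """Ordinary escape-time iteration, for cross-checking."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n + 1
    return None

# Shallow demo: derive a neighbour of the reference and cross-check.
c_ref = 0.4 + 0.4j                # escapes quickly; chosen arbitrarily
orbit = reference_orbit(c_ref, 1000)
assert perturbed_escape(orbit, 1e-9j, 1000) == direct_escape(c_ref + 1e-9j, 1000)
```

At depth, the point is that `perturbed_escape` touches nothing but doubles, however deep the zoom is.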
Series approximation allows large numbers of iterations to be skipped
entirely for whole pixels, good for another enormous speed-up, but it doesn't stop there. In
addition, no arbitrary precision calculations are required to do the
"deriving" work. Floating point calculations, which are much faster
to perform, are sufficient. Martin concludes his document with the following
statement:
Using [the equations] means that the time taken rendering Mandelbrot
images is largely independent of depth and iteration count, and mainly
depends on the complexity of the image being created.
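As for the series-approximation part: the per-pixel difference can be expanded as a polynomial in the pixel offset dc. Substituting d_n ≈ A_n·dc + B_n·dc² + C_n·dc³ into d_{n+1} = 2·Z_n·d_n + d_n² + dc and matching powers of dc gives simple recurrences for the coefficients, which depend only on the reference orbit. A hedged sketch in Python (my own naming, with shallow-zoom doubles standing in for the real arithmetic):

```python
def reference_orbit(c, max_iter):
    """Fully iterated reference orbit Z_0, Z_1, ... (doubles for brevity)."""
    orbit, z = [0j], 0j
    for _ in range(max_iter):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2:
            break
    return orbit

def series_coefficients(orbit):
    """Coefficients of d_n ~ A_n*dc + B_n*dc**2 + C_n*dc**3.

    Substituting the series into d_{n+1} = 2*Z_n*d_n + d_n**2 + dc and
    collecting powers of dc yields:
        A_{n+1} = 2*Z_n*A_n + 1
        B_{n+1} = 2*Z_n*B_n + A_n**2
        C_{n+1} = 2*Z_n*C_n + 2*A_n*B_n
    """
    A, B, C = 0j, 0j, 0j
    coeffs = [(A, B, C)]
    for z in orbit[:-1]:
        # right-hand sides use the old A, B, C (simultaneous assignment)
        A, B, C = 2 * z * A + 1, 2 * z * B + A * A, 2 * z * C + 2 * A * B
        coeffs.append((A, B, C))
    return coeffs
```

Evaluating the three-term polynomial at iteration n reproduces d_n directly, which is exactly how whole runs of iterations get skipped for every pixel at once.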
The implications of this
are enormous and such a theory is of course crying out to be
implemented. Along with the mathematics, Martin also published a simple
software implementation of the theory dubbed SuperFractalThing, so that
everyone could see that it works. Since then, more software developers have
started working on their own implementations.
The simple equation of the Mandelbrot set has long been famous for being
so computationally intensive that it can bring any supercomputer to its
knees, as long as you zoom in deep enough. Although that is still the case even
with perturbation, the barrier has been shifted significantly.
Fractal eXtreme has been
the fastest software to calculate the Mandelbrot set for a long time, using
traditional optimizations. If the deviation above were to be rendered in
Fractal eXtreme, the render would take roughly 6 months. The actual image was
rendered in 6 hours using an implementation of perturbation by Botond
Kosa. What you're looking at right there is something that, without
perturbation, would have been totally out of reach for many years, no matter
how optimized the software is. As Bruce Dawson, the man behind Fractal eXtreme,
commented on fractalforums.com: good algorithms beat optimized code.
Although there is no
doubt that perturbation is a "good algorithm", it came with severe
problems right from the start that Kevin Martin couldn't solve himself. If you
have been paying attention, you may have noticed the requirement of a reference
pixel to be "nearby". More specifically, usage of floating point
numbers to do the calculations requires some numbers in the perturbation
equation to be "small". Mathematically, this is completely useless,
because there's no exact definition of what "small" is. Indeed, the
results of the calculations were shown to be unreliable in many cases. It
turned out that the results were correct "most of the time", but
sometimes not. Incorrect parts of renders have since been called glitches.
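Pauldelbrot's fix, as I understand the fractalforums.com thread, is a runtime test for exactly this breakdown: when the derived orbit value Z_n + d_n becomes much smaller in magnitude than the reference value Z_n itself, the subtraction has destroyed the relative precision and the pixel must be redone from a closer reference. A sketch of how such a check might slot into the loop — the tolerance and the names are my own guesses, not Pauldelbrot's exact formulation:

```python
def reference_orbit(c, max_iter):
    """Fully iterated reference orbit Z_0, Z_1, ... (doubles for brevity)."""
    orbit, z = [0j], 0j
    for _ in range(max_iter):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2:
            break
    return orbit

def perturbed_with_glitch_check(orbit, dc, max_iter, tol=1e-3):
    """Perturbation iteration that flags Pauldelbrot-style glitches:
    if |Z_n + d_n| falls far below |Z_n|, relative precision is lost
    and the pixel should be recomputed from a closer reference."""
    d = 0j
    for n in range(min(max_iter, len(orbit) - 1)):
        d = 2 * orbit[n] * d + d * d + dc
        z = orbit[n + 1] + d
        if abs(z) > 2:
            return ("escaped", n + 1)
        if abs(z) < tol * abs(orbit[n + 1]):
            return ("glitch", n + 1)   # redo from another reference
    return ("unresolved", None)
```

A flagged pixel is not wrong per se; it is simply one whose derivation from this particular reference cannot be trusted any further.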
I tried to look at the source code and got
totally lost. I hope we can get something going here.
Thanks,
Paul.
----------------------------------------------------------
Paul de Leeuw Computers NSW Central Coast,
Australia
Email:
pdeleeuw@deleeuw.com.au
www:
<http://www.deleeuw.com.au>
ABN
72 360 822 562
----------------------------------------------------------
From:
Fractdev [mailto:fractdev-bounces@mailman.xmission.com] On Behalf Of Rupert Millard
Sent: Sunday, 18 January 2015 7:04 PM
To:
Subject: Re: [Fractdev] Perturbation Theory Implementation
You've said that it only uses double floats in the deep zooming
code, but where does it get X0...Xn from? There must be arbitrary precision
code somewhere.
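My guess at a concrete answer to the X0...Xn question: the one place arbitrary precision is needed is the single reference orbit, which is iterated at full precision and then each X_n is rounded to a plain double for the fast per-pixel work. A sketch using Python's stdlib decimal purely for illustration (SuperFractalThing itself is Java with its own high-precision arithmetic, as far as I know — treat the details here as assumptions):

```python
from decimal import Decimal, getcontext

def reference_orbit_highprec(cx, cy, max_iter, digits=50):
    """Iterate X_{n+1} = X_n^2 + C in arbitrary precision, but store
    each X_n rounded to an ordinary complex double: the perturbation
    loop only ever needs the double versions."""
    getcontext().prec = digits
    cx, cy = Decimal(cx), Decimal(cy)   # pass coordinates as strings
    x = y = Decimal(0)
    orbit = [0j]
    for _ in range(max_iter):
        # complex square-and-add done on real/imaginary parts
        x, y = x * x - y * y + cx, 2 * x * y + cy
        orbit.append(complex(float(x), float(y)))
        if x * x + y * y > 4:
            break
    return orbit
```

At a shallow location this agrees with a pure-double iteration; at depth the double iteration would drift off while the rounded high-precision orbit stays true.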
From that two page PDF, the algorithm looks simple, doesn't it?
However we know Δn gets "large" for sufficiently
large n. That's chaos theory for you, and it wouldn't look like a fractal if Δn remained of the order of
10^-100 all the time. So it must do something when the approximation breaks
down - I expect it starts calculating at arbitrary precision. (Although
as we're working with floating point types, it would be quite elegant to use
that double-double / quad-double datatype here.)
So how would I do it, if I had the required product of programming
skill and free time? A recursive algorithm, like SOI. You'd need to pass the
coordinates for a rectangle and incomplete arbitrary precision iteration data
for a (hopefully) central point.
Complete iteration of central point until bailout / maxit / or possibly
periodicity check positive. (I suspect periodicity checking would be more of a
hindrance.)
Calculate other key points in turn, recursing when the approximation
breaks down.
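Trying to turn the three steps above into something runnable, here is one possible reading in Python. Everything here is my guess at the structure, not SuperFractalThing's actual code: doubles stand in for arbitrary precision, the breakdown test is a crude "orbit value much smaller than reference" check, and an unresolved or glitched pixel triggers a recursive re-render from a new reference taken at that pixel.

```python
def iterate_ref(c, max_iter):
    """Step 1: complete iteration of the central point until bailout or maxit."""
    orbit, z = [0j], 0j
    for _ in range(max_iter):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2:
            break
    return orbit

def perturb(orbit, dc, max_iter, tol=1e-3):
    """Step 2: derive one pixel from the reference. Returns the escape
    iteration, 0 for interior, or None when the derivation broke down."""
    d = 0j
    for n in range(min(max_iter, len(orbit) - 1)):
        d = 2 * orbit[n] * d + d * d + dc
        z = orbit[n + 1] + d
        if abs(z) > 2:
            return n + 1
        if abs(z) < tol * abs(orbit[n + 1]):
            return None                    # approximation broke down
    # reference ran to maxit and we never escaped -> interior;
    # reference bailed out early -> this pixel is unresolved
    return 0 if len(orbit) - 1 >= max_iter else None

def render(points, c_ref, max_iter, depth=0):
    """Step 3: derive every point from c_ref, then recurse on the
    failures with a fresh reference taken at the first failing point."""
    orbit = iterate_ref(c_ref, max_iter)
    out = {c: perturb(orbit, c - c_ref, max_iter) for c in points}
    bad = [c for c, r in out.items() if r is None]
    if bad and depth < 16:
        out.update(render(bad, bad[0], max_iter, depth + 1))
    return out
```

The recursion terminates because the new reference always resolves itself exactly, so the failure set shrinks at every level; a real implementation would recurse on rectangles rather than loose point lists, and choose the new reference more cleverly than "first failure".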
Hope this helps. I can take a look at the source code too if that would
help.