ckormanyos opened 2 years ago
Handled (preliminarily) by #100 with many thanks to @S-Streulicht.
If any new/other algorithms are found/needed, these will be handled in separate issues.
The mentioned PR #100 did not cover the whole perturbation theory, just the first part of the delta calculation: the chapter Perturbation. The actual glitches should not appear directly, since we still have a high number of digits of precision (20 digits), although I don't know what happens if one goes deeper. Maybe the precision needs to be increased.

The chapter Rescaling is also not directly an issue, because of the high precision and the easy way to increase it. But the chapter is interesting to implement because it actually answers the question raised in PR #100: Can we use float instead of cost-intensive Boost multiprecision? Well, the answer is yes. An issue with float is the limited exponent, since we have values smaller than 10^-300. To counter that, a rescaling is done by adding a precalculated scaling factor: instead of Z_i -> Z_i + e_i and C -> C + d, which is implemented in #100, the rescaling operates with Z_i -> Z_i + e_i S_i, more precisely Z_i -> Z_i + e_i and then e_i -> S e_i, which brings e_i close to 1, and S can be precalculated. I have to admit I don't know for sure why this works, since the scaling factor S should have the same issue with the exponent, but in the positive direction. Potentially the solution stops at the first approximation of e_i, in which S doesn't count.

The last chapter, Full Iterations, relevant for the vanilla Mandelbrot, deals with an original Z_i: basically it states that we need a high-precision calculation here, which is the current state for #100.
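The rescaled delta iteration described above can be sketched roughly like this. This is a minimal illustration, not the #100 implementation: all names are made up, and a single precomputed scaling factor `S` is assumed rather than a per-iteration `S_i`.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a rescaled perturbation iteration.
// ref_orbit holds the high-precision reference orbit Z_i, rounded to double;
// delta is the pixel offset d = C - C_ref; S is a precomputed scaling factor.
std::size_t iterate_perturbed(const std::vector<std::complex<double>>& ref_orbit,
                              std::complex<double> delta,
                              double S,
                              std::size_t max_iter)
{
  // Work with the rescaled delta e_i = eps_i / S, which stays close to 1.
  std::complex<double> e { 0.0, 0.0 };
  const std::complex<double> delta_over_S = delta / S;

  for (std::size_t i = 0U; (i < max_iter) && (i + 1U < ref_orbit.size()); ++i)
  {
    // From eps_{i+1} = 2 Z_i eps_i + eps_i^2 + d, dividing through by S:
    //      e_{i+1}   = 2 Z_i e_i   + S e_i^2 + d / S
    e = 2.0 * ref_orbit[i] * e + S * e * e + delta_over_S;

    // Bailout test on the reconstructed full orbit z = Z + S e.
    if (std::norm(ref_orbit[i + 1U] + S * e) > 4.0)
    {
      return i + 1U;
    }
  }

  return max_iter;
}
```

With `S = 1` this degenerates to the plain `Z_i -> Z_i + e_i` scheme of #100; the point of a well-chosen `S` is that `e` can then live in a plain float's exponent range.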
Conclusion: Adding the rescaling as a second step might speed things up even further.
Hi @S-Streulicht thank you for these insightful clarifications above. I get all your points.
#100 did not cover the whole perturbation theory.
I know. I closed this issue with the intent of opening new issues as new methods arise.
Conclusion: Adding the rescaling as a second step might speed things up even further.
OK.
Can we use float instead of cost-intensive Boost multiprecision? Well, the answer is yes.
I will also be trying std::float128_t, which is not available with MSVC but is available on modern GCC ports. This data type handles about $34$ decimal digits of precision and values around ${\sim}10^{{\pm}4,000}$. Boost.Multiprecision has a nice wrapper for this type (where it is available).
My personal best dive has ${\sim}10^{311}$ magnification. So I'll check if the methods of #100 can get a dive of ${\sim}10^{500}$ magnification.
See also #104
Just recently I tried 20_zoom-very_deep with Digits 1065, calculation digits 20, size 256 by 256, iterations 1000000, 4.4e-1011, and the given center. Of course, on 11 cores it took 4008 seconds to compute. A 64*64 version took 289 s. There are a few observations going that deep.
Hi @S-Streulicht, thank you for your report. There is really a lot of information in it. I think we might consider opening a few added tickets if any of these points need to be cleared up individually.
I've decided to architecturally support a front-end/back-end concept allowing (via static polymorphism) to support various iterative techniques. I'll start with the original full precision (the way you found this project at the start) and your recent PR. This will initially provide two iterative backend-schemes to choose from. This is being tracked in #103 and should come together in a few weeks.
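One possible shape for such a static-polymorphism front-end/back-end split is a CRTP base with interchangeable iterative back-ends. This is a hypothetical sketch with invented names; the actual design tracked in #103 may look quite different.

```cpp
#include <cstddef>

// Front-end: dispatches to the chosen back-end at compile time (CRTP),
// so different iterative schemes incur no virtual-call overhead.
template <typename Backend>
struct mandelbrot_generator
{
  std::size_t iterate(double cx, double cy, std::size_t max_iter)
  {
    return static_cast<Backend&>(*this).do_iterate(cx, cy, max_iter);
  }
};

// One back-end: a plain escape-time loop standing in for the original
// full-precision scheme (double used here purely for illustration).
struct full_precision_backend : mandelbrot_generator<full_precision_backend>
{
  std::size_t do_iterate(double cx, double cy, std::size_t max_iter)
  {
    double zx = 0.0, zy = 0.0;

    for (std::size_t i = 0U; i < max_iter; ++i)
    {
      const double x2 = zx * zx, y2 = zy * zy;
      if (x2 + y2 > 4.0) return i;
      zy = 2.0 * zx * zy + cy;   // Im(z^2 + c)
      zx = x2 - y2 + cx;         // Re(z^2 + c)
    }

    return max_iter;
  }
};
```

A second back-end (e.g. a perturbation-based one) would simply be another struct deriving from `mandelbrot_generator` with its own `do_iterate`.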
Other than that I have some simplistic goals.
@S-Streulicht I have reopened this ticket based on the amount of active information being discussed.
Investigate iteration acceleration techniques such as orbit perturbation, series expansion, divergence.
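As a rough illustration of the series-expansion idea: approximating $\varepsilon_n \approx A_n\delta + B_n\delta^2 + C_n\delta^3$, substituting into the perturbation recurrence $\varepsilon_{n+1} = 2Z_n\varepsilon_n + \varepsilon_n^2 + \delta$, and collecting powers of $\delta$ gives coefficient recurrences that depend only on the reference orbit, so they can be shared by all pixels. The sketch below uses hypothetical names and is not part of any existing code here.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

struct SeriesCoeffs { std::complex<double> A, B, C; };

// Coefficients for eps_n ~= A_n*d + B_n*d^2 + C_n*d^3, computed once per
// reference orbit Z_n and reusable for every pixel offset d.
std::vector<SeriesCoeffs> series_coefficients(
    const std::vector<std::complex<double>>& ref_orbit)
{
  std::vector<SeriesCoeffs> out;
  out.reserve(ref_orbit.size());

  std::complex<double> A { 0.0, 0.0 }, B { 0.0, 0.0 }, C { 0.0, 0.0 };
  out.push_back({ A, B, C });                // eps_0 = 0 for every pixel

  for (std::size_t n = 0U; n + 1U < ref_orbit.size(); ++n)
  {
    const std::complex<double> Z = ref_orbit[n];

    // Collecting powers of d in eps_{n+1} = 2 Z_n eps_n + eps_n^2 + d:
    const std::complex<double> A1 = 2.0 * Z * A + 1.0;        // d^1 terms
    const std::complex<double> B1 = 2.0 * Z * B + A * A;      // d^2 terms
    const std::complex<double> C1 = 2.0 * Z * C + 2.0 * A * B;// d^3 terms

    A = A1; B = B1; C = C1;
    out.push_back({ A, B, C });
  }

  return out;
}
```

The payoff is that many early iterations can be skipped per pixel by evaluating the polynomial directly, falling back to full perturbation once the truncated $\delta^4$ term stops being negligible.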
There seems to be some information here, with further details on at least one iterative method here.
There is a wealth of information (code and text) here, but I have not figured out how to build/visualize the book or code yet. The author maintains a Git repository here.