Closed by lrnv 3 years ago
Hey @lrnv ! Thanks for your interest in my project. It's been a while since I've worked on MultiFloats.jl, so my apologies for the delay in getting back to you.

You're absolutely right that a conversion to/from BigFloat would be the easiest stop-gap solution. I'll go ahead and introduce this change for exp, log, and the trigonometric functions.
For a proper Float64xN implementation of exp, what we will want to do is argument reduction followed by a Taylor series approximation. We can then obtain log by using Newton's method to calculate the inverse of exp.
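The plan above can be sketched concretely. Here is a minimal Python sketch, with ordinary doubles standing in for Float64xN arithmetic (a real MultiFloats implementation would do every operation in extended precision; all names here are illustrative):

```python
from math import ldexp, log as _log

def exp_taylor(x, terms=30):
    """exp(x) via argument reduction plus a Taylor series.

    Write x = k*ln(2) + r with |r| <= ln(2)/2, so exp(x) = 2^k * exp(r);
    the small reduced argument r makes the series converge quickly.
    """
    LN2 = 0.6931471805599453
    k = round(x / LN2)
    r = x - k * LN2
    term, total = 1.0, 1.0
    for n in range(1, terms):       # exp(r) = sum_n r^n / n!
        term *= r / n
        total += term
    return ldexp(total, k)          # scale by 2^k

def log_newton(x, iterations=5):
    """log(x) via Newton's method applied to f(y) = exp(y) - x.

    The update is y <- y + x*exp(-y) - 1; each step roughly doubles
    the number of correct digits, so only a few iterations past a
    double-precision initial guess are needed.
    """
    y = _log(x)                     # cheap double-precision guess
    for _ in range(iterations):
        y = y + x * exp_taylor(-y) - 1.0
    return y
```

With real Float64xN arithmetic, the number of Taylor terms and Newton steps would grow with N.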
Since interest in MultiFloats.jl is picking back up, I'll commit some time to working on it again. You can expect a stop-gap BigFloat conversion in a day or so, and a proper MultiFloat implementation sometime next week!
The stop-gaps are in my fork; do you want me to open a PR?

Also, I have stop-gaps (not committed yet, but I can also PR them) for a^b and the rand() function.
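The thread doesn't say how the fork's a^b stop-gap works; one natural sketch, once exp and log are available, is the identity a^b = exp(b * log(a)) for a > 0 (Python, with `pow_generic` as a hypothetical name):

```python
import math

def pow_generic(a, b):
    """a^b for positive a via the identity a^b = exp(b * log(a)).

    Hypothetical sketch: once extended-precision exp/log exist,
    this gives a correspondingly precise power function for free.
    """
    if a <= 0:
        raise ValueError("pow_generic requires a > 0")
    return math.exp(b * math.log(a))
```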
@lrnv Yes, I would be happy to accept such a PR! Do go ahead.
I'm not sure about argument reduction, but the Taylor expansion of exp() is easy to compute: the inverse factorials can even be stored in a fixed array, initialized at package loading time. Just store enough of them that the approximation is better than Float64x8 precision.
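A Python sketch of that fixed-array idea (table size and evaluation are illustrative, not the package's actual code; plain doubles stand in for Float64x8):

```python
# Precompute the inverse factorials once, at "package load time".
# With a reduced argument |r| <= ln(2)/2, about 17 terms already
# reach double precision; a Float64x8 target (~100+ decimal digits)
# would need on the order of 70 terms.
N_TERMS = 30
INV_FACT = []
_f = 1.0
for _n in range(N_TERMS):
    INV_FACT.append(1.0 / _f)
    _f *= _n + 1

def exp_series(r):
    """Evaluate exp(r) from the precomputed table, in Horner form."""
    total = 0.0
    for c in reversed(INV_FACT):
        total = total * r + c
    return total
```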
For log(), the Taylor series of the logarithm might be a faster alternative to the Newton inversion.
Hey @lrnv, I appreciate the suggestion, but that is not going to work. The Taylor series of the logarithm converges exponentially slowly: in order to get a result that is accurate to n bits, you need something like e^n terms. In contrast, Newton iteration doubles the number of correct digits in each step, so you only need log(n) iterations.
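The contrast can be checked numerically. A quick Python sketch (plain doubles; `newton_log` and `taylor_log` are illustrative names) that counts Newton steps versus Taylor terms for the same target accuracy:

```python
import math

def newton_log(x, tol=1e-14):
    """Newton's method for log(x): solve exp(y) = x.

    Update y <- y + x*exp(-y) - 1; the error roughly squares at each
    step, so the count of correct digits doubles per iteration.
    """
    y = 1.0  # deliberately crude starting guess
    steps = 0
    while abs(math.exp(y) - x) > tol * x:
        y = y + x * math.exp(-y) - 1.0
        steps += 1
    return y, steps

def taylor_log(x, tol=1e-14):
    """Direct Taylor series log(1+u) = u - u^2/2 + u^3/3 - ...

    Converges geometrically with ratio |u| = |x - 1|, so for x near 2
    it needs hundreds of terms just to reach double precision.
    """
    u = x - 1.0
    total, power, n = 0.0, 1.0, 0
    while True:
        n += 1
        power *= u
        term = power / n if n % 2 == 1 else -power / n
        total += term
        if abs(term) < tol:
            return total, n
```

For x = 1.9, Newton needs only a handful of steps even from a crude guess, while the direct series needs a few hundred terms at double precision; the gap widens further at Float64xN precision.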
Now that I've read a little about it, you are right. I think it's the same approach as used here: https://github.com/JuliaMath/DoubleFloats.jl/blob/master/src/math/elementary/explog.jl#L236

But maybe in our case, depending on N, more Newton iterations should be done.
I've just released MultiFloats.jl version 1.0, which includes exp and log.
Hey,
In my application, I only require exponentials and logarithms on top of what is already implemented.

How hard would it be to add those two functions? Do you know how it should be done? I would be happy to contribute.

Edit: Could a temporary fix be introduced by converting to BigFloat and back? I know this is highly sub-optimal, but it would at least allow the code to run. Since exp/log are not the main part of my workflow, it might still provide a huge performance gain over BigFloat for me.
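The round-trip fix can be sketched generically. In this Python sketch, the standard decimal module stands in for BigFloat (in Julia the shape would be: convert to BigFloat, call the BigFloat routine, convert back; names here are illustrative):

```python
from decimal import Decimal, getcontext

def exp_stopgap(x, digits=50):
    """Stop-gap exp: round-trip through extra-precision arithmetic.

    Correct but slow -- every call pays for the two conversions and
    for the software arbitrary-precision evaluation in between.
    """
    getcontext().prec = digits
    return float(Decimal(x).exp())

def log_stopgap(x, digits=50):
    """Stop-gap log via the same round-trip."""
    getcontext().prec = digits
    return float(Decimal(x).ln())
```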