Marking as 1.7 to make a decision--I have no opinion on what that decision should be.
I'll add computing Mandelbrot fractals with big.Float as motivation. Ideally a whole new big.Complex type, :-) or at least big.Float.Sqrt, and possibly big.Float.Exp too. See for instance exercise 3.8, page 63, of The Go Programming Language book:
"Rendering fractals at high zoom levels demands great arithmetic precision. Implement the same fractal using four different representations of numbers: complex64, complex128, big.Float, and big.Rat. How do they compare in performance and memory usage? At what zoom levels do rendering artifacts become visible?"
(Disregard big.Rat, it's way too slow for this kind of use anyway.)
Also, see this go-nuts discussion: math/big pow() and floor() help.
@teknico Note that using complex numbers for Mandelbrot may make the implementation simpler, but it will also run a lot slower because there are a lot of shortcuts that are possible when dealing with the real and imaginary parts explicitly.
It does make sense to have Sqrt, and/or perhaps Exp instead.
We have been playing with implementations, but high-quality implementations that scale to arbitrary precision (ivy has an upper internal limit) are hard. We will get to it when we get to it.
FWIW I have basic Sqrt, Log, Exp and Pow implementations for big.Float in github.com/ALTree/bigfloat.
I haven't tested them extensively and obviously they're not as good as they would be if implemented by Robert or other go devs, but -if needed- they could be used as a starting point.
@ALTree Thanks for that information. I've played with respective implementations a while back but they were not satisfactory. I'll have a look at your code when I pick this up.
Any progress on this?
@infogulch No, sorry.
Not for 1.9 either.
These are tricky because you need some kind of arbitrary-precision constant generator for at least 'e'. Maybe they could be somewhere else?
Sqrt on the other hand is trivial and maybe we should support it. We did add Int.Sqrt not too long ago.
If we want a Float.Sqrt, I can look into it.
@ALTree add a new issue for Float.Sqrt?
Done, it's #20460
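For the record, the "trivial" part: a minimal Newton-iteration sketch for a big.Float square root might look like the following. This is only an illustration, not the proposed API; it skips special cases such as x <= 0 or exponents outside the float64 range, and it does not do careful correctly-rounded final rounding.

package main

import (
	"fmt"
	"math"
	"math/big"
)

// sqrtNewton approximates sqrt(x) by Newton's iteration g = (g + x/g) / 2,
// seeded from the float64 square root. The seed is good to ~52 bits and each
// iteration roughly doubles the number of correct bits, so a handful of steps
// suffice even at several thousand bits of precision.
func sqrtNewton(x *big.Float, prec uint) *big.Float {
	work := prec + 64 // guard bits; final rounding is not handled carefully here
	f, _ := x.Float64()
	g := new(big.Float).SetPrec(work).SetFloat64(math.Sqrt(f))
	t := new(big.Float).SetPrec(work)
	half := big.NewFloat(0.5)
	for bits := uint(52); bits < work; bits *= 2 {
		t.Quo(x, g)    // t = x/g
		t.Add(t, g)    // t = g + x/g
		g.Mul(t, half) // g = (g + x/g) / 2
	}
	return g.SetPrec(prec)
}

func main() {
	x := big.NewFloat(2).SetPrec(1000)
	fmt.Printf("sqrt(2) = %.60f\n", sqrtNewton(x, 1000))
}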
These are tricky because you need some kind of arbitrary-precision constant generator for at least 'e'.
I made a naive implementation of this here. It's really short and it can technically generate an arbitrary precision, but it's really slow.
@infogulch the problem with that approach is that it scales poorly: 44s for 10000 decimal digits on my machine is far too slow to be usable. The annoying part of this is implementing methods that do well (at least) in the 1000 - 100000 decimal digits range.
For reference my proposed implementation (that I linked above) computes 10000 decimal digits in about 500 milliseconds.
package main

import (
	"fmt"
	"math/big"
	"time"

	"github.com/ALTree/bigfloat"
)

func main() {
	const prec = 34000 // 34000 bits give ~10k decimal digits
	start := time.Now()
	e := bigfloat.Exp(big.NewFloat(1.0).SetPrec(prec))
	fmt.Println(time.Since(start))
	fmt.Printf("e = %.10000f\n", e)
}
$ ./e
495.675667ms
..
The algorithm in ivy takes about 1.0s for that. I'm impressed it's so fast - it's just using a Taylor series and calling math/big from the outside.
That's quite fast. The best approach is likely to have the implementation switch after a given threshold.
Series methods are quite good for small precs (up to ~5000 decimal digits?), but for example I tried with 340000 bits (necessary for 100k decimal digits), and the above snippet gives the answer in about 7 seconds, which is much faster than what a Taylor approach could do, I suspect. I stopped ivy after 2 minutes.
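For reference, the Taylor approach amounts to something like the sketch below. This is not ivy's actual code: the names are made up, the stopping test just checks whether the next term can still affect the sum at the working precision, and large |z| would need argument reduction first.

package main

import (
	"fmt"
	"math/big"
)

// taylorExp sums z^n/n! until the next term is too small to change the
// result at the working precision. Without argument reduction this only
// behaves well for smallish |z|.
func taylorExp(z *big.Float, prec uint) *big.Float {
	work := prec + 64
	sum := big.NewFloat(1).SetPrec(work)  // running sum, starts at the n=0 term
	term := big.NewFloat(1).SetPrec(work) // z^n / n!
	n := big.NewFloat(0).SetPrec(work)
	one := big.NewFloat(1)
	for {
		n.Add(n, one)
		term.Mul(term, z)
		term.Quo(term, n) // term = z^n / n!
		sum.Add(sum, term)
		if term.Sign() == 0 || term.MantExp(nil) < sum.MantExp(nil)-int(work) {
			break // the term no longer affects the sum
		}
	}
	return sum.SetPrec(prec)
}

func main() {
	fmt.Printf("e = %.50f\n", taylorExp(big.NewFloat(1).SetPrec(400), 400))
}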
It's probably too late for 1.10; also needs decision. Moving to 1.11.
@ALTree FWIW, I've had good luck with continued fraction representations for large numbers of decimal digits.
Sqrt was really easy; for the others it's unclear what we can say about how good the answer can or should be. Does anyone know more about proper algorithms for computing these? I am thinking of something at the level of detail of https://members.loria.fr/PZimmermann/mca/mca-cup-0.5.9.pdf (maybe they're in there but I don't see them in the table of contents).
We can't really answer "should we add this?" without knowing how.
From my experience with radix 10 big algorithms:
You can compute log(2) with a desired precision from a constant with a smaller precision using N[log(2), L] = c + (exp(-c) * 2 - 1), where c = N[log(2), S] with 1 < S < L. (You can also start a CF from scratch, but that requires a handful of other constants.) I found https://link.springer.com/book/10.1007%2F978-1-4020-6949-9 ("Handbook of Continued Fractions for Special Functions") to be beneficial for computing the continued fractions. This is pretty good for Sqrt, too: https://doi.org/10.1145/214408.214413.
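To make the refinement concrete: if c = log(2) - d, then 2*exp(-c) - 1 = exp(d) - 1 ≈ d, so adding that correction roughly doubles the number of correct bits per step. Below is a sketch of the idea in Go, using bigfloat.Exp from the package mentioned above purely as a stand-in for any arbitrary-precision exp; the names are illustrative only.

package main

import (
	"fmt"
	"math"
	"math/big"

	"github.com/ALTree/bigfloat" // stand-in for an arbitrary-precision exp
)

// refineLog2 starts from the float64 value of log(2) and repeatedly applies
// c += 2*exp(-c) - 1, doubling the working precision each time.
func refineLog2(prec uint) *big.Float {
	c := new(big.Float).SetPrec(64).SetFloat64(math.Ln2) // ~52 correct bits
	for p := uint(64); p < prec; p *= 2 {
		c.SetPrec(2 * p) // room for the doubled accuracy
		negC := new(big.Float).SetPrec(2 * p).Neg(c)
		e := bigfloat.Exp(negC)   // exp(-c)
		e.Mul(e, big.NewFloat(2)) // 2*exp(-c)
		e.Sub(e, big.NewFloat(1)) // 2*exp(-c) - 1 ≈ the error in c
		c.Add(c, e)
	}
	return c.SetPrec(prec)
}

func main() {
	fmt.Printf("log(2) = %.80f\n", refineLog2(300))
}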
WRT log in particular, I found algorithm 2.4.4 from "Handbook..." to require the fewest iterations (75 vs 210 for 2.4.1 and 150 for 2.4.7).
And for exp:

exp(z) = 1 + 2z / (2 - z + (z^2/6) / (1 + K_{m=3..∞} (a_m z^2 / 1))),  z ∈ ℂ,  where a_m = 1 / (16(m-1)^2 - 4)
The one thing I like about the above exp algorithm is you compute most of the A half of each term using integers, up to the 1518500252th iteration. It's even better if you have a fast method of computing the inverse (1/x). The B half is constant after the 2nd term.
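For illustration, a naive bottom-up evaluation of that continued fraction with big.Float could look like the sketch below. It deliberately skips the integer "A half" optimization described above, and the number of terms is picked by hand rather than derived from z and the target precision.

package main

import (
	"fmt"
	"math/big"
)

// expCF evaluates exp(z) = 1 + 2z / (2 - z + (z^2/6) / (1 + K_{m>=3} (a_m z^2 / 1)))
// bottom-up with nTerms partial numerators, where a_m = 1/(16(m-1)^2 - 4).
func expCF(z *big.Float, prec uint, nTerms int) *big.Float {
	work := prec + 64
	z2 := new(big.Float).SetPrec(work).Mul(z, z)
	one := big.NewFloat(1)

	// tail = 1 + K_{m=3..nTerms} (a_m z^2 / 1), evaluated from the innermost term out.
	tail := new(big.Float).SetPrec(work).SetInt64(1)
	for m := nTerms; m >= 3; m-- {
		d := new(big.Float).SetPrec(work).SetInt64(int64(16*(m-1)*(m-1) - 4))
		t := new(big.Float).SetPrec(work).Quo(z2, d) // a_m * z^2 = z^2 / (16(m-1)^2 - 4)
		t.Quo(t, tail)
		tail.Add(one, t) // tail = 1 + a_m z^2 / tail
	}

	frac := new(big.Float).SetPrec(work).Quo(z2, big.NewFloat(6)) // z^2/6
	frac.Quo(frac, tail)
	den := new(big.Float).SetPrec(work).Sub(big.NewFloat(2), z) // 2 - z
	den.Add(den, frac)
	num := new(big.Float).SetPrec(work).Mul(big.NewFloat(2), z) // 2z
	res := new(big.Float).SetPrec(work).Quo(num, den)
	res.Add(res, one)
	return res.SetPrec(prec)
}

func main() {
	fmt.Printf("e = %.50f\n", expCF(big.NewFloat(1).SetPrec(300), 300, 60))
}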
@rsc The Brent, Zimmermann book you linked has a simple method for the natural logarithm in section 4.8.2, which I implemented in Go a few months ago. I can submit the code (just for discussion) if you wish.
The book also suggests a way of implementing Exp from the Log implementation via Newton (section 4.2.5). I already have this in Go as well, which I can submit.
Once you have Exp and Log, you can do Pow using the standard w**z = exp(z log(w)).
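As a concrete (and heavily simplified) illustration of that identity, assuming w > 0 and borrowing Exp and Log from the bigfloat package mentioned above as stand-ins: note that bigfloat already ships its own Pow, and a careful implementation needs extra working precision because errors in log(w) are scaled by z before exponentiation.

package main

import (
	"fmt"
	"math/big"

	"github.com/ALTree/bigfloat" // Exp and Log used as stand-ins
)

// pow computes w**z as exp(z * log(w)) for w > 0, with a few guard bits.
func pow(w, z *big.Float) *big.Float {
	prec := w.Prec()
	t := new(big.Float).SetPrec(prec + 64).Mul(z, bigfloat.Log(w))
	return bigfloat.Exp(t).SetPrec(prec)
}

func main() {
	w := big.NewFloat(2).SetPrec(200)
	z := big.NewFloat(0.5).SetPrec(200)
	fmt.Printf("2**0.5 = %.40f\n", pow(w, z)) // should agree with sqrt(2)
}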
@ALTree is this something you'd wanna collaborate on? I wrote a quick and super ugly exp implementation today and my totally unscientific test had interesting results:
big.Float CF - 7.389056098930649597: 38.914µs
decimal.Big CF - 7.3890560989306502: 259.536µs
bigfloat - 7.3890560989306502274: 353.993µs
stdlib - 7.38905609893065
Aside: big.Float needs an FMA.
@ericlagergren thanks for writing the benchmarks. I have a few observations.
big.Float CF - 7.38905609893064959707918726734058828589: 24.501µs
decimal.Big CF - 7.38905609893065022723042746057500781318: 87.604µs
bigfloat - 7.38905609893065022723042746057500781318: 357.734µs
stdlib - 7.38905609893065
Note how decimal.Big and bigfloat agree, but the big.Float CF result is incorrect. I'm not sure if the big.Float CF implementation is limited to a single word, but in any case at the moment we can't benchmark it on precisions bigger than 64 : )
decimal.Big CF - 3.153319ms
bigfloat - 3.161255ms
Switch to precision 10000 (3011 for the decimal) and:
decimal.Big CF - 110.464324ms
bigfloat - 23.310847ms
Switch to precision 100000 (30103 for the decimal):
decimal.Big CF - 44.37687595s
bigfloat - 658.661543ms
As I wrote above
The best approach is likely to have the implementation switch after a given threshold.
The question is: how small is that threshold? Currently, for exp, it is 2048, but bigfloat is really not tuned to be fast on small precisions. If we can tune it to be faster on smaller precisions, and the threshold drops to, say, 512, is it still worth having two different implementations? I also tested the logarithm, and the threshold seems to be even smaller (around 200 bits).
Anyway:
is this something you'd wanna collaborate on?
Sure. My suggestion (if this proposal is accepted) is to go simple first (a single method), and then evaluate the need for more complicated implementations that switch methods after a given threshold. If the win, on low precisions, is huge, then it may be worth it.
Oh, it's probably not entirely correct. There usually needs to be some fiddling with precision and I just guesstimated the epsilon. (It literally took about 10 minutes to write, just wanted to test it out.)
I wouldn’t compare the decimal version to bigfloat—decimal won’t scale as well because it uses a binary big.Int for its mantissa which makes rounding operations terribly slow. (I just added it in there to see a baseline I’m familiar with.)
I’m down for whatever. I think there’s enough ideas and actual implementations around to satisfy @rsc 🤞
(To clarify "binary" slowness for decimals: big.Int is base 2, which means rounding requires dividing by a power of 10, not just chopping bits off the front of the mantissa like big.Float does. If decimal used a base 10 mantissa it could chop digits off and wouldn't have nearly as large of a slowdown.)
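A toy illustration of that difference (not taken from either library): rounding a binary mantissa away is a shift, while rounding decimal digits off a mantissa stored as a binary big.Int needs a division by a power of ten.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Some large mantissa, stored as a binary big.Int.
	m := new(big.Int).Exp(big.NewInt(7), big.NewInt(100), nil)

	// Dropping 64 low-order bits (binary rounding) is just a shift.
	binChop := new(big.Int).Rsh(m, 64)

	// Dropping 20 low-order decimal digits requires dividing by 10^20.
	pow10 := new(big.Int).Exp(big.NewInt(10), big.NewInt(20), nil)
	decChop := new(big.Int).Quo(m, pow10)

	fmt.Println(binChop.BitLen(), decChop.BitLen())
}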
FWIW, we talked to @robpike at the proposal meeting and even he's not sure that the Ivy implementations are last-bit correct. Knowing about Taylor series is OK but knowing when to stop is hard. It's also very slow. This is not really in Go's wheelhouse and we don't know how to do it right.
We would love to see this functionality in the Go ecosystem, just not necessarily in the standard library, inside math/big. It seems like third-party packages could take care of this need for the time being. Note that it's fine to take the ivy code and pull out the routines you want as a separate package (preserving the license, of course). Note also that sometimes the last bit may not be right in that code.
The main problem with the ivy code is that the trigonometric functions have trouble reaching perfect 0.
% ivy
sin pi
-2.80651999077e-78
The other functions seem accurate. Log is slow.
The real point here is that a numeric algorithms expert is required for getting the bits guaranteed right, and we don't have one. An external implementation is welcome.
There is already an implementation in @robpike's excellent ivy calculator. However, the code is quite complex and lacks test cases.
Would it make sense to move it to the math/big package?