Closed kburns closed 2 years ago
Thanks, @kburns!
Point (1) seems to be a straightforward bug :)
The rest are simply not implemented yet, but I would welcome contributions here! Which ufuncs would you need?
Mainly just the basic exponential and trig / trig inverse functions. On a side note -- is a software-based quad precision (with extended range) at all in-scope for you or this project, or just focusing on extended precision with double-double? Thanks!
> Mainly just the basic exponential and trig / trig inverse functions.
If you're interested in working on those: I simply took the implementations from the QD library and modified them to use the C functions in this library. Simply follow `u_expq` and `expq` in the code to get an idea of how it is set up.
> On a side note -- is a software-based quad precision (with extended range) at all in-scope for you or this project, or just focusing on extended precision with double-double?
I am not sure what you mean ... `ddouble` is a software-based quad precision library (105 bits of mantissa) rather than just x64 extended precision (63 bits of mantissa). Or are you referring to software emulation of IEEE binary128? That should be rather slow in software, but I'd be willing to add it (as well as octuple precision etc.)
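To make the double-double idea concrete: two ordinary doubles `(hi, lo)` together carry about 2 × 53 bits of mantissa, built on error-free transformations. A minimal Python sketch (illustrative only -- the actual library implements this in C):

```python
# Knuth's TwoSum: an error-free transformation. In exact arithmetic,
# s + err == a + b, where s is the rounded sum and err the rounding error.
# A double-double value is simply such a (hi, lo) pair of doubles.

def two_sum(a: float, b: float) -> tuple[float, float]:
    s = a + b
    bb = s - a                       # the part of b actually absorbed into s
    err = (a - (s - bb)) + (b - bb)  # the rounding error lost from s
    return s, err

# hi alone loses the tiny addend; the pair (hi, lo) preserves it exactly.
hi, lo = two_sum(1.0, 1e-20)
```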
Yes, I meant IEEE binary128. It looks like other packages for extended-range floats in Python (e.g. gmpy2) don't work with numpy nearly as well as your `ddouble` approach does, so I think it might be very useful to have available basically no matter the cost.
Hm, there I actually think it would probably be faster and more powerful to generalize double-double to n-double, similar to the MultiFloats.jl package. Then one could select the precision on the fly in increments of 53 bits ... the only downside is that the exponent range stays the same.
But again, I think it would be a nice addition, please feel free to open a PR!
Fixed by #11
Awesome, huge thanks to @AaronDJohnson and @mwallerb!
Thanks so much for this package! I'm updating some of my code to use it as an alternative to `np.longdouble`, and while it's generally gone extremely well, I've hit a few differences from other numpy types. So far I think I'm seeing:

1. `np.log(ddouble.type(0))` gives `nan` instead of `-inf`, which you get with numpy int and float 0.
2. Powers of ddoubles (either with ints/floats/ddoubles in the exponent) are not supported.
3. Some ufuncs like `arccos` do not work.

I can work around some of these easily, but it would be great to know which (if any) might be possible to fix, or if support for these isn't currently planned, etc. Thanks!
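For reference, point (1) is comparing against standard numpy behavior, where the log of zero yields `-inf` (alongside a divide-by-zero warning):

```python
import numpy as np

# Plain numpy floats return -inf for log(0); the report above is that
# the ddouble dtype returned nan in the same situation.
with np.errstate(divide="ignore"):
    print(np.log(np.float64(0.0)))  # -inf
```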