JuliaLang / julia

The Julia Programming Language
https://julialang.org/
MIT License

Improve accuracy of `asind`/`acosd` #26371

Open simonbyrne opened 6 years ago

simonbyrne commented 6 years ago
julia> acosd(0.5)
60.00000000000001

julia> asind(0.5)
30.000000000000004

Now that we have asin/acos implemented in Julia, it should be possible to improve the accuracy of asind/acosd using extended-precision tricks similar to the ones we use for sind/cosd.
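For concreteness, the naive conversion is easy to reproduce outside Julia too; here is a small sketch in Python (the IEEE binary64 arithmetic is identical, and `asind_naive` is a made-up name, not anyone's actual implementation):

```python
import math

# Naive asind, mirroring asind(x) = rad2deg(asin(x)): asin's radian
# result is accurate to within an ulp, but multiplying by the rounded
# constant 180/pi introduces a second rounding, which is what this
# issue is about.
def asind_naive(x):
    return math.degrees(math.asin(x))

print(asind_naive(0.5))    # 30.000000000000004, not 30.0
```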

Ellipse0934 commented 6 years ago

1. `asind` and `acosd` are already pretty accurate. Testing 1e7 arbitrary values (with `r = rand()`), I found that `abs(Float64(asind(big(r))) - asind(r)) <= eps(r)` held every time.
2. The definition of `asind` just converts the radian result of `asin` to degrees, i.e. `asind(x) = asin(x)*180/pi`. It is not possible to get exactly 30 using this method: `asin(0.5)*180/pi == 30.000000000000004`, and `(asin(0.5) - eps(asin(0.5)))*180/pi == 29.999999999999996`, so to get 30 we would have to subtract `eps(asin(0.5))` from `asin(0.5)`. At times we need to add `eps(asind(arg))` and at times subtract it, and in some cases add or subtract it twice to get equality. I couldn't see any pattern for when to add and when to subtract; it seemed arbitrary. I don't see any modification route here.
3. The only solution I see is to come up with a new rational polynomial approximation (via the Remez algorithm) that maps [-1, 1] directly to degrees, or just to use `asind(big(arg))`.
4. GNU C has arguably one of the best implementations of `asin` and `acos`, but even those are not accurate for all inputs. However, like ours, they are always within 1 or 2 floating-point values of the true answer.
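The one-ulp straddling described in point 2 can be reproduced directly; a Python illustration (the doubles involved are the same as Julia's):

```python
import math

t = math.asin(0.5)              # ~pi/6 in double precision
down = math.nextafter(t, 0.0)   # one ulp smaller

# The exact answer 30 falls strictly between the two converted
# neighbouring doubles, so no single-double input can land on it.
print(math.degrees(t))          # 30.000000000000004
print(math.degrees(down))       # 29.999999999999996
```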

simonbyrne commented 6 years ago

Yes, what I meant was that we unroll the asind computation slightly, carrying a little extra precision so that the radian-to-degree conversion can be done more accurately. This is roughly the reverse of what sind does: there, the degree-to-radian conversion is done in slightly higher precision, and the result is fed into the kernel routines.

I don't think it should require a different polynomial approximation, and we definitely don't want to do big anywhere, as that will be a performance nightmare.

arghhhh commented 6 years ago

Somewhat related: Julia doesn't have asinpi, acospi, atanpi, or atan2pi. These are perhaps less important than the forward variants Julia does support (sinpi, cospi), which are useful in digital signal processing (and arguably should be used more than they are).
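For what it's worth, the naive definitions of the missing inverse functions are one-liners, with the same conversion-rounding caveat discussed above; a Python sketch with hypothetical names:

```python
import math

# Hypothetical naive definitions of the missing inverse-pi functions.
# Each inherits a rounding error from the division by the rounded pi,
# just as asind inherits one from the multiplication by 180/pi.
def asinpi(x):
    return math.asin(x) / math.pi

def acospi(x):
    return math.acos(x) / math.pi

def atanpi(x):
    return math.atan(x) / math.pi

def atan2pi(y, x):
    return math.atan2(y, x) / math.pi

print(asinpi(1.0))   # 0.5
```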

StefanKarpinski commented 6 years ago

It would certainly be nice to round out the set if anyone's interested in working on such a project. The sheer number of these functions does make me wonder if it would make sense to have a TrigExtras stdlib package or something...

simonbyrne commented 6 years ago

atanpi and atan2pi are mentioned as "recommended functions" in the IEEE spec, but interestingly asinpi and acospi are not.

oscardssmith commented 3 years ago

Thinking about this a little, it might make sense to make sinpi and friends the accurate versions of these functions, and have sin wrap sinpi. The advantage is that sinpi and co. would automatically get perfect accuracy in all the special cases, so it should be easier to get the rest of the function in line.
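The special-case point can be illustrated with a toy sinpi; a hedged Python sketch (`sinpi_toy` is made up, and this is not Julia's implementation): because the reduction modulo 2 is exact in floating point for moderate arguments, integer and half-integer inputs hit their special values exactly, whereas `sin(pi*x)` cannot.

```python
import math

def sinpi_toy(x):
    """Toy sinpi(x) = sin(pi*x). The reduction below is exact for
    moderate |x|, so the special cases come out exact automatically."""
    r = x - 2.0 * round(x / 2.0)      # r in [-1, 1]; reduction mod 2 is exact
    if r == -1.0 or r == 0.0 or r == 1.0:
        return 0.0                    # sin of an integer multiple of pi
    if abs(r) == 0.5:
        return math.copysign(1.0, r)  # sin of an odd multiple of pi/2
    return math.sin(math.pi * r)      # interior accuracy not improved here

print(sinpi_toy(100.0))               # 0.0, exactly
print(math.sin(math.pi * 100.0))      # small but nonzero
```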

bobcassels commented 1 year ago

The current implementations of sinpi and cospi are quite accurate, with accuracy similar to that of sin and cos. My naive measurements of RMS relative error (comparing against BigFloat) for all of those are on the order of:

tan is slightly worse:

For sin to use sinpi_kernel, you'd want to have a version of rem_pio2_kernel that returned the remainder as a fraction of pi. I'm not smart enough or compulsive enough to do that other than completely naively. But if someone builds that new rem_pio2_kernel, I will modify sin and cos to use it and see what the accuracy and speed are for those versions.

BTW, I see that tan uses tan_kernel. I was completely intimidated by that function, so when I (recently) implemented tanpi, I used sinpi_kernel and cospi_kernel. If someone wants to write a tanpi_kernel, I'll be happy to experiment with it and to use it for tanpi, tan and tand. tanpi currently has slightly worse accuracy for Float64:

Also, I experimented with implementing sind using sinpi. I lost a little bit of accuracy (5.4e-17 for Float64), but sped it up a bit (8% for Float64). I got confused when my measured speed for Float16 was slower than Float32 (64-bit ARM Mac M2), and have paused that effort for now.

oscardssmith commented 1 year ago

I think it would probably go better to implement sinpi with a sind kernel (but that would require writing one).
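One attraction of a degree-based kernel, for what it's worth: argument reduction is exact in degrees, because 360 is exactly representable, whereas reduction modulo 2π always rounds. A small Python illustration (a sketch of the property, not of the proposed kernel):

```python
import math

# Reducing a degree argument mod 360 is exact: fmod is computed exactly
# in IEEE arithmetic, and 360.0 is exactly representable, so no error
# is introduced before the polynomial kernel would run.
x = 123456819.0
r = math.fmod(x, 360.0)
print(r)            # 219.0, with no rounding error at all

# By contrast, reducing a radian argument mod 2*pi necessarily rounds,
# because 2*pi itself is not exactly representable.
```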