p15-git-acc closed this 4 years ago
I think the Sage CI failed for reasons that aren't my fault.
I'm going to close this, make some changes, and reopen. Maybe it will force Sage CI to try again.
The last commit added some accuracy regression tests, removed the wrapped incomplete upper gamma function from the zeta zeros code, and improved the algorithm selection. Instead of checking the 8-norm directly, it first checks the 1-norm and infinity-norm. This should be faster in most cases and is also robust to overflow in the 8-norm; if overflow somehow causes the wrong algorithm to be selected, incrementing the precision eventually causes the other checks to kick in.
Unless there are suggestions, I don't have plans to make more changes to this PR.
It seems fine apart from the potential overflow issue with doubles. I want to be sure that it goes into acb_hypgeom_gamma_upper_asymp when given something like z = 1e10000, s = 1 (and not just by accident).
> I think the Sage CI failed for reasons that aren't my fault.
Yes, don't worry about that.
OK it now uses mag_t instead of machine float. I added and tested an internal helper function to efficiently compare a^n to b^n + c^n.
The helper function isn't rigorous, because it doesn't use intervals and doesn't pay attention to mag rounding. That doesn't matter for what I'm using it for, but maybe it could be changed to one or two one-sided rigorous functions like mag_lt_norm_ui/mag_gt_norm_ui that return 0/1 instead of the current non-rigorous two-sided function that returns -1/0/1. I'll close this PR for now and try that.
I think the previous version using doubles is fine. You just need to add an initial check for whether z is super-huge, and in that case skip the double computation.
What if s is also super-huge though?
You could check that in the same way.
If x and s are both huge then whether or not to use the asymptotic expansion depends on their relative sizes.
I've reopened this with a _mag_gt_norm_ui function that should do the one-sided comparison with correct rounding.
> If x and s are both huge then whether or not to use the asymptotic expansion depends on their relative sizes.
Correct, but if they are both huge then the non-asymptotic expansion will fail anyway.
> I've reopened this with a _mag_gt_norm_ui function that should do the one-sided comparison with correct rounding.
That's fine. I will have a look.
Ok, merging this. Thanks!
When s and z are nonnegative real, the asymptotic expansion is used when

(z - c)^8 > (a*s)^8 + (b*prec)^8

where a = 1.029287542, b = 0.3319411658, c = 2.391097143. For reference, the acb_hypgeom_u_use_asymp condition has a = 0, b = 0.69314718055994530942, c = 0 in this notation.

Closes https://github.com/fredrik-johansson/arb/issues/276. Closes https://github.com/fredrik-johansson/arb/issues/166.
The magic constants were found by locating the best transition point for each point in the grid prec = 16..400, s = 0.5..200.5, then contour-plotting the results, guessing a nonlinear model, and fitting the constants. The mean squared error of the predicted x transition point as a function of s and prec is about 1. The model looks solid enough to me that extrapolating beyond the training grid seems OK.
Machine floats are used for speed, and I'm not sure whether overflow could be a problem. This PR only deals with nonnegative real s and z, so there are still algorithm selection problems when s and z have imaginary parts. I'm not totally sure what's going on at integer values of s. Surprisingly, I haven't noticed any actual transition-region issues; in other words, I haven't found a point where neither the 1f1 nor the asymp algorithm was good enough, but I haven't looked for this specifically.