argriffing opened this issue 9 years ago
Well, 1F1 should already do the job. We could perhaps optimize 1F1 for this important special case.
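For reference, the relevant identity is exprel(x) = (e^x - 1)/x = 1F1(1, 2, x). A minimal sketch of the special case via the general function, assuming the arb_hypgeom_1f1 interface (the wrapper name is hypothetical):

```c
#include "arb_hypgeom.h"

/* exprel(x) = (e^x - 1)/x = 1F1(1, 2, x).
   Hypothetical wrapper; not an existing arb function. */
void exprel_via_1f1(arb_t res, const arb_t x, slong prec)
{
    arb_t a, b;
    arb_init(a);
    arb_init(b);
    arb_one(a);        /* a = 1 */
    arb_set_ui(b, 2);  /* b = 2 */
    arb_hypgeom_1f1(res, a, b, x, 0, prec);  /* regularized = 0 */
    arb_clear(a);
    arb_clear(b);
}
```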
Yes, I don't have a good feel for how good the arb hypgeom functions are compared to their specializations. For example, as I was learning arb I looked at a couple of naive ways to compute sinc:

1. sin(x) / x
2. 0F1(3/2, -(x/2)^2)

When x is near zero, sin(x)/x is not so great, presumably because it doesn't know that the two instances of x are 'the same'. But the hypergeometric formula is good near zero. On the other hand, at large x like 50 +/- 1e-16, the hypergeometric formula gives a useless radius like 100 or 1000, whereas the sin(x)/x radius is good. Presumably this is because sin(x) is heavily optimized, and at large x it doesn't matter so much whether the two instances of x in sin(x)/x are the same or not. Anyway, my point isn't that 0F1 is bad, but rather that I don't have good intuition about when a hypgeom function should do the job without extra attention.
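For concreteness, here is roughly what those two evaluations look like in arb (a sketch; the wrapper names are made up, and I'm assuming the arb_hypgeom_0f1 interface):

```c
#include "arb.h"
#include "arb_hypgeom.h"

/* 1) sinc via direct evaluation of sin(x) / x */
void sinc_naive(arb_t res, const arb_t x, slong prec)
{
    arb_t s;
    arb_init(s);
    arb_sin(s, x, prec);
    arb_div(res, s, x, prec);
    arb_clear(s);
}

/* 2) sinc via the hypergeometric formula 0F1(3/2, -(x/2)^2) */
void sinc_0f1(arb_t res, const arb_t x, slong prec)
{
    arb_t b, z;
    arb_init(b);
    arb_init(z);
    arb_set_d(b, 1.5);           /* b = 3/2, exact in binary */
    arb_mul_2exp_si(z, x, -1);   /* z = x/2 */
    arb_sqr(z, z, prec);         /* z = (x/2)^2 */
    arb_neg(z, z);               /* z = -(x/2)^2 */
    arb_hypgeom_0f1(res, b, z, 0, prec);
    arb_clear(b);
    arb_clear(z);
}
```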
Your observations are completely correct.
In general, to evaluate f(x) on the interval X = [m +/- r], we can evaluate the point value f(m) (easy) and bound the propagated error separately. A generic bound for the propagated error is r * sup_{t in X} |f'(t)|.
For the sinc function, we might simply use the convenient bound |f'(t)| < 1/2.
The generic bound becomes quite poor for oscillatory functions when r is very large, say if X = [1 +/- 1000]. When this happens, we can return a generic enclosure, say sinc(X) = [0 +/- 1].
Of course, this is all only relevant for input close to zero; if |m| - r >= 1, say, the whole interval is bounded away from the origin and we can just evaluate sin(X) / X directly.
You can see in arb/sin_cos.c how this is done for sine and cosine.
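For sinc specifically, a minimal sketch of that case analysis might look as follows (the function name, the cutoffs, and the structure are illustrative assumptions, not arb's actual code):

```c
#include "arb.h"

/* Sketch of the strategy above for sinc; cutoffs are illustrative. */
void sinc_enclosure(arb_t res, const arb_t x, slong prec)
{
    mag_t lo;
    mag_init(lo);
    arb_get_mag_lower(lo, x);  /* lower bound for |m| - r */

    if (mag_cmp_2exp_si(lo, 0) >= 0)
    {
        /* interval away from the origin: evaluate sin(X) / X directly */
        arb_sin(res, x, prec);
        arb_div(res, res, x, prec);
    }
    else if (mag_cmp_2exp_si(arb_radref(x), 1) < 0)
    {
        /* moderate radius: midpoint value plus propagated error
           r * sup |sinc'(t)| <= r / 2; assumes the midpoint is
           nonzero (a real implementation would switch to the
           series for m near 0, as discussed above) */
        arb_t m;
        mag_t err;
        arb_init(m);
        mag_init(err);
        arb_get_mid_arb(m, x);                    /* exact midpoint */
        mag_mul_2exp_si(err, arb_radref(x), -1);  /* r / 2 */
        arb_sin(res, m, prec);
        arb_div(res, res, m, prec);
        arb_add_error_mag(res, err);
        arb_clear(m);
        mag_clear(err);
    }
    else
    {
        /* huge radius: fall back to the generic enclosure [0 +/- 1] */
        arb_zero(res);
        mag_one(arb_radref(res));
    }

    mag_clear(lo);
}
```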
This method easily generalizes to complex functions.
For the hypergeometric function, the asymptotic expansion is used when x is sufficiently large. You can check this with x = 80 +/- 1e-16 (at 53-bit precision) or with x = 60 +/- 1e-16 and lower precision. But the transition region is problematic, as you observed.
The current algorithm selection between the hypergeometric power series and the asymptotic series is not optimal. It basically assumes that the argument is positive so that cancellation does not occur, and only switches to the asymptotic series when that gives full accuracy. What it should do is estimate the accuracy of each method and pick the better one.
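A sketch of that selection strategy, using arb_rel_accuracy_bits to compare the two results (the series evaluators named here are hypothetical placeholders for arb's internal routines, so this only illustrates the idea):

```c
#include "arb.h"

/* Hypothetical internal evaluators (not real arb functions). */
void eval_1f1_series(arb_t res, const arb_t a, const arb_t b,
                     const arb_t x, slong prec);
void eval_1f1_asymptotic(arb_t res, const arb_t a, const arb_t b,
                         const arb_t x, slong prec);

/* Evaluate 1F1 with both the convergent power series and the
   asymptotic series, and keep whichever ball is tighter. */
void hypgeom_1f1_best(arb_t res, const arb_t a, const arb_t b,
                      const arb_t x, slong prec)
{
    arb_t v1, v2;
    arb_init(v1);
    arb_init(v2);

    eval_1f1_series(v1, a, b, x, prec);
    eval_1f1_asymptotic(v2, a, b, x, prec);

    if (arb_rel_accuracy_bits(v1) >= arb_rel_accuracy_bits(v2))
        arb_set(res, v1);
    else
        arb_set(res, v2);

    arb_clear(v1);
    arb_clear(v2);
}
```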
That alone would improve accuracy, but would not completely solve the problem. Derivative bounds should ultimately be used here as well. However, working out tight bounds for derivatives of hypergeometric and/or Bessel functions gets difficult in the transition regions when you consider large and/or complex parameters (by the parameters of 1F1(a,b,x) I mean a and b).
By the way, 1F1 should give fine output everywhere when x is real; there is only going to be a problem for exprel with complex arguments (0F1 of real argument ~= 1F1 of imaginary argument). But we could still optimize for speed.
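To illustrate the complex case: sinc(x) can be recovered as the real part of exprel(i*x) = 1F1(1, 2, i*x). A sketch using acb_hypgeom_m (the wrapper name is hypothetical); this is exactly the path where the transition-region accuracy loss shows up:

```c
#include "acb_hypgeom.h"

/* sinc(x) = Re(exprel(i*x)) = Re(1F1(1, 2, i*x)).
   Hypothetical wrapper illustrating the complex-argument case. */
void sinc_via_complex_1f1(arb_t res, const arb_t x, slong prec)
{
    acb_t a, b, z, w;
    acb_init(a);
    acb_init(b);
    acb_init(z);
    acb_init(w);

    acb_one(a);                  /* a = 1 */
    acb_set_ui(b, 2);            /* b = 2 */
    acb_zero(z);
    arb_set(acb_imagref(z), x);  /* z = i*x */

    acb_hypgeom_m(w, a, b, z, 0, prec);
    arb_set(res, acb_realref(w));

    acb_clear(a);
    acb_clear(b);
    acb_clear(z);
    acb_clear(w);
}
```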
There is a family of relative exponential functions in GSL. I wonder if it's worth adding some of these to arb? I think the real part of exprel(i*x) is sinc(x), and sinc is listed in https://github.com/fredrik-johansson/arb/issues/35. They are also related to exponential integrators, where they are called phi functions; see for example http://core.ac.uk/download/pdf/1633681.pdf and http://www1.maths.leeds.ac.uk/~jitse/phikrylov.pdf
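If these were added, the whole GSL family reduces to one identity: exprel_n(x) = n! * sum_{k>=0} x^k / (n+k)! = 1F1(1, n+1, x), and the phi functions satisfy phi_n(x) = exprel_n(x) / n!. A sketch, assuming the arb_hypgeom_1f1 interface (the wrapper name is hypothetical):

```c
#include "arb_hypgeom.h"

/* GSL-style exprel_n(x) = 1F1(1, n+1, x); the n = 1 case is the
   plain exprel. Hypothetical wrapper; not an existing arb function. */
void exprel_n_via_1f1(arb_t res, ulong n, const arb_t x, slong prec)
{
    arb_t a, b;
    arb_init(a);
    arb_init(b);
    arb_one(a);            /* a = 1 */
    arb_set_ui(b, n + 1);  /* b = n + 1 */
    arb_hypgeom_1f1(res, a, b, x, 0, prec);
    arb_clear(a);
    arb_clear(b);
}
```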
A recent preprint using exprel: http://arxiv.org/pdf/1504.01804.pdf