In my opinion the Codacy suggestions would make the code worse, but I could change it.
No need to change that.
Can we avoid the duplicated Illinois code (and maybe some other duplicated code if I missed it)?
Ah, since you need to call the platt_interpolate code for evaluation, perhaps that is not so easy.
At least one helper function is copy-pasted with no modification, like the code for the weighted arithmetic mean. Should that function be made public and reused across files? Or should it be made non-static, without adding a declaration to any header file and without documenting it, and then reused across files?
It's not a huge function so I suppose it's not a big deal to leave two copies where they are.
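For concreteness, here is a minimal sketch of the kind of static helper in question, computing the weighted arithmetic mean sum(w_i x_i) / sum(w_i) over arb balls; the name and signature are illustrative, not the actual code in the PR:

```c
#include "arb.h"

/* Illustrative sketch of a static weighted-mean helper; not the PR's code. */
static void
_arb_weighted_mean(arb_t res, arb_srcptr x, arb_srcptr w, slong n, slong prec)
{
    arb_t num, den, t;
    slong i;

    arb_init(num);
    arb_init(den);
    arb_init(t);

    for (i = 0; i < n; i++)
    {
        arb_mul(t, x + i, w + i, prec);   /* w_i * x_i */
        arb_add(num, num, t, prec);
        arb_add(den, den, w + i, prec);
    }

    arb_div(res, num, den, prec);

    arb_clear(num);
    arb_clear(den);
    arb_clear(t);
}
```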
The refinement in this PR is pretty bad, but I'm not sure of the best way to improve it. With the old way of direct evaluation, refinement can give arbitrarily narrow intervals. But with Platt's interpolation on a pre-evaluated grid, the width of the refined intervals becomes limited by the grid shape and by the errors in the evaluations of the grid points. I think this violates the assumptions of the Illinois refinement code in a way that yields correct but unnecessarily wide intervals.
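For context, the Illinois method is a modified regula falsi that halves the retained endpoint's function value when the same endpoint survives two steps in a row, which keeps the bracket shrinking. A toy double-precision sketch of the scheme (the arb_calc version works with rigorous sign evaluations and arf interval endpoints instead, and must handle the case where a sign cannot be determined):

```c
#include <math.h>
#include <stdio.h>

/* Toy Illinois refinement sketch; assumes f(a) and f(b) have opposite signs. */
static double
illinois(double (*f)(double), double a, double b, double tol)
{
    double fa = f(a), fb = f(b), c = a, fc;
    int kept_a = 0;  /* was endpoint a retained in the previous step? */

    while (fabs(b - a) > tol)
    {
        c = (a * fb - b * fa) / (fb - fa);  /* false-position point */
        fc = f(c);
        if (fc == 0.0)
            break;
        if (fc * fb < 0.0)
        {
            /* root in [c, b]: the old b becomes the retained endpoint */
            a = b; fa = fb;
            kept_a = 0;
        }
        else
        {
            /* root in [a, c]: a retained again; halve fa to force progress */
            if (kept_a)
                fa *= 0.5;
            kept_a = 1;
        }
        b = c; fb = fc;
    }
    return c;
}

int main(void)
{
    /* example: refine the zero of sin(x) between 3 and 4 (i.e. pi) */
    printf("%.15f\n", illinois(sin, 3.0, 4.0, 1e-14));
    return 0;
}
```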
The redundant-looking code related to Turing's method differs from the old code (the small-number-of-zeros code already in arb) in a few ways. In the old algorithm it was essentially always possible to evaluate functions to a definite sign by dynamically increasing the precision. With the new algorithm this is not possible after the grid has been established, so several operations that are guaranteed to succeed in the old code must be allowed to fail in the new code (see the sketch after this paragraph). Some changes were also made so that the new zero-isolation loop is less efficient when a tiny number of zeros is requested but more efficient for a large number of zeros. This was done by removing the optimization in https://github.com/fredrik-johansson/arb/pull/263 and by keeping track of the 'supergood' blocks encountered during the traversal; in the old code _isolate_hardy_z_zeros is called repeatedly without tracking state between calls. I suppose that optimization could be put back in, but it would make the code more complicated, and I predict it would help the speed little if at all when more than a few zeros are requested.
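To make the earlier point about failing operations concrete: with values read off a fixed pre-evaluated grid, the sign of a ball can no longer be forced by raising the working precision, so sign queries must admit a third, undetermined outcome. A hypothetical helper (the name is mine, not the PR's):

```c
#include "arb.h"

/* Returns 1 if certainly positive, -1 if certainly negative,
   0 if the ball straddles zero and the caller must handle failure. */
static int
_maybe_sign(const arb_t x)
{
    if (arb_is_positive(x))
        return 1;
    if (arb_is_negative(x))
        return -1;
    return 0;
}
```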
I was thinking of putting the zero-isolation code (without refinement) in the public interface, but it would have needed acb_dirichlet.h to include arb_calc.h for the arf_interval type. What do you think about this?
I think it's OK as-is.
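For reference, the dependency in question would look something like this; the prototype is hypothetical and only shows why arb_calc.h would be pulled in:

```c
/* Hypothetical addition to acb_dirichlet.h, for illustration only. */
#include "arb_calc.h"  /* needed for the arf_interval types */

/* isolation only, no refinement; name and signature are made up here */
slong acb_dirichlet_platt_isolate_zeros(arf_interval_ptr res,
    const fmpz_t n, slong len, slong prec);
```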
This PR implements the first part of the plan outlined in https://github.com/fredrik-johansson/arb/pull/281 (a low-level `zeta_zeros`-like function that takes about a dozen tuning parameters and isolates zeros within a single FFT region). To compare speed, I checked isolation only (not refinement) of 8000 consecutive zeros starting at n=103800788359 (the last zero in Platt's database). The old way took about half an hour and the new way took one minute, so there is some speed improvement (roughly 30x). The parameter values were T=30610047000, A=8, B=4096, prec=300, sigma_grid=1601, K=44, J=103000, h=176431/2048, H=677/512, sigma_interp=25, Ns_max=300.
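As a rough sketch of how the dyadic parameters above could be constructed with arb (the final call is elided because it is exactly the low-level interface this PR introduces; only the value setup is shown):

```c
#include "arb.h"

int main(void)
{
    fmpz_t T, J;
    arb_t h, H;
    slong prec = 300;
    slong A = 8, B = 4096, K = 44;
    slong sigma_grid = 1601, sigma_interp = 25, Ns_max = 300;

    fmpz_init(T); fmpz_init(J);
    arb_init(h); arb_init(H);

    fmpz_set_ui(T, UWORD(30610047000));  /* center of the FFT region */
    fmpz_set_ui(J, 103000);

    /* dyadic tuning parameters: h = 176431/2048, H = 677/512 */
    arb_set_ui(h, 176431);
    arb_mul_2exp_si(h, h, -11);  /* divide by 2^11 = 2048 */
    arb_set_ui(H, 677);
    arb_mul_2exp_si(H, H, -9);   /* divide by 2^9 = 512 */

    flint_printf("h = "); arb_printd(h, 15); flint_printf("\n");
    flint_printf("H = "); arb_printd(H, 15); flint_printf("\n");

    /* ... T, A, B, h, J, K, sigma_grid, H, sigma_interp, Ns_max and prec
       would be passed to the low-level isolation function ... */

    fmpz_clear(T); fmpz_clear(J);
    arb_clear(h); arb_clear(H);
    return 0;
}
```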