DiamonDinoia opened 2 months ago
I don't have much, at least not much that can be referenced. The function I use is a slightly generalized version of your exponential-of-semicircle kernel, with the square root replaced by an exponentiation with a new free parameter, which is of course in the vicinity of 0.5 (equation 29 in https://arxiv.org/abs/2010.10122).
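For concreteness, here is a minimal sketch of such a generalized ES kernel, assuming the form exp(β((1−z²)^e − 1)), where the exponent e is the new free parameter and e = 0.5 recovers the standard exponential-of-semicircle kernel; the function name is illustrative, not ducc's actual API:

```python
import numpy as np

def gen_es_kernel(z, beta, e=0.5):
    """Generalized exponential-of-semicircle kernel on z in [-1, 1].

    e = 0.5 recovers the standard ES kernel exp(beta*(sqrt(1-z^2)-1));
    e is the additional free parameter, typically near 0.5
    (cf. eq. 29 of arXiv:2010.10122).
    """
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)              # kernel vanishes outside |z| < 1
    inside = np.abs(z) < 1.0
    out[inside] = np.exp(beta * ((1.0 - z[inside] ** 2) ** e - 1.0))
    return out
```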
I can offer some very hackish Python code which tries to find near optimal kernel parameters when provided with a kernel support width and an oversampling factor. This is https://gitlab.mpcdf.mpg.de/mtr/ducc/-/blob/ducc0/not_yet_integrated/kernel_helper_new.py?ref_type=heads
Using this code, I have built my kernel "database" for a grid of support widths and oversampling factors. Please note that I'm using an enhanced error model to determine the accuracy of the resulting kernel; it has more components than finufft's, and it avoids the odd behavior where accuracy degrades while "improving" the kernel close to machine epsilon.
You may get better results overall with well-tuned prolate spheroidal wave functions, but I don't expect a really significant improvement. The standard ES kernel is already really good, and the additional parameter doesn't gain very much.
@DiamonDinoia would you test first if, for opts.kerevalmeth=1 only, switching from the "reference" evaluatekernel in finufft.cpp:onedim* to your templated ker_eval causes negligible change in errors... thanks, Alex
I do not see any difference between master and vector two using spreadtestall. I could print more digits to see if there are differences that escape the print.
Hi Marco, I'm talking about finufft itself, which uses "reference" (not Horner) kernel in onedim_* - best, Alex
I'm currently working on generalizing ducc's kernel_helper tool. It seems to work pretty quickly now, but changing the actual kernel function requires changes in the C++ code and is therefore not something that can be done on the user side (yet...).
If you can provide me with a C++ implementation of any desired kernel function, as well as the admissible ranges of its tweaking parameters, I'm happy to compile a table of optimal parameter sets for different support widths and oversampling factors.
Update: the current HEAD of the ducc0 branch now supports specifying the kernel function in Python. Please have a look at python/demos/kernel_helper.py, which generates parameter tables for standard ES kernels, Gaussians, and generalized ES kernels. It should be straightforward to generalize this to PSWFs.
I've added Kaiser-Bessel functions to the test script; concerning accuracy, they seem to lie between standard ES and generalized ES kernels.
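For reference, the standard Kaiser-Bessel window can be sketched as below; this is the textbook form normalized to 1 at z = 0, not necessarily the exact variant used in the test script:

```python
import numpy as np

def kaiser_bessel(z, beta):
    """Standard Kaiser-Bessel window on [-1, 1], normalized to 1 at z = 0."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)              # zero outside the support
    m = np.abs(z) < 1.0
    # np.i0 is the modified Bessel function of the first kind, order 0
    out[m] = np.i0(beta * np.sqrt(1.0 - z[m] ** 2)) / np.i0(beta)
    return out
```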
@mreineck you mentioned in the past that by playing with the parameters of the spreading function it is possible to achieve more digits at the same width. Do you have a summary of your experiments that we can use as a reference?
On our side we are thinking of moving to prolate spheroidal wave functions (PSWFs). For further reading: DMK.
The reason PSWFs were not used in the past is that they are expensive to evaluate at runtime. But this is not an issue anymore, since we tabulate the function.
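To illustrate what tabulation buys, here is a minimal sketch assuming a plain lookup table with linear interpolation (real spreaders typically use piecewise polynomials evaluated with Horner's rule; all names here are illustrative): the expensive kernel is sampled once up front, and runtime evaluation no longer depends on the kernel's cost.

```python
import numpy as np

def tabulate(kernel, n=4097):
    # sample the (possibly expensive) kernel once on a fine grid over [-1, 1]
    xs = np.linspace(-1.0, 1.0, n)
    return xs, kernel(xs)

def eval_tabulated(xs, ys, z):
    # runtime evaluation is just interpolation in the table,
    # independent of how costly the original kernel is
    return np.interp(z, xs, ys)

# example with a standard ES kernel
beta = 10.0
es = lambda z: np.exp(beta * (np.sqrt(np.maximum(0.0, 1.0 - z ** 2)) - 1.0))
xs, ys = tabulate(es)
```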