There are two small problems I have identified at the moment. First, since N is an integer, 1 / N in the scaling transformation evaluates to zero, which leads to the significant difference in the results. Second, there were some occurrences of float instead of double in the generated code, because the scaling transformation ignored the specific complex128 type and applied min_scalar_type() anyway (fixed by a recent commit).
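Both pitfalls can be illustrated in a few lines of plain numpy; this is a sketch of the general behavior, not the actual Tigger code (N = 1024 is an arbitrary example size):

```python
import numpy as np

# Pitfall 1: under Python 2 semantics, dividing two integers truncates,
# so the FFT scaling factor 1 / N silently becomes 0 for any N > 1.
N = 1024
print(1 / N)    # 0 in Python 2 (with "from __future__ import division",
                # or in Python 3, this is 0.0009765625)
print(1.0 / N)  # 0.0009765625 either way

# Pitfall 2: numpy.min_scalar_type() picks the smallest dtype that can
# represent a scalar, regardless of the precision the surrounding
# computation needs, so a constant meant for a complex128 transform
# can come back in a low-precision type.
print(np.min_scalar_type(1.0 / N))  # float16
```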
This still does not explain why the error is the same in single and double precision, though; I will investigate that further.
Oops, indeed. I am used to the from __future__ import division behavior, and I missed this integer division. Thanks, I am waiting for your updates on that.
Thank you very much!
Hello Bogdan,
I am working with FFTs in double precision, and expected an error much smaller than 1e-6, the figure reported for single precision in the pyfft docs. (Is that a reasonable expectation, by the way? The CPU seems to manage it...)
So I tried the (quick and dirty) attached program, which takes a Gaussian as the input signal. Using Tigger, I was expecting to see a big difference between complex128 and complex64, but I don't.
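For reference, here is a minimal sketch of the kind of comparison the attached program makes; the script itself is not reproduced here, so the Gaussian parameters and the relative-L2 error metric below are my assumptions:

```python
import numpy as np

# Hypothetical stand-in for the attached test: build a Gaussian input,
# then measure errors relative to numpy's double-precision FFT.
N = 1024
x = np.exp(-np.linspace(-8, 8, N) ** 2).astype(np.complex128)  # Gaussian

def rel_error(a, b):
    # Relative L2 norm of the difference between two vectors.
    return np.linalg.norm(a - b) / np.linalg.norm(b)

reference = np.fft.fft(x)
# gpu_result = ...  # placeholder for the FFT computed by Tigger
# print(rel_error(gpu_result, reference))

# numpy's own round-trip error, for calibration:
print(rel_error(np.fft.ifft(np.fft.fft(x)), x))  # ~1e-16 in double precision
```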
Here is the output of the attached program. I have also noticed that adding the scale_const(N) transformation increases the error by a huge amount, which I did not expect (it goes back down when lines 29 and 33 are commented out).
FFT error vs numpy implementation : {'double': 1.0374786210703966e-06, 'single': 5.1033833088560074e-06}
FFT => IFFT error: {'double': 0.088621340270303342, 'single': 0.088621340270303342}  # This is huge, and goes to 1e-8 without the scaling
numpy FFT=>IFFT error (complex128): 4.35576763895e-17
numpy FFT=>IFFT error (complex64): 1.50743731241e-11
With PyFFT, single precision : 5.11018247625e-06
With PyFFT, single precision, FFT => IFFT : 2.13039431111e-08

(Also, pyFFT behaves in a weird fashion with complex128: I get NaNs in the output vector.)
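For context on the scale_const(N) step: an unnormalized inverse DFT returns the input multiplied by N, which is why a 1/N factor is applied after the round trip at all. A sketch in plain numpy (numpy's ifft already includes the 1/N, so it is multiplied back out here to mimic an unscaled inverse transform):

```python
import numpy as np

N = 1024
x = np.exp(-np.linspace(-8, 8, N) ** 2).astype(np.complex128)

# Mimic an FFT => IFFT round trip through a core that applies no scaling:
unscaled = np.fft.ifft(np.fft.fft(x)) * N
print(np.allclose(unscaled / N, x))  # True: the 1/N factor restores the input
```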
Is it possible that, for some reason, the FFT is always done in single precision?
Thanks!