ma-laforge closed this issue 9 years ago
Sorry, first time with GIST. Here is the correct link: https://gist.github.com/ma-laforge/5c054cb3e4cc866c9046
cc @stevengj
I used to be an expert on this (I studied EE with an emphasis on signal processing and worked in the communications industry), but that was long ago.
What I remember: treating a (finite-length) FFT as an ideal time-to-frequency transform is misleading, because you actually transform the waveform multiplied by a windowing function. If you do nothing, you see the effect of a rectangular window applied to your original function, and some of the energy therefore shows up in bins other than the 'central' peak (there is some math to this that I don't remember).
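To illustrate the point about the implicit rectangular window, here is a minimal sketch (assuming a recent Julia with the FFTW.jl package; in the 0.3-era Julia discussed here rfft lived in Base, and the parameter values are purely illustrative):

```julia
using FFTW

N, fs = 1024, 1024.0                  # 1 Hz bin spacing
t = (0:N-1) ./ fs
x = sin.(2π .* 100.5 .* t)            # 100.5 Hz: falls between two bins

# Normalized magnitude spectrum in dB.
spec(y) = (A = abs.(rfft(y)); 20 .* log10.(A ./ maximum(A)))

# Hann window: trades a wider main lobe for much lower far-out sidelobes.
hann = 0.5 .- 0.5 .* cos.(2π .* (0:N-1) ./ N)

println(spec(x)[300])          # rectangular window: clear leakage ~200 bins away
println(spec(x .* hann)[300])  # Hann window: leakage orders of magnitude lower
```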
There is also a numerical impact, and that's the main reason there are all these radix-N (N = 2, 4, etc.) building blocks for FFTs. AFAIR you lose roughly 6 dB (1 bit) per stage on top of the resolution (quantisation noise) of the underlying data type, and using floating point unfortunately introduces implicit non-linearities. I remember block floating point being used for exactly that reason (if you go implementing an FFT on chip).
So someone should first check the expected SNR of a radix-2 (I guess) implementation in Float64.
@lobingera, the "one-bit-per-stage" that you are remembering is not relevant here.
For fixed-point FFTs with typical scaling, you lose "one bit per (radix-2) stage" in absolute error, or O(N) error growth for an N-point FFT. Since the overall root-mean-square magnitude grows as O(sqrt(N)), this gives a relative error growth of O(sqrt(N)), often described as "half a bit per stage".
However, in a floating-point FFT (as in Julia), the roundoff error growth is much better. The worst-case relative error grows as O(log(N)), and the root-mean-square relative error grows as O(sqrt(log(N))), which is almost indistinguishable from O(1). See http://fftw.org/accuracy/comments.html ... the essential reason for the slow error growth is that FFTs boil down to pairwise summation of the DFT. As a result, Julia's FFTs (which call FFTW) are accurate to nearly machine precision even for very large sizes.
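As a quick empirical check of that claim, one can look at the forward/inverse roundtrip error as a rough proxy for the transform's relative rms roundoff (a sketch assuming FFTW.jl and LinearAlgebra; the sizes are arbitrary):

```julia
using FFTW, LinearAlgebra

for N in (2^10, 2^14, 2^18, 2^22)
    x = randn(N)
    # Relative L2 error of an rfft/irfft roundtrip; it stays near eps(Float64)
    # even for large N, consistent with the slow O(sqrt(log N)) rms growth.
    err = norm(irfft(rfft(x), N) .- x) / norm(x)
    println("N = $N: relative roundtrip error ≈ $err")
end
```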
So, if there is an accuracy problem here, it isn't due to roundoff. A big problem is a common confusion: FFTs compute the discrete Fourier transform (DFT), which is not the same thing as a continuous Fourier transform. (e.g. the DFT of a sinusoid is not a discrete delta function unless the sinusoid has a frequency that is an integer multiple of 2π/T where T is the overall sampling time.) I think @ma-laforge needs to re-think what he/she is computing.
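A small sketch of that last point (again assuming FFTW.jl; the frequencies are arbitrary): a sinusoid that completes an integer number of cycles in the sampled interval occupies a single DFT bin, while any other frequency spreads energy across all bins.

```julia
using FFTW

N, fs = 1000, 1000.0
t = (0:N-1) ./ fs                       # T = 1 s, so the bin spacing is 1 Hz

X_on  = rfft(sin.(2π .* 50.0 .* t))     # 50 cycles in T: lands exactly on a bin
X_off = rfft(sin.(2π .* 50.4 .* t))     # non-integer number of cycles in T

# Count bins carrying more than roundoff-level energy.
println(count(>(1e-8), abs.(X_on)))     # 1: a single "discrete delta"
println(count(>(1e-8), abs.(X_off)))    # hundreds: energy spread over all bins
```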
Thanks to everyone for the response. I guess I should have done a few more tests before raising this "issue".
The problem with my old code appears to be caused by accuracy issues in representing time... which then translated into small amplitude errors. I fixed the more obvious errors in my gist: https://gist.github.com/ma-laforge/5c054cb3e4cc866c9046
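For what it's worth, here is a hypothetical illustration of how time-representation errors of that kind can creep in, assuming the time vector is built by accumulating a step (the actual gist may construct it differently; the sample rate and length below are made up):

```julia
fs = 100e3                      # 100 kHz sample rate (made up)
Δt = 1 / fs
N  = 100_000

# Accumulating the time step lets rounding errors build up over N additions...
t_acc = zeros(N)
for n in 2:N
    t_acc[n] = t_acc[n-1] + Δt
end

# ...whereas computing each sample time directly keeps the error near eps.
t_exact = (0:N-1) ./ fs

x_acc   = sin.(2π .* 5e3 .* t_acc)
x_exact = sin.(2π .* 5e3 .* t_exact)
println(maximum(abs.(x_acc .- x_exact)))   # orders of magnitude above eps(Float64)
```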
I guess I got caught off-guard when I saw that truncating data to the lower accuracy of Float32 "fixed" the accumulated errors somehow. The idea of this improving accuracy just seemed like too much of a coincidence to be possible.
Note that I now also compute the noise more directly, to avoid accumulation issues with values smaller than eps(Float64) (though I don't like how the code looks now).
The resulting SNR with the improved calculations is around 163 dB. Since that is near the relative eps() of Float64, it seems adequate.
Sorry for the confusion.
Hello,
I have noticed what appears to be an accuracy issue with rfft. I have confirmed it is still present in v0.3.10.
Ref platform: Intel i5 running Windows 7.
I have some test code in a GIST: https://gist.github.com/5c054cb3e4cc866c9046.git
Description: the test computes the SNR of a 5 kHz sinusoidal signal from the output of rfft. Since this is a pure sine wave, I should be able to get a really good SNR using Float64 data/calculations.
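In rough outline (the gist is the authoritative version; the sample rate, FFT length, and the way signal and noise bins are separated below are guesses, and a current Julia with FFTW.jl is assumed), the test looks something like:

```julia
using FFTW

fs = 1.28e6                     # chosen so 5 kHz is an exact multiple of fs/N
N  = 2^16                       # bin spacing fs/N = 19.53125 Hz; 5 kHz = 256 bins
t  = (0:N-1) ./ fs
x  = sin.(2π .* 5e3 .* t)

X      = abs2.(rfft(x))         # power per bin
kpeak  = argmax(X)              # bin holding the 5 kHz tone
Psig   = X[kpeak]
Pnoise = sum(X) - Psig          # treat everything else as noise
println("SNR ≈ $(10 * log10(Psig / Pnoise)) dB")
```

The exact dB figure a sketch like this reports depends on those choices, so it won't necessarily match the numbers below.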
However, running the test gives the following results:
The good: 134 dB is better than what a Float32 FFT returns.
The bad: 134 dB is still far below what Float64 precision should allow.
Best regards, MA