kousu opened this issue 8 years ago
...wait. This causes `freq_from_hps()` to give drastically wrong results. Why would that be? Is there a way to salvage it?
SciPy uses FFTPACK, which is optimized for sizes whose only prime factors are 2, 3, and 5. I made a function to find those sizes here:
https://github.com/scipy/scipy/pull/3144
The function is `_next_regular` (https://github.com/endolith/scipy/blob/master/scipy/signal/signaltools.py#L246), but I'm planning to change it to `_next_opt_len` in the future: https://github.com/endolith/scipy/blob/czt/scipy/fftpack/helper.py#L49
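For reference, the idea behind `_next_regular` can be sketched in a few lines of pure Python. This is an illustrative version of the concept (find the smallest 5-smooth number, i.e. one whose only prime factors are 2, 3, and 5, at or above the target), not SciPy's actual implementation:

```python
def next_regular(target):
    """Smallest 5-smooth number (prime factors only 2, 3, 5) >= target.

    Illustrative sketch of the idea behind SciPy's helper; the real
    implementation is more elaborate.
    """
    if target <= 6:
        return target
    best = 1 << (target - 1).bit_length()  # next power of 2 is an upper bound
    p5 = 1
    while p5 < best:
        p35 = p5
        while p35 < best:
            # Smallest power-of-2 multiple of p35 that reaches the target
            quotient = -(-target // p35)  # ceiling division
            candidate = p35 * (1 << (quotient - 1).bit_length())
            if candidate == target:
                return candidate
            if candidate < best:
                best = candidate
            p35 *= 3
        p5 *= 5
    return best
```

For example, `next_regular(12011)` gives 12150 (2 · 3^5 · 5^2), which is far less padding than rounding 12011 up to the next power of 2 (16384).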
Oh great. So will this end up in `numpy.fft.rfft()` and then it will just always be fast? That would be excellent.
In the meantime, I don't suppose you have a clue why the padding I added is breaking your `freq_from_hps()`? I would like to use this function, but it's too slow over a whole corpus. Using the longer FFT means lower frequencies are captured, so I guess your warning about low-frequency noise applies, but I don't know how to fix that. Should it be enough to manually clip the search range?

Oh, or are you saying that I should use `_next_opt_len()` instead of `round2()`?
> So will this end up in `numpy.fft.rfft()` and then it will just always be fast?
No, but czt can replace prime-length FFTs in the future.
> Oh, or are you saying that I should use `_next_opt_len()` instead of `round2()`?
Yes, instead of powers of 2.
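To make the difference concrete, here is a quick comparison using the public `next_fast_len` that SciPy later gained (the exact fast length returned depends on the SciPy FFT backend, so only the relationship is shown, not a specific value):

```python
from scipy.fft import next_fast_len

N = 12011                          # an awkward signal length
pow2 = 1 << (N - 1).bit_length()   # round up to next power of 2: 16384
fast = next_fast_len(N)            # smallest fast composite length >= N

# `fast` is a composite of small primes and sits much closer to N
# than the power-of-2 target, so far fewer padded zeros are needed.
```

Both lengths give a fast FFT; the composite one just wastes less padding, which also distorts the spectrum less.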
> I don't suppose you have a clue why the padding I added is breaking your `freq_from_hps()`?
What are you expecting to get and what are you actually getting?
You know what? It was my sample. I was going to make a before-and-after test case and post it here, but as I waited and waited for the code to run across the vast majority of my corpus, I found that using a faster size or not only changes how the mistakes are made, not where. The mistakes almost always give different pitches, but the files that give trouble tend to be the same (there are 13 the unmolested version gets wrong where the `round2()`'d version doesn't, and 8 vice versa). I think when I posted that I was tired and not looking that closely. I thought my Oboe corpus was drastically worse under rounding.
Thanks again for your code. I've updated the PR as you requested.
Actually, I'd be happier if `next_regular` were just copied into common.py as a public function, instead of importing a private name that only exists in certain SciPy versions and won't exist in future versions.
Now it's a public function, `next_fast_len`.
I don't think this code will work as written. Have you tested it before and after with known frequencies? The new FFT length will be `next_fast_len(N)`, but you're still using `N` to find the frequency in Hz: `fs * i_interp / N`. Won't `i_interp` be shifted, because the new spectrum is stretched out?
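For what it's worth, here is a minimal sketch of that concern using plain peak-picking on a known tone (this is a hypothetical stand-in, not the actual `freq_from_hps()`; `M = 12150` is just the next 5-smooth length above `N`). After zero-padding, the bin spacing is `fs / M`, so dividing the peak index by the original `N` misreports the frequency:

```python
import numpy as np

fs = 48000        # sample rate in Hz
f_true = 1000.0   # known test tone
N = 12011         # original (awkward) signal length
M = 12150         # padded FFT length: next 5-smooth number >= N (2 * 3**5 * 5**2)

t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_true * t)

spectrum = np.abs(np.fft.rfft(x, M))  # rfft zero-pads x to length M
i = int(np.argmax(spectrum))          # peak bin index on the *padded* grid

f_wrong = fs * i / N   # dividing by the original length overshoots
f_right = fs * i / M   # bin spacing is fs / M once you pad
```

With these numbers, `f_right` lands within one bin of 1000 Hz while `f_wrong` is off by roughly the ratio `M / N`, which is presumably why the padded version gave drastically wrong results.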
I am trying to clean up the project, so I can't test this yet.
Thank you for this code. It is very helpful to have all the methods in estimate_frequency.py laid out and contrasted.
I'm not totally sure about the change to `thd`.