Closed teonbrooks closed 8 years ago
This is a limitation in scipy; if you upgrade to the latest scipy master (or wait for 0.16) and use `sosfilt`, it should work.
ok cool. pardon my ignorance, but this function is for second-order filters. are you saying that it can handle fourth-order filters too? I don't know much about filter orders; what exactly do they refer to?
Here order refers to the number of poles or zeros. Your 4th order filter has 4 poles. Second-order sections filtering is just a particular way to perform filtering. Let me explain...
Let's say you have a system with N poles. Your transfer function is typically represented conceptually by the locations of the zeros, poles, and gain (`zpk`) of the system (look this up if you're unfamiliar -- I think it's documented decently in scipy, too). However, when you go to actually do the computation, you have to take the filter out of `zpk` form and put it into something like `b, a` form, which means multiplying out the numerator and denominator polynomials to arrive at filter/delay coefficients. For poles close to the unit circle, numerical precision errors can cause the poles to effectively jump outside the unit circle after this multiplication. This yields an unstable filter, which is what our check catches.
The way to get around this is to take your Nth-order filter and re-express it as a series of 2nd- or 1st-order filters that operate on your data sequentially. Multiplying out these smaller polynomials leads to fewer numerical precision issues. Conceptually there is no difference between what a second-order-sections filter does and what a standard `b, a` form filter does. Numerically, second-order-sections filtering is more stable, and it is designed to deal with exactly the issue you have encountered here.
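The pole-precision problem described above is easy to demonstrate numerically. This is my own illustration, not code from the thread; the 12th-order, 0.1 Hz high-pass is an arbitrary but realistic choice that makes the effect visible:

```python
import numpy as np
from scipy import signal

fs, fp, order = 1000., 0.1, 12  # illustrative parameters

# (b, a) form: the pole polynomial is fully multiplied out
b, a = signal.butter(order, fp / (fs / 2.), 'highpass', output='ba')
ba_radius = np.max(np.abs(np.roots(a)))

# SOS form: poles stay paired off in second-order chunks
sos = signal.butter(order, fp / (fs / 2.), 'highpass', output='sos')
sos_radius = max(np.max(np.abs(np.roots(sec[3:]))) for sec in sos)

print(ba_radius)   # expected > 1: a pole escaped the unit circle -> unstable
print(sos_radius)  # expected < 1: every section is stable
```

The pole radii tell the whole story: the same mathematical filter is unstable in one representation and stable in the other.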
You can filter multiple times with a less steep attenuation function as a workaround
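That workaround can be sketched as follows (my own illustration, not code from this thread): a single 2nd-order section is numerically safe even at very low cutoffs, and it can simply be applied more than once. Note that two cascaded 2nd-order Butterworth filters do not have the same frequency response as a single 4th-order Butterworth.

```python
import numpy as np
from scipy import signal

fs, fp = 1000., 0.1
rng = np.random.RandomState(0)
x = rng.randn(5000)

# A single 2nd-order section is numerically safe even at a 0.1 Hz cutoff
b, a = signal.butter(2, fp / (fs / 2.), 'highpass')
assert np.max(np.abs(np.roots(a))) < 1  # poles inside the unit circle

# Apply it twice in sequence for steeper attenuation
y = signal.lfilter(b, a, signal.lfilter(b, a, x))
```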
@Eric89GXL thanks for the explanation! very informative. so, are you saying that I should run `sosfilt` twice in sequence on the data?
no, just ask scipy to give the output of `butter` in `sos` instead of `tf` form, and then use `sosfilt` (once) instead of `lfilter`. `butter` and `sosfilt` take care of pairing off poles and zeros, and doing the necessary cascaded filtering.
...do not convert a `b, a` (`tf`) form of filter to `sos`; it will not undo the damage caused by creating the filter in `b, a` form.
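To see why converting after the fact cannot help: `tf2sos` has to recover the poles by rooting the already multiplied-out polynomials, so any damage is carried over. A sketch of my own (the 12th-order, 0.1 Hz parameters are arbitrary choices that make the effect visible):

```python
import numpy as np
from scipy import signal

fs, fp, order = 1000., 0.1, 12  # illustrative parameters

# Designed directly in SOS form: every section's poles stay inside
# the unit circle
sos_direct = signal.butter(order, fp / (fs / 2.), 'highpass', output='sos')
r_direct = max(np.max(np.abs(np.roots(sec[3:]))) for sec in sos_direct)

# Designed in (b, a) form first, then converted: tf2sos must re-root
# the multiplied-out polynomials, so the damaged poles are inherited
b, a = signal.butter(order, fp / (fs / 2.), 'highpass', output='ba')
sos_converted = signal.tf2sos(b, a)
r_converted = max(np.max(np.abs(np.roots(sec[3:]))) for sec in sos_converted)

print(r_direct < 1, r_converted > 1)
```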
cool, that seems straightforward. thanks!!
@teonlamont I'm less clear on this. Could you copy a quick example?
@kingjr this is what I am currently doing for the filter. @Eric89GXL please correct me if I am doing something wrong.
```python
import scipy.signal  # needed explicitly; `import scipy` alone is not enough
import scipy as sp

order = 4    # Butterworth filter order
Fp = 0.1     # high-pass cut-off frequency (Hz)
Fs = 1000.   # sampling frequency (Hz)
nyq = Fs / 2.

# requires scipy >= 0.16 for output='sos' and sosfilt
sos = sp.signal.butter(order, Fp / nyq, btype='highpass', output='sos')
iir = sp.signal.sosfilt(sos, raw_signal)  # raw_signal: your 1-D data array
```
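One quick sanity check on a design like the above (my own suggestion, not from the thread; `sosfreqz` needs a reasonably recent SciPy): a 4th-order filter should yield two second-order sections, with near-zero gain at DC and near-unity gain at Nyquist for a high-pass.

```python
import numpy as np
from scipy import signal

order, Fp, Fs = 4, 0.1, 1000.
sos = signal.butter(order, Fp / (Fs / 2.), 'highpass', output='sos')
assert sos.shape == (2, 6)  # 4 poles -> two second-order sections

# Evaluate the response at DC and at the Nyquist frequency
w, h = signal.sosfreqz(sos, worN=np.array([0.0, np.pi]))
print(np.abs(h))  # ~0 at DC (stopband), ~1 at Nyquist (passband)
```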
Thanks for the quick reply. OK, so you don't use the MNE API then? Do you think it's worth a PR?
currently this requires the dev version of scipy to work. `raw.filter` can do high-pass filtering, but it breaks when the cut-off frequency is below 1 Hz with the current version of scipy.
@Eric89GXL is there a way to do the equivalent of `scipy.signal.filtfilt` with this method you suggested? it requires `b` and `a` as inputs.
Yes, but it needs to be implemented in scipy. It's actually pretty straightforward from an implementation perspective, but the API needs to be settled. If you feel like making a contribution to scipy, I could help you do it.
yeah, that would be great. i'll send you an email
Instead of emailing me, if you really want to make progress, take a look at the scipy pull request where endolith and I implemented second-order sections filtering. You will find the relevant background reading regarding how to move forward with filtering functions that take `sos` input. Then from there, you can open a scipy issue (please tag me) with a proposal. I personally think it's fine to add `scipy.signal.sosfiltfilt` or so to complement `sosfilt`, `lfilter`, and `filtfilt`. But we would have to convince others that the major API overhaul shouldn't prohibit adding that functionality now.
I can also get around to doing this, but it will probably go faster if you do it -- plus then you get to make a scipy contribution, potentially :)
sounds like a Sunday morning hacking session. It will be on my to-do list after I close my issues and PRs ;)
@Eric89GXL I doubt that I will have the time to get this implemented in scipy. it would be lovely to have there. maybe if you have some free time, it might interest you :)
@kingjr and I were discussing the low-frequency cutoff for IIR filters. `sosfilt` has been implemented in the v0.16 release of scipy. Is it possible to implement this for our filtering?
Yeah, someone just needs to take the time to do it
@Eric89GXL do you think this is something that could be done straightforwardly? I'm unable to do so now, but it would be nice to have more low-frequency cutoff options with IIR filtering
I too would welcome this, but am too stupid to implement it myself.
Actually, I realized we have a problem: the equivalent of `sosfiltfilt` does not exist, due to API challenges with `scipy.signal`. But we can probably implement something equivalent ourselves until their API comes around.
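Until such an API lands, a rough stand-in is to run `sosfilt` forward and then again on the time-reversed output. This is my own sketch, not MNE code; it omits the edge padding and initial-condition handling that `filtfilt` uses to suppress startup transients, so expect larger artifacts at the ends of the signal:

```python
import numpy as np
from scipy import signal

def naive_sosfiltfilt(sos, x):
    # Forward pass, then a second pass over the reversed signal; the
    # time reversal cancels the phase delay of the two passes.
    y = signal.sosfilt(sos, x)
    return signal.sosfilt(sos, y[::-1])[::-1]

fs = 1000.
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 5. * t)  # 5 Hz sine, well inside the passband

sos = signal.butter(4, 0.1 / (fs / 2.), 'highpass', output='sos')
y = naive_sosfiltfilt(sos, x)  # x should pass through nearly unchanged
```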
I'll try to take a look soon.
Care to summarize for the ~~lazy~~ busy? :)
It's a discussion between (on one side) Burkhard Maess, Erich Schröger, Andreas Widmann (the guy who wrote the default EEGLAB filter) and Darren Tanner, James Norton, Kara Morgan-Short and Steven J. Luck (the guy who basically wrote The Book on ERPs) about the dangers of filtering and the dangers of baselining. Luck et al. have shown that any high-pass above 0.1 Hz can lead to massive distortions and artefactual effects if you have late ERP components, so we should all baseline instead, while Widmann et al. argue filters are better, particularly if you have pre-stimulus differences (which often happens for language stuff). It's a back and forth.
Either way, high-pass filters above 0.1 Hz are probably terrible.
Could we add this summary and ref somewhere in the tutorials?
I can in principle do that, but I'm not sure we have consensus here. MEG people (but also e.g. the EEGLAB people) very often filter the hell out of their data, whereas others would tell you that's insane if there are any late components or baseline effects in the data (which there usually are if you have any kind of task).
I could try to give a general discussion of what filtering practically does and is, though: pros and cons, and a careful and pessimistic when-to-use-what.
FYI I made a PR to SciPy (https://github.com/scipy/scipy/pull/6274) for `sosfiltfilt`, which is a prerequisite to using SOS filtering in MNE-Python, since we use forward-backward filtering by default.
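Once that lands (available as `scipy.signal.sosfiltfilt` in released SciPy), forward-backward SOS filtering becomes a one-liner. A minimal sketch of my own with illustrative parameters, using DC removal as the visible effect:

```python
import numpy as np
from scipy import signal

fs = 1000.
rng = np.random.RandomState(42)
x = rng.randn(5000) + 2.0  # white noise riding on a DC offset

# 0.1 Hz high-pass in SOS form, applied forward-backward (zero phase)
sos = signal.butter(4, 0.1 / (fs / 2.), 'highpass', output='sos')
y = signal.sosfiltfilt(sos, x)

print(abs(np.mean(x)), abs(np.mean(y)))  # the DC offset is reduced
```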
I am unable to run a high-pass filter lower than 0.3 Hz on my data using the Butterworth IIR filter. It's using the default 4th-order filter but results in the following error.
I'm trying to reproduce some filter settings from a paper, and these are the specs of their filtering processing. Any idea whether this is doable?