Open edgar-bonet opened 6 years ago
Hi Edgar,
Thanks for the note.
This becomes almost trivial if you don't have a changing sampling rate, for instance, see:
Chapter 19 of The Scientist and Engineer's Guide to Digital Signal Processing: http://www.dspguide.com/CH19.PDF
And of course, there is no reason to recompute the delay.
... a better solution is to note the Taylor expansion of exp(x)
exp(x) = 1 + x + ...
If your steps are small, you can just use the expansion instead. I've done this, but you need to be careful in case you call with too small a tau, where the step is no longer small and the first-order term is not enough.
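To make the shortcut concrete, here is a minimal sketch of the first-order approximation with a guard for the small-tau case. The function name and the 0.05 threshold are illustrative choices, not part of the Filters library.

```cpp
#include <cassert>
#include <cmath>

// First-order shortcut for ampFactor = exp(-1/tauSamps).
// When tau spans many samples, x = -1/tauSamps is tiny and
// exp(x) ~ 1 + x is accurate; otherwise fall back to the real exp().
// Name and threshold are illustrative, not from the library.
float ampFactorApprox(float tauSamps) {
    float x = -1.0f / tauSamps;
    if (x > -0.05f)          // |x| small: first-order Taylor term suffices
        return 1.0f + x;
    return std::exp(x);      // step too large: pay for the full exp()
}
```

At x = -0.05 the first-order error is about x²/2 ≈ 0.00125, so the threshold should be chosen to match the accuracy you need.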
In practice, the software was used on an ARM Cortex-M4F, which is a number-cruncher.
PS - sorry for the brief emails, at work, I can look into this more later ...
On Mon, Feb 12, 2018 at 8:44 AM, Edgar Bonet notifications@github.com wrote:
These filters assume the sampling rate is the rate at which the user program calls their input() methods. This is fine if the user just calls filter.input(analogRead(PIN)) in the loop. However, if they manage to have a fixed sampling rate (maybe by using external hardware) and their calls to input() are not quite periodic, then the filters will assume wrong timings. I have witnessed one user being bitten by this issue: https://arduino.stackexchange.com/questions/49738/high-pass-filtering-accelerometer-data-on-arduino
A related issue is the poor performance. I have benchmarked FilterOnePole::input(float) at about 245 µs average time per sample on a 16 MHz AVR-based Arduino. I suspect a significant fraction of this time is spent in computing ampFactor https://github.com/JonHub/Filters/blob/aeb294b4a66cf5dc30d5998cc221115dec95eb80/FilterOnePole.cpp#L36 as
ampFactor = exp( -1.0 / TauSamps );
given that both the floating-point division and the exponential are very CPU-intensive on 8-bit FPU-less micros.
Computing this for every sample is reasonable if the sampling rate is not constant. However, when using a fixed sampling rate, this computation only needs to be performed once, at initialization. I implemented a constant-rate low pass filter similar to FilterOnePole in LOWPASS mode and it takes only 27.8 µs per sample, i.e. it is 8.8 times faster than FilterOnePole.
I suggest putting a note somewhere, maybe in a README file, warning the users that this library is only suited for cases where the sampling rate cannot be kept constant, and that dealing with this uneven sampling comes at a high cost in terms of processing time.
— You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub: https://github.com/JonHub/Filters/issues/4
I just noticed this issue is somewhat bogus. On the Playground page you already mention:
[...] the time step between update can vary, and is computed dynamically by taking the difference between the time of the current input() and the last input() to the filter.
So the user should be able to see that the library is meant for cases where a fixed sampling rate cannot be ensured.
I still think this could be stated a bit more prominently in the documentation. I will submit a pull request for this purpose. If you think the doc is clear enough as it is, feel free to ignore the pull request and close this issue.
Best regards,
Edgar.
Edgar, can you post a link to the filter you implemented that took only 27.8 µs per sample?
@JonHub, I recommend you make two functions (or add an extra parameter to your current function): one for fixed-frequency sampling, where you don't calculate ampFactor every time, and one for variable-frequency sampling, where you do. This would solve the bulk of the efficiency problem and prevent botched filtering where the function is called with frequency jitter even though the actual samples may be taken by hardware or in an ISR and have little or no jitter.
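The suggested split could look something like the sketch below. The class and method names are hypothetical, not the library's actual API, and the variable-rate path takes a precomputed dt/tau argument instead of measuring timestamps, to keep the example self-contained.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical two-path one-pole low-pass filter.
// Names are illustrative, not from the Filters library.
class OnePoleSketch {
public:
    explicit OnePoleSketch(float tauSamps)
        : ampFactor(std::exp(-1.0f / tauSamps)), y(0) {}

    // Fixed-rate path: reuses the ampFactor computed once at construction.
    float inputFixedRate(float x) {
        y = ampFactor * y + (1 - ampFactor) * x;
        return y;
    }

    // Variable-rate path: recomputes the decay for this sample's time step.
    float inputVariableRate(float x, float dtOverTau) {
        float a = std::exp(-dtOverTau);
        y = a * y + (1 - a) * x;
        return y;
    }

private:
    float ampFactor, y;
};
```

With a fixed rate, the per-sample cost is four floating-point operations; the exp() is paid only once, in the constructor.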
@ElectricRCAircraftGuy: As I understand it, the real selling point of this library is its ability to cope with uneven sampling. As JonHub puts it, “This becomes almost trivial if you don't have a changing sampling rate”. You can easily find online tutorials for doing an exponentially weighted moving average, which is what FilterOnePole becomes with a fixed sampling rate. And probably libraries too. It all boils down to
return output += constant * (input - output);
Below is the one I wrote for benchmarking.
Interface:
class LowPassFilter
{
  public:
    LowPassFilter(float reduced_frequency)
        : alpha(1 - exp(-2 * M_PI * reduced_frequency)), y(0) {}
    float operator()(float x);
  private:
    float alpha, y;
};
Implementation:
float LowPassFilter::operator()(float x)
{
    y += alpha * (x - y);
    return y;
}
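For reference, here is one way the class above might be used; the 5 Hz cutoff at a 1 kHz sampling rate is an illustrative choice, not a figure from the benchmark. The class is repeated inline so the snippet stands alone.

```cpp
#include <cassert>
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// LowPassFilter from above, repeated so this snippet compiles on its own.
class LowPassFilter
{
  public:
    LowPassFilter(float reduced_frequency)
        : alpha(1 - std::exp(-2 * M_PI * reduced_frequency)), y(0) {}
    float operator()(float x) { y += alpha * (x - y); return y; }
  private:
    float alpha, y;
};

// Hypothetical setup: 5 Hz cutoff sampled at a fixed 1 kHz
// -> reduced frequency = 5 / 1000 = 0.005.
LowPassFilter lowpass(5.0f / 1000.0f);

// To be called at exactly 1 kHz, e.g. from a timer ISR.
float readFiltered(float raw_sample)
{
    return lowpass(raw_sample);
}
```

With a fixed rate, reduced_frequency is just cutoff frequency divided by sampling rate, and alpha is computed once in the constructor; that is where the roughly 9x speedup over the per-sample exp() comes from.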