Closed tbaumann closed 1 year ago
Thanks for the issue, rather busy now so I will check asap
I find this library really cool. Especially the built-in filtering stuff.
Thanks. Did you already have a look at the HX711_MP library? It resolves non-linearities in the sensor / setup. (The price is a bit of performance and a bit of RAM.)
How about instead setting the size of the sample buffer first (via a class method), so that every time you call read() the value gets put into the buffer? That way you always have access to the average/median of the last N samples when you need it, and the programmer stays in control of the sampling rate.
(thinking out loud) mmm, I understand the concept and it definitely has its value if you sample fast.
The average and median only have meaning, for reducing noise or movement, if the load "does not change". If I put the measurements in a buffer and there is (substantial) time between them, it can become doubtful whether the measurements relate to the same load or not. For some applications this will make sense and be OK; for others it might cause ambiguity, uncertainty, confusion, or all of them ;)
The user can always use my runningMedian or runningAverage library to create a buffer with timing between the measurements they like. Then they can even filter outliers before adding them to those buffers.
So although I understand the value of your idea, there are other ways to implement it, with clear responsibilities per library used. That means I won't add it to this library.
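To illustrate that division of responsibilities, here is a concept sketch in plain C++ (not the actual RunningMedian API, just the idea): the caller decides when to add a sample, and the buffer only answers "what is the median of the last N samples".

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Keep the last N readings in a ring buffer; the sampling rate is
// entirely up to whoever calls add().
class LastNMedian {
    std::vector<double> buf;
    std::size_t cap;
    std::size_t next = 0;
public:
    explicit LastNMedian(std::size_t n) : cap(n) {}
    void add(double v) {
        if (buf.size() < cap) buf.push_back(v);  // still filling up
        else buf[next] = v;                      // overwrite the oldest
        next = (next + 1) % cap;
    }
    double median() const {                      // middle of a sorted copy
        if (buf.empty()) return 0.0;
        std::vector<double> s(buf);
        std::sort(s.begin(), s.end());
        return s[s.size() / 2];
    }
};
```

As far as I can tell, the RunningMedian library follows the same add-then-query pattern, so moving from this sketch to the real library is mostly a rename.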
Nice task you wrote,
// This is used to check how tight the loop is. If the rate rises then the sampling rate is too high. In my tests it never rises.
If the rate does not rise, the sampling rate might be too slow. Why not make it adaptive? Below is the rough idea... (not tested)
void ScaleTaskFunction(void *pvParameters) {
  (void) pvParameters;
  HX711 scale;
  scale.begin(DOUT_PIN, CLK_PIN);
  int freq = 1;                                   // polling period in ms; start extremely fast
  TickType_t xFrequency = pdMS_TO_TICKS(freq);
  TickType_t xLastWakeTime = xTaskGetTickCount();
  scale.tare();
  for (;;) {
    if (scale.is_ready()) {
      last_reading = scale.get_units(1);          // last_reading: global defined elsewhere
      if (freq > 1) freq--;                       // data was ready: poll slightly faster
    } else {
      freq++;                                     // not ready yet: back off slightly
    }
    xFrequency = pdMS_TO_TICKS(freq);             // update the period in both branches
    vTaskDelayUntil(&xLastWakeTime, xFrequency);  // sleeps for the remainder of the period since the last wake-up
  }
}
it might just oscillate!
I shall read into those. I'm not terribly good with this math stuff. I just have the notion that if I sample fast I can reduce the data and improve the signal at the same time. running(Average|Median) seems to be the right choice. Median or average, I have to find out. I guess the noise isn't too crazy, and the average is probably cheapest. You say I could filter outliers and then do averaging? Perhaps you have a hint for me. :)
I may need to play around with it to figure it out.
I will have to work with a reasonably small buffer because I do actually need to react fast to weight changes. I am building a coffee scale that can control a coffee grinder for perfect by-weight dosing. In its simplest form it would just stop the grinder when the actual weight >= target weight. But I'm actually pretty sure that I will have to predict the correct cut-off time based on the current rate of grind and the known 'runover' time of the grinder, as there will be inertia in the motor.
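That prediction step can be sketched in a few lines (hypothetical names and numbers, not from any library): cut power as soon as the weight you would coast to reaches the target.

```cpp
// Predictive cut-off sketch: the grinder keeps delivering for
// runoverTime seconds after power-off, so cut when the predicted
// final weight reaches the target.
bool shouldCutPower(double currentWeight,   // grams on the scale now
                    double grindRate,       // grams/second (sliding-window estimate)
                    double runoverTime,     // seconds of motor coast-down
                    double targetWeight) {  // grams
    double predictedFinal = currentWeight + grindRate * runoverTime;
    return predictedFinal >= targetWeight;
}
```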
Oh absolutely. Thanks for your hints anyway.
Thanks. Cool idea. But probably not worth it. The datasheet says the sample rate is either 10 or 80 Hz (selectable by pin). Reality shouldn't be too far off. And for everything I intend to do with the data down the line it's worth having a stable time base.
What I was thinking of, though (it's not going to be worth the effort for me), is: instead of polling, once a full sample is clocked out, enable an interrupt to wait for the pin to go low, then disable the interrupt, read the next sample, and repeat. The datasheet mentions the DATA pin signals when the data is ready.
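An untested hardware sketch of that interrupt idea on the ESP32 Arduino core (the pin number and globals are my own assumptions). The datasheet's data-ready indication is DOUT going low, hence the falling-edge trigger:

```cpp
#include <Arduino.h>

const int DOUT_PIN = 16;           // assumption: adjust to your wiring
volatile bool dataReady = false;

void IRAM_ATTR onDataReady() {
    dataReady = true;
    // one-shot: stop firing while the sample is being clocked out
    detachInterrupt(digitalPinToInterrupt(DOUT_PIN));
}

void setup() {
    pinMode(DOUT_PIN, INPUT);
    attachInterrupt(digitalPinToInterrupt(DOUT_PIN), onDataReady, FALLING);
}

void loop() {
    if (dataReady) {
        dataReady = false;
        // clock out the sample here (e.g. scale.read()), then re-arm:
        attachInterrupt(digitalPinToInterrupt(DOUT_PIN), onDataReady, FALLING);
    }
}
```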
If you have outliers you should use the median, as it only uses the middle value and none of the others. With the average, outliers are included in equal part, disrupting the value beyond the noise level.
Assume you have ten values around 5 and one outlier of 25: the average would be ~6.8 while the median would still be around 5.
If I can bring some stability into the first two milligram digits I think I'm happy. For coffee it's usually nice to have a very responsive scale.
I will run some tests.
The application is great. I would use a single read to be able to react quickly, use a 'countdown', and if possible slow the filling speed over the last 10% of the time.
There might be a correction factor needed: you may need to stop filling at 98%, as the inertia of the system etc. will cause the last 2% to be "under way".
How much does one bean weigh? Just curious
Between 0.1g and 1g roughly.
Espresso grinders are relatively slow. There are a bunch of commercial gravimetric scales out there, but I'd like to do something universal. The fact that people usually get decent results with just timer-based grinders makes me hopeful that they dispense at a quite predictable rate. I hope I can keep up with it. I can always reduce the filtering or get rid of it. Perhaps I will only use it for the display output in the end, to keep the flickering low.
Hi,
I find this library really cool. Especially the built-in filtering stuff.
I'm polling my scale right now in such a loop. It creates an exact polling timer in sync with the scale's sampling frequency.
vTaskDelayUntil
takes care of that. Platform is ESP32. Simplified code:
This works quite nicely. I achieve maximum sampling frequency and minimal overhead. (I'm not actually sure yet if I need that high a polling rate, but it's nice to have.)
All further use of the data doesn't happen at that high sampling rate. My idea is that I can use the superfluous samples for smoothing.
I later want to create a graph (with much lower resolution) and calculate the rate of change over a sliding time window to make predictions.
So to my actual question. You have all those read functions that read multiple values, but they all work by reading the next N samples, polling the scale at the maximum possible frequency.
I see you have RunningMedian and Median modules. That's really cool. I suppose I can just use that: push in new samples at the desired rate and access the median when I need it. But I was wondering if I might be able to motivate you to add similar functionality to the scale module. Right now you call the built-in statistics to read the next N samples at the maximum possible rate and then return the calculated result. How about instead setting the size of the sample buffer first (via a class method), so that every time you call read() the value gets put into the buffer? That way you always have access to the average/median of the last N samples when you need it, and the programmer stays in control of the sampling rate.