As it's a low-cost solution / doesn't have to be perfect, we might have to accept this as an inherent limitation. However, we should at least look into minimising input data dropouts.
Todo: Create a usage guide explaining what is shown (TH: https://community.digitalshoestring.net/power-monitoring-dashboards)
Anand suggests that this is due to Modbus losing sync - it may be more reliable with a reduced sample rate.
Customers are now explicitly asking this exact question - they want to set the dashboard to Last 30 Days and read off the total kWh consumed, but are wondering whether the dropouts will ruin that value. As seen above, the answer is that they are a big problem, and customers would currently be better off estimating the machine's runtime and multiplying it by its typical draw.
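(Illustrative numbers only, not a real customer's figures: a machine running ~8 h/day at a typical draw of ~2 kW would come to roughly 2 kW × 8 h/day × 30 days ≈ 480 kWh for the period.)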
Tried turning the sample period up from 1s to 5s (see #41); early results show no change (still 2.5 mins up, 5 mins down in a severe case).
It's accepted that sometimes our solutions will lose data connection for a few seconds.
In most Grafana dashboards, we either add `|> aggregateWindow(every: limited_window, fn: mean, createEmpty: false)` to the Flux query or, for more control, set `Connect Null Values` to `Threshold: <1m` in the panel options to smooth over the dips. However, neither of these affects the analysis SM. Such data dips present a problem for the integrations the analysis SM performs, as illustrated below.
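For reference, a minimal sketch of the kind of dashboard query being described - the bucket, measurement, and field names here are placeholders, not our actual schema:

```flux
// Hypothetical sketch only - bucket/measurement/field names are placeholders.
// In the real dashboards, limited_window is supplied as a dashboard variable.
limited_window = 10s

from(bucket: "power_monitoring")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)  // Grafana time range
  |> filter(fn: (r) => r._measurement == "power" and r._field == "active_power")
  // Average into fixed windows; createEmpty: false drops windows with no data,
  // so gaps show as missing points on the panel rather than zeros.
  |> aggregateWindow(every: limited_window, fn: mean, createEmpty: false)
```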
In the picture below, the top graph uses a Flux query with `createEmpty: true` and `Connect Null Values: Never`. The middle graph instead has `Connect Null Values` set to `Threshold: <1m`. On the lower graph (an integral of the middle, performed by the analysis SM), we see dips at the same time as the data breaks in the top graph.
The test load presents a constant impedance, so the energy reported between time buckets should have minimal variance.
~Perhaps some kind of smoothing needs to be applied in the data collection layer? Or does the analysis SM need its own smoothing / point-joining settings?~
I like the mentality of honest data reporting that Greg has championed (if there's a gap in the data, present exactly that rather than interpolating over it). However, what the integration is doing is worse than smoothing over gaps - it's effectively creating data with a power value of 0, for which there is even less evidence than for the last known value. (Since there is no raw data to suggest that the energy use is varying significantly, I propose the analysis system should not return values suggesting this.)
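To make that concrete, here is an illustrative Flux sketch of the behaviour I'm proposing - hold the last known value across gaps, then integrate. The analysis SM does its own integration outside Flux, so this is only an analogy, and the bucket/field names are placeholders:

```flux
// Hypothetical sketch - the analysis SM's actual integration is not Flux;
// this just illustrates "hold last known value, then integrate".
from(bucket: "power_monitoring")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "power" and r._field == "active_power")
  // Keep empty windows so gaps become explicit nulls...
  |> aggregateWindow(every: 10s, fn: mean, createEmpty: true)
  // ...then carry the last known power value across them instead of letting
  // the integral treat the gap as zero power.
  |> fill(column: "_value", usePrevious: true)
  // Integrate power (W) over time: unit: 1s gives watt-seconds (joules)...
  |> integral(unit: 1s)
  // ...and convert to kWh for the dashboard total.
  |> map(fn: (r) => ({r with _value: r._value / 3600000.0}))
```

Whether this fill-forward happens in the data collection layer, in the analysis SM, or only when reporting totals is open for discussion; the point is simply that a missing sample should not be integrated as 0 W.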