Pyrrha-Platform / Pyrrha-Dashboard

This repository contains the in-progress, next-generation dashboard for the Pyrrha solution (created by Prometeo), built on the Carbon Design System and React.
Apache License 2.0

Update the chart to match minute-by-minute history, not just latest readings #62

Open krook opened 3 years ago

krook commented 3 years ago

Right now the chart uses LIMIT to pull the last 10, 30, 60, 240, or 480 readings. These may or may not correspond to consecutive minutes.

JSegrave-IBM commented 3 years ago

@krook - a suggestion... Since every chart shows the 'live' values over time, how about we just make all the charts show 8 hours (or whatever is the longest configured window). Then when you look at a chart, you always get the same experience - "this chart shows me how X has behaved over the last 8 hours" - doesn't matter whether X is the live sensor readings, or the 4 hour average - they all change minute-to-minute.

JSegrave-IBM commented 3 years ago

then the SQL is always the same: `WHERE timestamp_mins >= (UTC now - longest window in hours)`
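The fixed-window approach above can be sketched as a small query builder. This is a hypothetical sketch: the table name `firefighter_status` and the `%s` parameter style are assumptions, not the repository's actual schema or driver.

```python
from datetime import datetime, timedelta, timezone

def build_window_query(hours: int = 8) -> tuple:
    """Build a parameterized query fetching every reading in the last
    `hours` hours, oldest first, so the chart plots a true
    minute-by-minute history rather than the last N rows."""
    # Note the direction: >= keeps readings NEWER than the cutoff.
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    sql = (
        "SELECT * FROM firefighter_status "   # table name is an assumption
        "WHERE timestamp_mins >= %s "
        "ORDER BY timestamp_mins ASC"
    )
    return sql, (cutoff,)
```

Because the window length is the only parameter, every chart (live values, 4-hour average, etc.) can share this one query shape.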

krook commented 3 years ago

Ah, I see what you mean. That works great for a device that has readings within the last 8 hours, but it won't show anything if the device hasn't reported in that timeframe. For that case, it might be helpful to fall back to a fixed number of the most recent readings.
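The fallback idea could look like the sketch below: try the time-window query first, and only if it returns nothing, use a LIMIT-N query for the device's latest readings. The SQL strings, table, and column names are illustrative assumptions.

```python
# Illustrative query shapes (table/column names are assumptions):
WINDOW_SQL = (
    "SELECT timestamp_mins, value FROM firefighter_status "
    "WHERE device_id = %s AND timestamp_mins >= %s "
    "ORDER BY timestamp_mins ASC"
)
FALLBACK_SQL = (
    "SELECT timestamp_mins, value FROM firefighter_status "
    "WHERE device_id = %s ORDER BY timestamp_mins DESC LIMIT %s"
)

def choose_chart_rows(window_rows, fetch_recent):
    """Use the time-window rows when the device reported recently;
    otherwise fall back to its last fixed number of readings.
    `fetch_recent` runs the LIMIT query and returns rows newest-first."""
    if window_rows:
        return window_rows
    # Fallback rows arrive newest-first; reverse to chronological order
    # so the chart still reads left-to-right in time.
    return list(reversed(fetch_recent()))
```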

krook commented 3 years ago

[screenshots attached]

krook commented 3 years ago

[screenshots attached]

krook commented 3 years ago

OK, I added switching logic based on whether the device is active.

krook commented 3 years ago

@albertum1 this is related to the issue we've been discussing.

albertum1 commented 3 years ago

Hello Team,

I tested with the Sensor Simulator and Rules Decision to see how the data would be output in some edge-case scenarios. I set the sensor simulator to output a fixed number every 10 minutes (sometimes): every 10 minutes, a random number is generated, and if the number is even, a fixed reading (e.g., a CO level of 100) is sent to the database. I noticed that the averaging might be a little off:

  1. The 10-minute average goes up without new readings.
  2. After the device has reset, about 40 minutes later, the 10-minute average shoots back up to 100.

Figure 1: [screenshot] Figure 2: [screenshot]

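The simulator behaviour described above (a fixed reading sent only when a random draw is even, leaving gaps the rest of the time) can be sketched as one tick function. The function name and the draw range are illustrative, not the Sensor Simulator's actual code.

```python
import random

def maybe_emit_reading(fixed_value: float = 100.0):
    """One simulator tick, run every 10 minutes: draw a random integer
    and emit the fixed CO reading only when the draw is even. Roughly
    half of the ticks therefore leave a gap (no row in the database),
    which is what exercises the TWA edge cases discussed below."""
    draw = random.randint(0, 99)
    if draw % 2 == 0:
        return fixed_value  # reading sent to the database
    return None             # no reading this tick
```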
JSegrave-IBM commented 3 years ago

Hi @albertum1

**Looking at the 10 min TWA from 12:20-12:29.** When a TWA is starting up, it is pro-rated (e.g. a 10 min TWA when the system has been running for less than 10 mins). So if the system started at 12:20, that pro-rating likely accounts for the 10 min TWA behaviour from 12:20-12:29. If not - if the system was already running before 12:20 - then you'd need to look at the data from at least 12:10 onward to decide whether the 10 min TWAs observed from 12:20-12:29 are correct or faulty.

**Looking at the 10 min TWA from 12:30-12:32 and the 30 min TWA from 12:50-12:59.** A TWA can't be calculated when no information is available within that TWA's time window. This accounts for the 10 min TWA behaviour from 12:30-12:32: for each of those minutes, the sensor status for the previous 10 minutes is unknown (NULL). Likewise for the 30 min TWA from 12:50-12:59 (the previous 30 minutes are all NULL).

**Looking at the 10 min TWA from 13:00-13:07.** Once a TWA is in steady state (e.g. a 10 min TWA that has been running for more than 10 mins), it averages only what is known; it does not average anything unknown/NULL (e.g. it does not treat NULL as 0 ppm). Between 13:00-13:07, the only known information is "100 ppm was seen once"; after that, the sensor provides no further data. So within that 10 min window, the 10 min TWA will report 100 ppm - it has no information to do anything else. (In contrast, imagine the sensor reported 0 ppm for each of the next 9 mins - then you'd see the 10 min TWA decrease, as the strength of that single 100 ppm observation is weakened by multiple observations of 0 ppm.)
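The averaging rules described above (skip unknown minutes, never treat NULL as 0 ppm, pro-rate during startup) can be captured in a few lines. This is a simplified sketch of the behaviour being explained, not the actual Pyrrha TWA implementation, which may weight samples differently.

```python
def twa(readings, window: int = 10):
    """Trailing time-weighted average over the last `window` minutes.
    `readings` holds one entry per minute, with None marking minutes
    where the sensor status is unknown. Only known minutes are averaged:
    None is never counted as 0 ppm, and with fewer than `window` minutes
    of history the average is effectively pro-rated over what exists."""
    tail = readings[-window:]
    known = [r for r in tail if r is not None]
    if not known:
        return None  # no information in the window: TWA undefined
    return sum(known) / len(known)
```

This reproduces the cases above: a lone 100 ppm spike followed by nine unknown minutes still averages to 100 ppm, a window of all-unknown minutes yields no TWA at all, and explicit 0 ppm readings (unlike gaps) do dilute the spike.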

I hope that helps and makes sense! If you're satisfied these explanations are correct and match what you're seeing, perhaps your data above would make a useful addition to the unit tests? That would let the regression tests check for consistent behaviour across code changes, and also answer similar questions from future maintainers.