ryanw-mobile / OctoMeter

🔥Kotlin Multiplatform Desktop/Android/iOS Energy Tracker app

Account screen: Provide option to cache/clear ALL meter readings and rates in DB #217

Open ryanw-mobile opened 3 months ago

ryanw-mobile commented 3 months ago

Dependencies:

#23 - We need to have the RoomDB ready.

#79 - The repository needs to be able to resume lazy loading.

Without all the half-hourly meter readings and their corresponding rates stored locally, it is currently impossible to calculate an estimated cost in any presentation mode other than daily/half-hourly.
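To illustrate why both halves of the data must be cached, here is a minimal Kotlin sketch (the `Consumption` and `Rate` shapes are hypothetical stand-ins, not the app's actual models): the estimate can only cover slots where a reading has a matching cached rate, so gaps in either set silently shrink the total.

```kotlin
import java.time.Instant

// Hypothetical shapes for illustration only.
data class Consumption(val intervalStart: Instant, val kWh: Double)
data class Rate(val intervalStart: Instant, val pencePerKWh: Double)

// Estimated cost in pence: sum of consumption × matching half-hourly rate.
// Slots with no cached rate contribute nothing, which is why a partial
// cache makes the estimate unreliable.
fun estimatedCostPence(readings: List<Consumption>, rates: List<Rate>): Double {
    val rateBySlot = rates.associateBy { it.intervalStart }
    return readings.sumOf { r ->
        rateBySlot[r.intervalStart]?.let { it.pencePerKWh * r.kWh } ?: 0.0
    }
}
```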

If we have enough cached consumption data, the Agile screen could aggregate the average consumption per half-hour slot, showing how much we generally use against the upcoming Agile rates. Assuming our habits stay unchanged, we could then look ahead and see how much we would save, or how much extra we would have to pay.
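A rough sketch of that aggregation, again with a hypothetical `Consumption` model and UTC-based slots (the real app would presumably bucket in the tariff's local time): readings are grouped into the 48 half-hour slots of the day and averaged, so the result can be lined up against the published Agile rates.

```kotlin
import java.time.Instant
import java.time.ZoneOffset

// Hypothetical shape for illustration only.
data class Consumption(val intervalStart: Instant, val kWh: Double)

// Average consumption per half-hour slot (index 0..47), so it can be
// compared side by side with the upcoming Agile unit rates.
fun averageBySlot(readings: List<Consumption>): Map<Int, Double> =
    readings
        .groupBy { r ->
            val t = r.intervalStart.atZone(ZoneOffset.UTC)
            t.hour * 2 + t.minute / 30
        }
        .mapValues { (_, rs) -> rs.sumOf { it.kWh } / rs.size }
```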

When returning to demo mode, we force the DB to clear.

ryanw-mobile commented 3 months ago

Some simple calculations:

Assume a user has joined the Agile tariff for one year.

Number of meter readings a day = 24 × 2 = 48
Number of meter readings a year = 48 × 365.25 = 17,532

The API returns at most 100 records per page, so we need to repeat the API call 176 times (⌈17,532 / 100⌉).

That's the same for Agile unit rates.

So for one year's data we have to fire 352 API calls, and that is before considering users with more than one year's history.
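The arithmetic above can be sketched directly (the 100-records-per-page assumption is taken from the comment, not from the API docs):

```kotlin
import kotlin.math.ceil

// Rough sizing of a one-year backfill, assuming 100 records per API page.
const val SLOTS_PER_DAY = 24 * 2 // 48 half-hourly readings
const val PAGE_SIZE = 100

// Number of paged API calls needed to cover the given number of days.
fun pagesFor(days: Double): Int = ceil(days * SLOTS_PER_DAY / PAGE_SIZE).toInt()

fun main() {
    val pages = pagesFor(365.25)
    // One set of calls for consumption, another for Agile unit rates.
    println("$pages calls each for readings and rates, ${2 * pages} in total")
}
```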

We have a concern that Octopus might block us if we fire this many calls consecutively without delay. On the other hand, this operation is not going to finish in a few seconds, so the download process needs to be visible to the user, which means:

Optionally, if we want to make things a bit more complex, we can inspect what data we already have in our database, so that we do not request what we already hold.

If we don't split the API requests, we need 176 calls. If we instead split them into requests of one or two days at a time, we can run a SQL query to check whether we already have a complete set of data covering that range, and keep checking and requesting only what we don't have.

This way the number of API calls will be higher, BUT it is more fault-tolerant: if the transmission is interrupted, we can skip the portions already downloaded and do not have to reload everything again.
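A minimal sketch of that day-by-day strategy, with hypothetical `LocalCache` and `RemoteApi` interfaces standing in for the real repository: each day is skipped if the cache already holds a complete set for it, and the days-processed count doubles as user-visible progress.

```kotlin
import java.time.LocalDate

// Hypothetical interfaces; the app's actual repository API may differ.
interface LocalCache {
    fun hasCompleteDay(day: LocalDate): Boolean // e.g. SQL COUNT == 48 for that day
    fun storeDay(day: LocalDate)
}
interface RemoteApi {
    fun fetchDay(day: LocalDate) // one small request per day
}

// Walk the requested range one day at a time, skipping cached days.
// More API calls than bulk paging, but an interrupted run resumes cheaply.
fun backfill(
    from: LocalDate,
    to: LocalDate,
    cache: LocalCache,
    api: RemoteApi,
    onProgress: (done: Int, total: Int) -> Unit,
) {
    val days = generateSequence(from) { it.plusDays(1) }
        .takeWhile { !it.isAfter(to) }
        .toList()
    days.forEachIndexed { i, day ->
        if (!cache.hasCompleteDay(day)) {
            api.fetchDay(day)
            cache.storeDay(day)
        }
        onProgress(i + 1, days.size)
    }
}
```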

This option seems to make more sense, as the repository then doesn't need to return the number of records for progress tracking - the use case can instead report the number of days processed as the progress.