odannyc closed this issue 1 year ago
The ability to ingest up to seven days of one-minute data, the maximum retained by Emporia, is available in this PR: https://github.com/jertel/vuegraf/pull/88
If this PR is accepted, I would be happy to create another PR for ingestion of historical one-hour data. Emporia currently retains one-hour data for the lifetime of the device.
This is now implemented thanks to PR #88. However, it's not automated: it requires a manually set configuration parameter, and care must be taken to disable or remove that parameter after the import completes so that historic data is not re-imported and overlapped with real-time collected data. See the README for more information. I'll leave the issue open for now for additional comments or suggestions. As the above post mentions, it currently only imports the most recent 7 days of data; more work will be needed for longer-term imports.
Looks like the reason this is broken for some users (the 400 bad request response) is that Emporia is limiting the window in which you can query. 24h chunks seem to exceed that limit, which appears to be around 11h or so. The code should be modified to grab something like 12h chunks (14 requests if you want 7 days).
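The 12-hour chunking suggested above could be sketched roughly as follows. This is an illustration only, not Vuegraf's actual code; the name chunk_range and the hard-coded dates are made up for the example:

```python
# Split a 7-day window into 12-hour chunks so each Emporia query stays
# under the observed ~11h per-request window limit.
from datetime import datetime, timedelta

def chunk_range(start, end, hours=12):
    """Yield (chunk_start, chunk_end) pairs covering [start, end)."""
    step = timedelta(hours=hours)
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        yield (cur, nxt)
        cur = nxt

end = datetime(2023, 1, 25)
start = end - timedelta(days=7)
chunks = list(chunk_range(start, end))
# 7 days at 12h per chunk -> 14 requests
```

Each (chunk_start, chunk_end) pair would then be passed to the Emporia API call in place of the single 24h window.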
Just setting this up for the first time. Running Influx v1 and getting the following error when trying to import historical data on first run ...
2023-01-25 20:03:14.722862 | INFO | Submitting datapoints to database; account="Primary Residence"; points=256869
2023-01-25 20:03:27.397515 | ERROR | Failed to record new usage data: (<class 'influxdb.exceptions.InfluxDBClientError'>, InfluxDBClientError('413: {"error":"Request Entity Too Large"}'), <traceback object at 0x7ff76fea3c80>)
Traceback (most recent call last):
File "/opt/vuegraf/vuegraf.py", line 284, in
InfluxDB is rejecting the input, saying it's too much data. If you're hosting your own InfluxDB, you could look into adjusting the max request size. If you're using a cloud-hosted InfluxDB, you will need to either reduce the amount of history you import or modify Vuegraf to split the write into multiple, smaller writes.
Thanks. I'll look into max request size; it's a local InfluxDB.
Just created a merge request that fixes this problem by passing "batch_size=5000" to the influx.write_points(usageDataPoints, ...) call. It seems to only be a problem with v1 and not v2.
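To illustrate what batching accomplishes here, this hypothetical helper (the name batched is made up; the influxdb v1 client does this internally when you pass batch_size to write_points) splits one oversized write into request-sized slices:

```python
# Splitting a large point list into batches of at most 5000 avoids the
# 413 "Request Entity Too Large" response from InfluxDB v1.
def batched(points, batch_size=5000):
    """Yield successive lists of at most batch_size points."""
    for i in range(0, len(points), batch_size):
        yield points[i:i + batch_size]

# Using the point count from the error log above as an example:
batches = list(batched(list(range(256869)), 5000))
# 51 full batches of 5000 plus one remainder batch of 1869 points
```

With the client's built-in support, the equivalent one-line change is simply influx.write_points(usageDataPoints, batch_size=5000), which issues multiple smaller HTTP requests instead of one large one.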
@jertel, if you're not ready for all the items on the merge request, consider adding this option to the current code base on its own.
As a user of vuegraf, I want to have the option to pull all historical data to be able to analyze past usage.