bb-Ricardo / fritzinfluxdb

Writes data from fritzbox to influxdb
MIT License

Grafana: "InfluxDB Error: retention policy not found: autogen" #54

Closed. deltaphi closed this issue 1 year ago

deltaphi commented 2 years ago

I set up fritzinfluxdb, InfluxDB 2.4 and Grafana (latest tag at the time of writing) using Docker Compose. After getting another problem fixed (see #51) it now appears that fritzinfluxdb is actually filling my InfluxDB - at least, I can poke at it using the query builder and actually get some values back. However, my Grafana dashboards remain empty.

I made sure to add the InfluxQL source in Grafana with the Authorization Token and imported the dashboards from https://github.com/bb-Ricardo/fritzinfluxdb/tree/1fd5f80e195d9249e1bb9ca394b123c9b9252afa/grafana. Unfortunately, the dashboards remain empty:

[screenshot of the empty dashboard showing the error message]

Note the error message about the AnnotationQueryRunner - maybe this is the cause of the issue? I tried googling around, but could not find anything about this error that got me any further with InfluxDB 2.4.

This issue appears to be similar in symptoms to #25 and #30. In #25 the cause was a Cable Fritz!Box - which is not the case for me, I'm on DSL. In #30 the solution was to use an updated Dashboard JSON, which I believe I am already doing.

What can I do to make Grafana display the data that is clearly there in InfluxDB?

bb-Ricardo commented 2 years ago

and testing the datasource showed up green?

deltaphi commented 2 years ago

and testing the datasource showed up green?

Yes, it did.

bb-Ricardo commented 2 years ago

Mmh, and you selected this datasource on import? Does the same happen with the logs dashboard?

deltaphi commented 2 years ago

Correct in both cases.

TheDeepSpacer commented 2 years ago

In order to use InfluxQL with a v2 DB you need to create a DBRP mapping for the default retention policy with a command:

influx v1 dbrp create --bucket-id <bucket-id> --db <bucket-name> --rp autogen --org <orgname> --token <token>

bb-Ricardo commented 2 years ago

All of this should be done by the script if you use an admin API token for the first run. The script should also tell you what it is doing during this first setup.

After this is done you should definitely switch to a write-only token for that particular bucket.
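
For illustration, a write-only token scoped to the fritzbox bucket could be created with the influx CLI roughly like this (org name, bucket ID and admin token are placeholders):

docker exec -it influxdb influx auth create --org <orgname> --write-bucket <bucket-id> --description "fritzinfluxdb write" --token <admin-token>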

Did you use an admin token on first setup?

deltaphi commented 2 years ago

I did not use the admin token. The documentation just states to use an admin token to set up InfluxDB, but does not tell you which container to use the token on (the InfluxDB one or the fritzinfluxdb one?) or where to get it. I just tried to wipe everything and set up the containers from scratch. However, when I start my docker compose setup, InfluxDB first requires me to log in through the web interface and create a user account plus an initial database before I can even get hold of the admin token. I don't know if this is already wrong - I just went through all the steps again and arrived at the same error.
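
As a side note, the interactive first-run setup in the web interface can usually be skipped with the official influxdb:2.x image by passing its documented init variables; a minimal sketch with placeholder values (user, password, org and token are assumptions, not taken from this setup):

docker run -d --name influxdb \
  -e DOCKER_INFLUXDB_INIT_MODE=setup \
  -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
  -e DOCKER_INFLUXDB_INIT_PASSWORD=<password> \
  -e DOCKER_INFLUXDB_INIT_ORG=<orgname> \
  -e DOCKER_INFLUXDB_INIT_BUCKET=fritzbox \
  -e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=<admin-token> \
  influxdb:2.4

The same variables can go into the environment: section of the compose file, which avoids having to click through the web interface before the admin token exists.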

deltaphi commented 2 years ago

I ran another test:

Unfortunately, I still see the same error in Grafana.

deltaphi commented 2 years ago

In order to use InfluxQL with a v2 DB you need to create a DBRP mapping for the default retention policy with a command:

influx v1 dbrp create --bucket-id <bucket-id> --db <bucket-name> --rp autogen --org <orgname> --token <token>

Using this command manually in the influxdb container made the data appear in Grafana.

For future reference, the full command is:

influx v1 dbrp create --bucket-id <bucket-id> --db <bucket-name> --rp autogen --org <orgname> --token <token>

You can find <bucket-id> and <bucket-name> in the web interface: go to "Load Data" -> "Buckets" and look for the "fritzbox" bucket.
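
Alternatively, the same values can be read from the CLI inside the influxdb container (org name and token are placeholders):

docker exec -it influxdb influx bucket list --org <orgname> --token <token>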

bb-Ricardo commented 2 years ago

Thank you for the effort, I have to check this and add it to the description.

bb-Ricardo commented 2 years ago

found this here: https://community.influxdata.com/t/influxdb-error-retention-policy-not-found-autogen/26352/6

bb-Ricardo commented 2 years ago

Just pushed a change to next-release to fix this issue, can you try it out? A new container image with the next-release tag has also been pushed.

bb-Ricardo commented 2 years ago

any updates?

deltaphi commented 2 years ago

I have not yet had the heart to raze my entire setup and try out the new setup procedure. However, I just noticed something new:

After stopping the containers and restarting them a few hours later (trying to diagnose an entirely different problem), fritzinfluxdb cannot write to influxdb anymore. Here is the error message from fritzinfluxdb:

2022-10-05T18:05:30.870423104Z ERROR: Problem creating InfluxDB DBRP data: (422)
2022-10-05T18:05:30.870443124Z Reason: Unprocessable Entity
2022-10-05T18:05:30.870447567Z HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json; charset=utf-8', 'X-Influxdb-Build': 'OSS', 'X-Influxdb-Version': 'v2.4.0', 'X-Platform-Error-Code': 'conflict', 'Date': 'Wed, 05 Oct 2022 18:05:30 GMT', 'Content-Length': '94'})
2022-10-05T18:05:30.870452919Z HTTP response body: {
2022-10-05T18:05:30.870456491Z  "code": "conflict",
2022-10-05T18:05:30.870460482Z  "message": "another DBRP mapping with same orgID, db, and rp exists"
2022-10-05T18:05:30.870464322Z }

I also inspected the DBRP mappings of InfluxDB:

$  docker exec -it influxdb influx v1 dbrp list
ID                      Database        Bucket ID               Retention Policy        Default Organization ID
0a06d31d4ba0d000        fritzbox        713b58dbc32d51ea        autogen                 true    05ba4462b21e4a23
0a1656a122b0b000        fritzbox        713b58dbc32d51ea        1year                   false   05ba4462b21e4a23

VIRTUAL DBRP MAPPINGS (READ-ONLY)
----------------------------------
ID                      Database        Bucket ID               Retention Policy        Default Organization ID
40ec633b9d624bda        _monitoring     40ec633b9d624bda        autogen                 true    05ba4462b21e4a23
49c7c5db57151154        _tasks          49c7c5db57151154        autogen                 true    05ba4462b21e4a23
713b58dbc32d51ea        fritzbox        713b58dbc32d51ea        autogen                 false   05ba4462b21e4a23
b61d2d825ec2ec2a        myFirstBucket   b61d2d825ec2ec2a        autogen                 true    05ba4462b21e4a23

As you can see, there are two mappings for the fritzbox bucket. If I manually remove the mapping 0a1656a122b0b000 and restart all containers, fritzinfluxdb recreates this mapping and can then write data successfully. However, when restarting the containers again, fritzinfluxdb fails to write data to influxdb until I remove the mapping.

For reference, how to remove the mapping: docker exec -it influxdb influx v1 dbrp delete --id 0a1656a122b0b000.

Is this problem related to your change, or is this a different issue? I'm currently on 67d6aaa, but I only updated to this commit a few minutes ago while trying to diagnose this problem. I believe that I was on 15aff85 before, but I am not absolutely certain.

bb-Ricardo commented 2 years ago

Thank you for all the testing, and sorry for all the back and forth. It seems to be difficult to get this right properly. I will have a look; I should run into the same problem when starting the container again.

bb-Ricardo commented 2 years ago

Just pushed a new commit to next-release which should fix this issue.

deltaphi commented 2 years ago

So at least stopping and restarting the container now works. I still haven't gotten around to tearing my whole setup down and rebuilding it.

bb-Ricardo commented 2 years ago

Great, at least this seems fixed.

Why do you need to recreate your setup again just to test the retention policy issue?

Just define a second bucket and try it out with a second docker instance. You can leave the current one running. Then define another datasource, import the dashboard and select the new datasource.
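
As a rough sketch of that test, assuming a test bucket named fritzbox-test (all values below are placeholders, not names from this thread):

# create a second bucket with infinite retention for the test instance to write into
docker exec -it influxdb influx bucket create --name fritzbox-test --org <orgname> --retention 0 --token <admin-token>

The second fritzinfluxdb container would then be configured to write into this bucket, and the additional Grafana datasource would point at it.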

deltaphi commented 2 years ago

I did make an attempt at setting up a new InfluxDB bucket without tearing everything down. This is complicated by the fact that I have InfluxDB and Grafana in the same compose file as fritzinfluxdb.

I first tried to just add a second container running fritzinfluxdb. However, both containers reported that the Fritz!Box would not let them log on. I had to comment out my original container and run only the test container (differing only by the influxdb bucket the data goes to) - then the Fritz!Box accepted the credentials. I can now see the new bucket in influxdb with data coming into it.

Next, I tried to set up Grafana. I added a second datasource pointing to the other bucket (side note: make sure to set up another read token for Grafana if your original read token does not have universal access!). I imported the Fritz!Box Status dashboard into Grafana a second time and pointed it at the new datasource. I also made sure to change the bucket name in the dashboard configuration. However, the dashboard remains empty ("No Data" everywhere), now with no error message whatsoever - I'm not sure whether this is a good thing or a bad thing.
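
For reference, a bucket-scoped read token like the one mentioned above could be created roughly like this (all values are placeholders):

docker exec -it influxdb influx auth create --org <orgname> --read-bucket <test-bucket-id> --description "grafana read" --token <admin-token>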

What stuck out to me was that I also had to change the UID of the dashboard. I'm too much of a novice in InfluxDB and Grafana to know what this does or whether this may be the cause of the "No Data".

EDIT: Another test, to try and rule out the UID as a problem:

bb-Ricardo commented 2 years ago

Hi,

The UID is mostly for Grafana-internal use. Changing it on import is totally fine.

Did you try to import the log dashboard? Did this one show any data?

deltaphi commented 2 years ago

Did you try to import the log dashboard? Did this one show any data?

I did just now - it also shows "no data" with no error message being shown.

bb-Ricardo commented 1 year ago

Hi,

maybe you want to try the new all-Flux dashboards here, they might solve your issue:

https://github.com/bb-Ricardo/fritzinfluxdb/blob/next-release/README.md#grafana

deltaphi commented 1 year ago

Unfortunately last week my server blew up with a defective SSD. I’m currently setting up the server from scratch. This has thrown quite the wrench into all of my testing.

bb-Ricardo commented 1 year ago

Ouch. This is not good. Hope you can recover from backups.

bb-Ricardo commented 1 year ago

I'm just closing this issue as I assume it is fixed. I released 1.1.0 today; if this issue still occurs, feel free to reopen.

thank you