At least the Storagebox API currently has a rate limit, which returns 403 Forbidden once the limit is hit, leading to null values during metric scrapes. The documentation of hetzner_exporter suggests that a scrape_limit of 1m should be set, which normally doesn't hit the API rate limit. However, you can't always control the number of requests to the metrics endpoint: for example, in an HA scenario, two or more Prometheus instances may scrape it simultaneously.
What is your opinion about caching either the API responses or the calculated metrics for a specific amount of time (maybe even configurable via an environment variable)? I know this complicates the code a bit, but it should improve usability. As I understand it, at least some metrics, like data usage for storage boxes, don't even update that often (the homepage suggests this metric is recalculated once every 5 minutes, at least in the web UI).
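To illustrate the idea, here is a minimal sketch of a TTL cache in front of the API call, with the TTL read from an environment variable. All names here (`HETZNER_CACHE_TTL`, `fetch_storagebox_metrics`) are hypothetical, not actual exporter code:

```python
import os
import time

# Assumed env var name; defaults to 5 min, matching the web UI update interval.
CACHE_TTL = int(os.environ.get("HETZNER_CACHE_TTL", "300"))

_cache = {"value": None, "fetched_at": 0.0}

def fetch_storagebox_metrics():
    # Placeholder for the real API call that may return 403 when rate-limited.
    return {"disk_usage_bytes": 123456789}

def cached_metrics(now=None):
    # Serve the cached response while it is fresh; refetch only after the TTL.
    now = time.time() if now is None else now
    if _cache["value"] is None or now - _cache["fetched_at"] > CACHE_TTL:
        _cache["value"] = fetch_storagebox_metrics()
        _cache["fetched_at"] = now
    return _cache["value"]
```

With this, any number of scrapers hitting the metrics endpoint within one TTL window would cost at most one API request, at the price of metrics being up to TTL seconds stale.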