Hello! Thanks for your work on this project. It's super useful getting this data into Prometheus.
I've started using the v6.0.0-alpha.1 release since it makes it easier to get 429 response rates. Overall the release is working great.
The problem I'm having is that the `MissDurationSeconds` histogram only has three buckets for durations greater than 1 second (2.5, 5, and 10). In v5.0.0, there were double the number of buckets for durations greater than 1 second (2, 4, 8, 16, 32, 60).
In practice, I think this means I'm getting less accurate data on p99 miss latency: I'm seeing roughly a 300-400ms difference compared to before. Naturally, how much this matters will vary from user to user depending on their response-time distribution.
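To make the accuracy point concrete, here's a small self-contained sketch of how `histogram_quantile`-style linear interpolation behaves under the two layouts. The bucket slices and the synthetic latency distribution are my assumptions (I'm guessing the sub-second bounds match between releases), not measurements from the exporter; the point is only that a wider bucket above 1 second widens the interpolation error on the same data.

```go
package main

import (
	"fmt"
	"sort"
)

// Assumed bucket layouts: only the bounds above 1s are taken from the
// release notes; the sub-second bounds are a guess that both match.
var (
	v5Buckets = []float64{0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 60}
	v6Buckets = []float64{0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10}
)

// cumulative buckets the observations, returning cumulative counts per bound.
func cumulative(obs, bounds []float64) []float64 {
	cum := make([]float64, len(bounds))
	for _, v := range obs {
		for i, b := range bounds {
			if v <= b {
				cum[i]++
			}
		}
	}
	return cum
}

// quantile mimics histogram_quantile's linear interpolation within the
// bucket that contains the target rank (simplified: no +Inf handling).
func quantile(q float64, bounds, cum []float64) float64 {
	rank := q * cum[len(cum)-1]
	i := sort.SearchFloat64s(cum, rank) // first bucket whose cumulative count reaches the rank
	if i >= len(bounds) {
		return bounds[len(bounds)-1]
	}
	lower, prev := 0.0, 0.0
	if i > 0 {
		lower, prev = bounds[i-1], cum[i-1]
	}
	return lower + (bounds[i]-lower)*(rank-prev)/(cum[i]-prev)
}

func main() {
	// Synthetic miss durations: 98% between 50ms and 240ms, plus a 2% slow
	// tail between 1.2s and 1.865s, so the true p99 (~1.5s) lands above 1s.
	var obs []float64
	for i := 0; i < 980; i++ {
		obs = append(obs, 0.05+float64(i%20)*0.01)
	}
	for i := 0; i < 20; i++ {
		obs = append(obs, 1.2+float64(i)*0.035)
	}
	fmt.Printf("v5-style buckets estimate p99 = %.3fs\n", quantile(0.99, v5Buckets, cumulative(obs, v5Buckets)))
	fmt.Printf("v6-style buckets estimate p99 = %.3fs\n", quantile(0.99, v6Buckets, cumulative(obs, v6Buckets)))
}
```

On this made-up data the coarser layout shifts the estimated p99 from 1.500s to 1.750s, the same order of magnitude as the shift I'm seeing.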
If it's desirable, I'm happy to submit a PR to either switch the bucket values back to their previous configuration, or to add a command-line option (e.g. `-miss-duration-buckets 0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2,4,8,10`) to allow the bucket configuration to be specified at runtime.
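For the second option, here's a minimal sketch of what I have in mind, assuming the standard library `flag` package and client_golang; the default shown restores the v5-style layout, and the metric namespace/name are illustrative, not necessarily the exporter's actual ones.

```go
package main

import (
	"flag"
	"fmt"
	"math"
	"os"
	"strconv"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
)

// parseBuckets turns "0.005,0.01,..." into the strictly increasing slice
// that client_golang requires (it panics on unsorted buckets).
func parseBuckets(s string) ([]float64, error) {
	parts := strings.Split(s, ",")
	buckets := make([]float64, 0, len(parts))
	prev := math.Inf(-1)
	for _, p := range parts {
		v, err := strconv.ParseFloat(strings.TrimSpace(p), 64)
		if err != nil {
			return nil, fmt.Errorf("invalid bucket %q: %w", p, err)
		}
		if v <= prev {
			return nil, fmt.Errorf("buckets must be strictly increasing, got %v after %v", v, prev)
		}
		buckets = append(buckets, v)
		prev = v
	}
	return buckets, nil
}

func main() {
	bucketsFlag := flag.String("miss-duration-buckets",
		"0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2,4,8,16,32,60",
		"comma-separated MissDurationSeconds bucket upper bounds, in seconds")
	flag.Parse()

	buckets, err := parseBuckets(*bucketsFlag)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	missDuration := prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: "fastly", // illustrative only
		Name:      "miss_duration_seconds",
		Help:      "Time spent serving cache misses.",
		Buckets:   buckets,
	})
	_ = missDuration // would be wired into the exporter's registry in a real PR
}
```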
Fastly's API returns a significant number of buckets:
> The miss_histogram object is a histogram. Each key is the upper bound of a span of 10 milliseconds, and the values are the number of requests to origin during that 10ms period. Any origin request that takes more than 60 seconds to return will be in the 60000 bucket.
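To show what that shape looks like to a consumer, here's a small decoding sketch; the sample payload values are invented, but the key format (millisecond upper bounds as JSON object keys) follows the docs quoted above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strconv"
)

// Invented sample shaped like Fastly's miss_histogram: each key is the
// upper bound (in milliseconds) of a span, each value a request count.
const sample = `{"10": 42, "20": 17, "1500": 3, "60000": 1}`

func main() {
	var raw map[string]int
	if err := json.Unmarshal([]byte(sample), &raw); err != nil {
		panic(err)
	}

	type span struct {
		upperMs int
		count   int
	}
	spans := make([]span, 0, len(raw))
	for k, v := range raw {
		ms, err := strconv.Atoi(k)
		if err != nil {
			continue // ignore keys that aren't millisecond bounds
		}
		spans = append(spans, span{ms, v})
	}
	sort.Slice(spans, func(i, j int) bool { return spans[i].upperMs < spans[j].upperMs })

	for _, s := range spans {
		// Dividing by 1000 converts each bound to the seconds scale the
		// exporter's Prometheus histogram uses.
		fmt.Printf("<= %6.3fs: %d origin requests\n", float64(s.upperMs)/1000, s.count)
	}
}
```

If Fastly's buckets really are that fine-grained, the exporter already receives enough raw data to support finer upper bounds above 1 second as well.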
From my limited querying of the API, the buckets Fastly returns seem to follow this pattern: