Closed ananth-racherla closed 5 years ago
Hi @ananth-racherla
I can only reassure you that `..._sum` is indeed in seconds. You can artificially make just one call to some endpoint and see it for yourself.
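For instance, after a single request that took roughly 5 ms, the raw `/metrics` exposition would show the histogram sum as a fraction of a second. A sketch of what that might look like (metric name matches express-prom-bundle's default; labels and numbers here are illustrative):

```
http_request_duration_seconds_sum{status_code="200",method="GET",path="/health"} 0.005
http_request_duration_seconds_count{status_code="200",method="GET",path="/health"} 1
```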
Also, I found the `increase` function to be more reliable than `rate` when aggregating within a specific time frame. Could you try it?
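An `increase`-based average over a fixed window might look like the sketch below. The metric name is express-prom-bundle's default, and the `path` label value is a hypothetical example:

```promql
# Average request duration in seconds over the last 5 minutes for one path.
# Dividing increase(_sum) by increase(_count) cancels the window length,
# so the result is plain seconds per request.
sum(increase(http_request_duration_seconds_sum{path="/api/foo"}[5m]))
/
sum(increase(http_request_duration_seconds_count{path="/api/foo"}[5m]))
```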
I'm afraid I can't help you beyond that. Closing the issue due to the lack of activity.
Will do, thanks for the tip
Just facing a similar issue. I have express-prom-bundle
measuring all my incoming requests, and all of them seem to be in the low-milliseconds range. However, I have an endpoint for a simple healthcheck. All it does is send a 200 response, nothing else. It's being reported as taking 1-1.5 seconds, orders of magnitude higher than any other endpoint I have, including ones that should take longer to respond. Indeed, if I divide that value by 1000, I get response times of 1-1.5 MILLIseconds, which is far more plausible.
So right now I've got incoming metrics, using the same middleware, reporting in different units. Not really sure how to tackle that.
Wondering if you could help me figure this out. I am using express-prom-bundle deployed in cluster mode.
I am trying to determine the average response time for a particular path. This is the query I have:
I plot this as a line chart in Grafana with the y-axis unit set to seconds. However, this results in absurdly large response times. The times appear more realistic if the unit is changed to milliseconds. If the values are aggregated in second buckets, why does the choice of milliseconds as the unit make more sense? Please delete/close the issue if this is not an appropriate forum for this.
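For reference, a common form of such an average-latency query, assuming express-prom-bundle's default histogram name `http_request_duration_seconds` and a `path` label (present when `includePath` is enabled), would be:

```promql
# _sum is in seconds and _count is a request counter; the per-second
# factor from rate() cancels in the division, so the result is seconds.
sum by (path) (rate(http_request_duration_seconds_sum[5m]))
/
sum by (path) (rate(http_request_duration_seconds_count[5m]))
```

If a query of this shape reads roughly 1000x too large with the Grafana unit set to seconds, that suggests the metric was actually recorded in milliseconds (e.g. custom millisecond buckets), not that the query itself is wrong.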