Open zezic opened 3 years ago
I'm definitely open to improvements here. Early versions of Goose did not do any of this rounding; however, this led to massive data structures and horrible performance when displaying running and final metrics. I'm happy to review a PR that works to make this more flexible.
Goose is currently tracking the response time of each request in milliseconds: https://github.com/tag1consulting/goose/blob/main/src/metrics.rs#L305 https://github.com/tag1consulting/goose/blob/main/src/metrics.rs#L352
It also logs when the request was made with millisecond granularity: https://github.com/tag1consulting/goose/blob/main/src/metrics.rs#L295 https://github.com/tag1consulting/goose/blob/main/src/goose.rs#L1606
Perhaps GooseRequestMetric and GooseRequestMetricTimingData could be turned into generics with a default implementation using milliseconds, allowing you to provide your own microsecond implementations. Some thought would need to go into how this would work.
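One possible shape for such generics, sketched with hypothetical names (the TimeUnit trait and the simplified TimingData struct below do not exist in Goose; the real GooseRequestMetricTimingData carries more fields): the unit of storage becomes a type parameter with a millisecond default.

```rust
use std::marker::PhantomData;
use std::time::Duration;

/// Hypothetical trait abstracting the unit used to store response times.
/// Goose could ship a millisecond default; users could supply microseconds.
trait TimeUnit {
    fn from_duration(d: Duration) -> u64;
}

struct Millis;
impl TimeUnit for Millis {
    fn from_duration(d: Duration) -> u64 {
        d.as_millis() as u64
    }
}

struct Micros;
impl TimeUnit for Micros {
    fn from_duration(d: Duration) -> u64 {
        d.as_micros() as u64
    }
}

/// Greatly simplified stand-in for GooseRequestMetricTimingData,
/// generic over the storage unit.
struct TimingData<U: TimeUnit> {
    times: Vec<u64>,
    _unit: PhantomData<U>,
}

impl<U: TimeUnit> TimingData<U> {
    fn new() -> Self {
        TimingData { times: Vec::new(), _unit: PhantomData }
    }

    fn record(&mut self, elapsed: Duration) {
        self.times.push(U::from_duration(elapsed));
    }
}

fn main() {
    let elapsed = Duration::from_micros(2200);

    let mut ms: TimingData<Millis> = TimingData::new();
    ms.record(elapsed);

    let mut us: TimingData<Micros> = TimingData::new();
    us.record(elapsed);

    // Same request, two resolutions: 2 ms vs 2200 µs.
    println!("{} {}", ms.times[0], us.times[0]);
}
```

The open question is how far the type parameter would have to propagate through the metrics plumbing, which is where most of the design thought would go.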
I'm looking into using a compressed histogram format like https://docs.rs/tdigest/latest/tdigest/index.html to track large numbers of requests accurately without using much space. Would something like this be accepted?
I could look into making a generic aggregate type to allow for multiple implementations.
Yes, if performance isn't negatively impacted this is definitely interesting. Making it generic to allow multiple implementations isn't necessary but would certainly be appreciated.
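To illustrate the space/accuracy trade-off being discussed (this is not the tdigest crate's API, just a dependency-free sketch of the same idea): an aggregate that stores counts per log-scale bucket keeps roughly constant relative precision while memory stays bounded regardless of request count. A t-digest offers better tail accuracy, but the principle is the same.

```rust
use std::collections::BTreeMap;

/// Illustration only: a log-bucketed histogram with ~two significant
/// digits of precision. Memory is bounded by the number of distinct
/// buckets, not the number of requests recorded.
struct LogHistogram {
    /// bucket lower bound (µs) -> count
    buckets: BTreeMap<u64, u64>,
}

impl LogHistogram {
    fn new() -> Self {
        LogHistogram { buckets: BTreeMap::new() }
    }

    /// Round `micros` down to two significant digits, e.g. 2237 -> 2200.
    /// Values below 100 are kept exact.
    fn bucket(micros: u64) -> u64 {
        if micros < 100 {
            return micros;
        }
        let mut v = micros;
        let mut scale = 1;
        while v >= 100 {
            v /= 10;
            scale *= 10;
        }
        v * scale
    }

    fn record(&mut self, micros: u64) {
        *self.buckets.entry(Self::bucket(micros)).or_insert(0) += 1;
    }
}

fn main() {
    let mut h = LogHistogram::new();
    for t in [2237, 2251, 2498, 2510] {
        h.record(t);
    }
    // 2237 and 2251 share the 2200µs bucket; 2498 -> 2400; 2510 -> 2500.
    for (bucket, count) in &h.buckets {
        println!("{}µs: {}", bucket, count);
    }
}
```

The relevant property for this issue: at ~2000µs the bucket width is 100µs, so a shift from 2200µs to 2500µs remains visible, unlike with whole-millisecond rounding.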
Hi! At our company we have started using Goose to run various performance tests, one of which is a latency test. We need to monitor these metrics after changes to our code to detect any performance degradation. But here is the problem: the latency of some endpoints of our service is around 2000 microseconds. We currently use a custom loadtest function to report latency in microseconds instead of milliseconds, but this procedure in Goose rounds values to the nearest 1000 once they reach 1000: https://github.com/tag1consulting/goose/blob/dd8144646ae682128a9fe46decb3be6f27eb0717/src/metrics.rs#L539-L553
That makes it impossible to detect the difference between, let's say, 2200 microseconds and 2500 microseconds.
Would it be possible to add an option to Goose so it can measure latency in microseconds, with more precise (configurable?) rounding?
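A minimal illustration of the problem, independent of Goose's exact rounding procedure: once a response time is stored as whole milliseconds, the sub-millisecond signal is already gone.

```rust
use std::time::Duration;

fn main() {
    // Two responses that differ by ~14% at microsecond resolution...
    let before = Duration::from_micros(2200);
    let after = Duration::from_micros(2500);

    // ...become indistinguishable when stored as whole milliseconds
    // (Duration::as_millis truncates the sub-millisecond part).
    println!("ms: {} vs {}", before.as_millis(), after.as_millis());

    // At microsecond resolution the regression is visible.
    println!("µs: {} vs {}", before.as_micros(), after.as_micros());
}
```

Any additional rounding applied on top of the millisecond values (as in the linked code) only coarsens this further.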