**Open** · ChristophDanielSchulze2019 opened this issue 3 years ago
hi @ChristophDanielSchulze2019, that's a great idea! The new metrics engine in v2.0 can support this too.
Does the new engine support this? The documentation does not seem to mention that feature explicitly, and this ticket is still open. :)
hi @ChristophDanielSchulze2019 👋 Yep, the new metrics engine has support for that, but that functionality is currently not exposed to the user. You can see the code that computes percentiles here: https://github.com/artilleryio/artillery/blob/master/core/lib/ssms.js#L624
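For context, a percentile over recorded latency samples can be computed with a simple nearest-rank approach. This is an illustrative sketch only, not necessarily how the linked `ssms.js` code does it internally:

```javascript
// Nearest-rank percentile over an array of latency samples (ms).
// Illustrative sketch only; Artillery's actual implementation lives in ssms.js.
function percentile(samples, p) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank: smallest value such that at least p% of samples are <= it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

console.log(percentile([120, 250, 90, 400, 310, 180, 220, 275, 130, 95], 90)); // → 310
```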
Regarding your original use case:
> success conditions for latency checks include only two possible percentiles: 95 and 99. Our project's acceptance criteria are based on the 90th percentile, which currently cannot be checked by Artillery.
This can be done today out-of-the-box with the updated `ensure` plugin. `http.response_time.p75` / `p90` / `p95` / `p99` / `p999` may now be used. See an example here: https://www.artillery.io/docs/guides/guides/test-script-reference#threshold-checks
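For instance, a check against the 90th percentile might look like this (a sketch based on the linked docs; exact plugin configuration may vary by Artillery version):

```yaml
config:
  target: "https://example.com"
  plugins:
    ensure: {}
  ensure:
    thresholds:
      # Fail the run if the 90th-percentile response time exceeds 250 ms
      - http.response_time.p90: 250
```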
Hey @hassy, that's great, thanks! In the examples, I only saw 95 and 99, hence my question. :)
I tried using `p75` for the 75th LCP percentile, but the result was different from what I expected: for LCP the 95th percentile is 2416.8, so I'd expect the 75th percentile to be smaller than 2416.8. But the `ensure` check claimed that the 75th percentile was not less than 2500, i.e. that it was 2500 or higher.
Here's what I have in the YML file:
Any help is appreciated.
This feature request is about allowing more flexibility in which percentiles can be used as latency success conditions.
As per the documentation, success conditions for latency checks include only two possible percentiles: 95 and 99. Our project's acceptance criteria are based on the 90th percentile, which currently cannot be checked by Artillery.
My proposal is not to simply hard-code more percentiles, because it would be unclear when to stop: we want the 90th percentile, the next project wants the 97th, and before you know it Artillery tracks all possible percentiles.
Instead, we propose scanning the test script for percentiles that are actually referenced and computing exactly those.
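The idea could be sketched roughly like this (a hypothetical helper, not actual Artillery code): walk the `ensure` thresholds in the parsed script and collect every `pNN` suffix that is referenced, so the engine only needs to compute those percentiles.

```javascript
// Hypothetical sketch of the proposal: collect the percentiles actually
// referenced in a test script's ensure thresholds, so only those need
// to be computed. Not actual Artillery code.
function referencedPercentiles(script) {
  const found = new Set();
  const thresholds = script?.config?.ensure?.thresholds || [];
  for (const threshold of thresholds) {
    for (const metricName of Object.keys(threshold)) {
      // Match suffixes like .p75, .p90, .p999 and record the suffix
      const m = metricName.match(/\.(p\d+)$/);
      if (m) found.add(m[1]);
    }
  }
  return [...found].sort();
}

const script = {
  config: {
    ensure: {
      thresholds: [
        { 'http.response_time.p90': 250 },
        { 'http.response_time.p99': 500 }
      ]
    }
  }
};
console.log(referencedPercentiles(script)); // → [ 'p90', 'p99' ]
```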