Hi!
There are a couple of features that at least solve parts of your problem:
Mark the response as failed based on response time (see the sketch after this list): https://docs.locust.io/en/stable/writing-a-locustfile.html#validating-responses
Set the exit code of the Locust process based on checking some metric: https://github.com/SvenskaSpel/locust-plugins?tab=readme-ov-file#command-line-options
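As an illustration of the first option, here is a minimal sketch using Locust's documented catch_response mechanism; the endpoint path and the 2000 ms threshold are just placeholders for the example discussed later in this issue:

```python
from locust import HttpUser, task, between

class ResourceUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def get_resource(self):
        # catch_response=True lets us decide success/failure ourselves
        with self.client.get("/some_example/resource?id=123", catch_response=True) as response:
            # Placeholder threshold: treat anything slower than 2000 ms as a failure
            if response.elapsed.total_seconds() > 2.0:
                response.failure("Response took longer than 2000 ms")
            elif response.status_code != 200:
                response.failure(f"Unexpected status code {response.status_code}")
            else:
                response.success()
```

For the second option, locust-plugins provides command-line checks (described in the linked README) that make the Locust process exit with a non-zero code when aggregate metrics exceed configured thresholds.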
I do like your idea of rules/thresholds that would mark the request as failed (or maybe a third state: "ok but with failed rule") if it can be implemented cleanly, so if you or someone else would like to make a PR, I would definitely consider it. Not sure it would be easy to do, though.
Thanks for the quick response. I'll try to take a look this week and maybe come up with a PoC.
In the meantime, should I mark this issue as closed, or keep it open for discussion?
If it can be done in a plugin, then that is nice, but I don't mind having this in core (if it can be done cleanly).
You can leave it open!
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 10 days.
This issue was closed because it has been stalled for 10 days with no activity. This does not necessarily mean that the issue is bad, but it most likely means that nobody is willing to take the time to fix it. If you have found Locust useful, then consider contributing a fix yourself!
@cyberw can you re-open this issue? I'm currently working on it! Thanks
This issue was closed because it has been stalled for 10 days with no activity. This does not necessarily mean that the issue is bad, but it most likely means that nobody is willing to take the time to fix it. If you have found Locust useful, then consider contributing a fix yourself!
Prerequisites
Description
I've worked pretty extensively with the Locust framework. Locust is a fantastic bare-bones (lightweight) framework. I really enjoy using it, and kudos to all the contributors. That said, it seems to me (unless I'm missing something) that it is very pass/fail oriented: it does not have an out-of-the-box way to define and apply custom failure rules. What counts as a failure differs greatly between use cases, so I believe a plain pass/fail result is insufficient.
As an example, suppose I have a test that hits an endpoint:
some_example/resource?id=123
There is no way to specify what my expectations for this endpoint are. If the endpoint responds in 2000 ms, I might very well consider that a critical failure. Currently, my options are as follows:
I suggest adding the ability to:
A basic approach would be to extend the request method to accept an optional rules argument and process the rules downstream. The actual rule evaluation would need to happen at the end of the test run, since some of the relevant attributes are aggregations over the whole run, such as averages. I found it difficult to understand how the output JSON is generated; ideally, this feature would simply enhance that output.
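To make the proposal concrete, here is a rough, purely hypothetical sketch. The rules keyword argument and rule names do not exist in Locust today and are illustrative only; the test_stop listener below, however, uses only existing Locust APIs (environment.stats and process_exit_code) to show how aggregate rules could be evaluated at the end of the run:

```python
from locust import HttpUser, task, events

# Hypothetical aggregate thresholds -- illustrative only
MAX_AVG_RESPONSE_TIME_MS = 500
MAX_FAIL_RATIO = 0.01

class ResourceUser(HttpUser):
    @task
    def get_resource(self):
        # Proposed (hypothetical) per-request rules, roughly:
        # self.client.get("/some_example/resource?id=123",
        #                 rules=[MaxResponseTime(2000)])  # not a real kwarg today
        self.client.get("/some_example/resource?id=123")

@events.test_stop.add_listener
def apply_aggregate_rules(environment, **kwargs):
    # Aggregations such as averages are only final when the run ends,
    # so rules over them have to be evaluated here.
    total = environment.stats.total
    if (total.avg_response_time > MAX_AVG_RESPONSE_TIME_MS
            or total.fail_ratio > MAX_FAIL_RATIO):
        # A non-zero exit code lets CI treat the run as failed
        environment.process_exit_code = 1
```

Whether per-request rules should produce a hard failure or the "ok but with failed rule" third state mentioned earlier would be part of the PR discussion.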