beberlei opened 2 years ago
Wouldn't this be a bit weird for comparison analysis? With constant wait times and constant concurrency, higher response times lead to fewer requests per interval. By changing wait times dynamically, you introduce another variable that can't be controlled as easily as a constant value.

Are you targeting a different scenario (i.e. max requests per interval as an upper bound of an installation) than I'm thinking of? Would an even lower wait time perhaps be better suited for keeping all workers busy in that case? Unfortunately, that would remove a bit of "real-world-likeness" from the measurements, as users don't usually browse a shop as fast as possible. I'm not sure which tradeoff is better 🙂
@kleinmann I guess the problem is with looking at the "output" variables orders/hour and pageviews/hour. If the response time is slower (1 second vs. 2 seconds), then both output variables get much lower. But since there are already wait times in the Locust scenarios, both cases could still leave the store not at its limit. So depending on the store's response time, the same number of "concurrent threads" produces a different orders/pageviews per hour.
For future reference: the problem I am describing here is known as "coordinated omission". It is explained in this talk https://www.youtube.com/watch?v=lJ8ydIuPFeU and summarized in this blog post http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi
Let's assume a worker `n` with a wait time of 10 seconds, always running the same user story, makes about 1000 requests over the load-test duration at 1-second response times. However, when the response time increases to 2 seconds, then since the worker just keeps waiting and does not "catch up", we make fewer requests per minute.
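To put numbers on this (a simplified model, assuming one request per iteration): at a 1-second response time each iteration takes 10 s + 1 s = 11 s, so a worker completes 3600 / 11 ≈ 327 requests per hour; at a 2-second response time it takes 12 s, or 3600 / 12 = 300 requests per hour, roughly 8% fewer, even though the worker spends most of its time idle either way.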
We should reduce the constant wait from 10 seconds to 5 and make the remaining `time.Sleep` dynamic:
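Something along these lines, as a minimal sketch. The 11-second budget, the `minWait` constant, and the `runUserStory` callback are assumptions for illustration, not the project's actual API:

```go
package main

import (
	"fmt"
	"time"
)

const (
	targetIteration = 11 * time.Second // constant time budget per scenario iteration (assumed)
	minWait         = 5 * time.Second  // the reduced constant wait, always applied (assumed)
)

// iterate runs one user story and then pads the iteration to the constant
// budget, so throughput no longer depends on the store's response time.
func iterate(runUserStory func()) {
	start := time.Now()
	runUserStory() // all requests of one user story (response time varies)

	time.Sleep(minWait) // constant portion of the wait

	// Dynamic portion: sleep whatever is left of the budget. As long as the
	// story finished within targetIteration-minWait (here 6 s), every
	// iteration takes exactly targetIteration and requests/minute stay
	// constant. Any slower, and there is nothing left to sleep, so the
	// pressure genuinely drops.
	if remaining := targetIteration - time.Since(start); remaining > 0 {
		time.Sleep(remaining)
	}
}

func main() {
	// Stand-in user story: pretend the requests took 2 seconds in total.
	story := func() { time.Sleep(2 * time.Second) }

	start := time.Now()
	iterate(story)
	fmt.Printf("iteration took %s\n", time.Since(start).Round(time.Second)) // ~11s
}
```

With these numbers, any user story that finishes within 6 seconds (the 11 s budget minus the 5 s constant wait) results in exactly one iteration per 11 seconds, no matter how fast the store answers.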
This way, unless the response times exceed the acceptable bracket, the pressure is always the same, i.e. the same number of requests is made with the same user concurrency.
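In load-testing terms this swaps a constant think time for constant pacing: each virtual user issues iterations at a fixed rate, so a slower store shows up as higher response times (and, past the bracket, missed pacing deadlines) instead of silently lowering the orders/pageviews per hour we want to compare.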