Closed stephen-harris closed 3 years ago
@stephen-harris It's a great idea! I will merge this ASAP after reviewing. Can you add some unit tests for this?
@myzhan I've added a couple of tests. The scale-out verifies that only new tasks are submitted (i.e. the existing ones are kept). The scale-in checks the appropriate tasks are removed, and no new tasks are created.
I had to expose `setTaskExecutor` as a protected method to inject the spy, but that seems reasonable.
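The testing approach described here can be sketched roughly as follows. This is a minimal illustration of the spy-injection idea, not Locust4j's actual code: `ScalableRunner`, `SpyExecutor`, and the lambda task are hypothetical stand-ins.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the runner: on a spawn message it submits
// only the tasks needed to reach the target, keeping existing ones running.
class ScalableRunner {
    private ExecutorService taskExecutor = Executors.newCachedThreadPool();
    private int running = 0;

    // Protected (rather than private) so a test can inject a spy executor.
    protected void setTaskExecutor(ExecutorService executor) {
        this.taskExecutor = executor;
    }

    void spawn(int target) {
        while (running < target) {        // scale-out: submit only the new tasks
            taskExecutor.submit(() -> {});
            running++;
        }
    }
}

// Spy that counts submissions so a test can verify scale-out behaviour.
class SpyExecutor extends ThreadPoolExecutor {
    int submitted = 0;

    SpyExecutor() {
        super(1, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    public Future<?> submit(Runnable task) {
        submitted++;
        return super.submit(task);
    }
}

public class ScaleOutTest {
    public static void main(String[] args) {
        ScalableRunner runner = new ScalableRunner();
        SpyExecutor spy = new SpyExecutor();
        runner.setTaskExecutor(spy);  // inject the spy via the protected setter

        runner.spawn(5);
        runner.spawn(8);              // scale-out: only 3 new submissions
        System.out.println(spy.submitted);  // 8 in total, not 5 + 8
        spy.shutdown();
    }
}
```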
I also added a `LocustTestHelper` class so I could set the runner of the `Locust` singleton. This is necessary as otherwise the `TestTask` throws exceptions and has to be re-submitted. It's a bit of a hack; the other option was to make `Locust.setRunner` public. That seemed worse, as `Locust.setRunner` would then be public only for the sake of tests.
I didn't do any assertions on the `futures` map because that's more of an implementation detail, and it would have involved exposing more of `Runner`'s internals.
FYI, I have confirmed that upstream Locust has removed the call to reset stats; we should remove it as well.
When I first encountered the issue, I assumed it was caused by stopping and recreating the threads. Fixing that did improve it, but the issue didn't go away completely until I removed the call to reset stats.
I've since tested with just that line removed (i.e. the threads are still all stopped and restarted on each spawn message), and that appears to be sufficient to fix the issue.
So unless there is a particular reason to try and keep the threads, the smaller and simpler fix here might be just to remove the call to reset stats. (Though it's conceivable a task with a slow `onStart()` call might still cause a drop in RPS.) What do you think?
I think we should keep the threads, to avoid lots of thread creation. If we are OK with this PR, I will merge it.
I've just tested with two tasks with 1:3 weights, and it correctly scaled up each task.
| Users (per worker) | Task (weight 3) threads | Task (weight 1) threads |
|---|---|---|
| 5 | 4 | 1 |
| 10 | 8 | 3 |
| 15 | 11 | 4 |
| 20 | 15 | 5 |
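For what it's worth, these counts are consistent with simple weight-proportional rounding: each task gets `Math.round(users * weight / totalWeight)` threads. That formula is my reconstruction from the numbers above, not a claim about Locust4j's actual allocation code.

```java
public class WeightRounding {
    // Assumed allocation rule: a task's thread count is the user count
    // times the task's share of the total weight, rounded to nearest.
    static long threads(int users, int weight, int totalWeight) {
        return Math.round((double) users * weight / totalWeight);
    }

    public static void main(String[] args) {
        int[] userCounts = {5, 10, 15, 20};
        for (int users : userCounts) {
            // weights 3 and 1, so total weight 4
            System.out.printf("%d users -> %d / %d%n",
                    users, threads(users, 3, 4), threads(users, 1, 4));
        }
        // Prints 4/1, 8/3, 11/4, 15/5 — matching the table above.
        // Note rounding can overshoot: 10 users yields 11 threads in total.
    }
}
```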
Obviously, due to rounding, the thread counts don't work out to exactly the 1:3 ratio, but that's unrelated to this issue. So I'm happy for this PR to be merged.
Thanks again! I have merged into master branch and a minor version will be published soon.
FYI, 2.0.1 has been released.
When running a custom load test shape, there would be a drop in the RPS whenever there was a change in the number of users.
Locust version: 2.0.0b3
Locust4j version: master
Steps to reproduce
Run a test with a custom load shape. E.g.
I noticed that all threads were being terminated and recreated on each spawn message, and that the stats were being reset. This PR keeps the existing threads and submits only the new tasks via `ThreadPoolExecutor.submit()`.
The expected image above was taken with this patch applied.