locustio / locust

Slave count doesn't get updated in the UI if no more slaves are alive #62

Closed. bogdangherca closed this issue 5 years ago.

bogdangherca commented 11 years ago

Hi,

I was running some simple tests with locust (which is so cool btw) and I noticed that if you end up with no slaves connected, the UI does not reflect this change. The slave count in the UI sticks to 1 in this case. Also, it would be nice to get a warning message in the web UI if you start swarming with no slaves connected. Currently, you only get this warning on the command line.

I could provide a quick fix for this if you'd like.

Thanks!

heyman commented 11 years ago

The slave count sounds like a bug and should be fixed. Thanks for reporting, and a fix would definitely be appreciated :).

I guess some kind of warning in the web UI wouldn't be bad either, but please do two separate pull requests if you give them a shot.

nmccready commented 11 years ago

I second the issue. Are the slaves actually there and the reported number invalid, or is the slave number correct?

heyman commented 11 years ago

Ok, thanks for reporting! Hopefully I'll get time to go over some waiting pull requests and issues, early next week.

nmccready commented 11 years ago

It might not be entirely inaccurate. I am trying to spawn 20 slaves per machine; running ps aux, I never see more than 10-12 locust instances.

nmccready commented 11 years ago

OK, this bug did come up again, and I did verify that it reports inaccurately at times. In this instance, for example, there were supposed to be 14 slaves but it showed 12.

To help count the locust processes on each machine, you can use the command below:

ps aux | grep py | grep -v grep | awk '{print $12}' | wc -l
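
A shorter alternative, assuming pgrep is available and the slave processes have "locust" in their command line:

# count processes whose full command line matches "locust"
pgrep -f locust | wc -l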

bogdangherca commented 11 years ago

@nem: Starting slaves works as it should for me. How exactly are you trying to spawn them?

nmccready commented 11 years ago

@bogdangherca: Never mind, slaves are working fine. I was using a script written for an older version of locust. I am not entirely sure whether the older version worked this way or not... The old script targeting 0.5.1 would start the master last. I reversed the script to start the master first and that fixed the issue. This is specified here: https://github.com/locustio/locust/blob/master/docs/quickstart.rst

BTW, that URL should be on the documentation site somewhere, or its text should be in the documentation for the latest version. I've noticed many documentation gems and "gotchas" that are on GitHub but not on the doc site.

bogdangherca commented 11 years ago

@nem: Indeed, starting the master last was your problem. You should start the master first in order for the slaves to connect to it. Anyway, glad it worked fine for you.
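
For anyone following along, the run order looks roughly like this (just a sketch using the old --master/--slave flags from that era; the locustfile name and master IP are placeholders):

# 1. on the master machine, start the master first
locust -f locustfile.py --master
# 2. then start each slave, pointing it at the master
locust -f locustfile.py --slave --master-host=192.168.0.10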

nmccready commented 11 years ago

Never mind, it looks like it was a Chrome/Gist problem; it worked fine in Safari.

Here is the gist: https://gist.github.com/nmccready/5547455. Anyway, my issue is that I am not able to start a user count beyond the slave count. At least, the reported number of users is never larger than the number of slaves.

So the gist is there to help determine if something is wrong with my setup.

nmccready commented 11 years ago

FYI, this has started working, i.e. user count > slave count.

vorozhko commented 7 years ago

Hello guys, I have a similar issue. The master and web UI don't reflect the actual slave count. We run locust in Docker containers. My setup includes 200 slaves, so 200 Docker containers. The master registered them all, but when the number of containers was scaled down to 10, the master and web UI still showed 200 slaves connected.

Our Docker image uses the latest locustio package from pip. Any advice is appreciated.

Thanks!

jtpio commented 7 years ago

@vorozhko: I ran into the same issue with a similar Docker-based setup.

The problem is related to how the containers are stopped. For a locust slave to be properly terminated and the slave count correctly updated, it needs to send the quit message to the master when it receives SIGTERM, by calling the shutdown function.

In my case, the entrypoint for the container is a shell script which starts locust as a child process. This means that the shell script is assigned PID 1 and the locust process a different PID. When docker stop is called, it sends SIGTERM to the process with PID 1. If that signal is not handled, Docker waits 10s and then kills the container (and locust can't shut down gracefully).

The locust start-up I'm using is mostly inspired by https://github.com/peter-evans/locust-docker. With that setup, the easy fix was to prepend exec to replace the shell with the python program, so locust gets PID 1 and can handle the SIGTERM signal:

exec $LOCUST_PATH $LOCUST_FLAGS
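
For context, the whole entrypoint then looks roughly like this (just a sketch; LOCUST_PATH, LOCUST_FLAGS and the entrypoint.sh name follow the linked locust-docker setup and are assumptions here, not part of locust itself):

#!/bin/sh
# entrypoint.sh: without exec, this shell keeps PID 1 and never forwards SIGTERM,
# so docker stop times out after 10s and SIGKILLs the container.
# exec replaces the shell with the locust process, which then receives SIGTERM
# directly and can send its quit message to the master before exiting.
exec $LOCUST_PATH $LOCUST_FLAGS

With this in place, docker stop terminates the slave cleanly and the master's slave count drops as expected.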

Another way to fix this problem would be to handle the case when a slave is disconnected in the locust code itself (socket closed or similar).