Closed Freyert closed 6 years ago
are you sure the host under test is listening and reachable via HTTP from the slave nodes? (i.e. can you curl it?)
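Besides curl, a quick TCP check from each slave node can confirm reachability. This is a sketch: the host and port values are placeholders, and 5557 is Locust's default master communication port in this era of releases.

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Run from a slave node: can it reach the master's comms port
# and the host under test? (placeholder addresses)
# is_port_open("172.31.21.173", 5557)   # Locust master port
# is_port_open("target-host", 80)       # host under test, via HTTP
```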
@cgoldberg thanks for encouraging me to double check! It turns out that on AWS, VMs in the same security group do not have access to each other by default. Fortunately, you can specify the security group itself as a traffic source, and access will be adjusted accordingly. Hopefully this will be useful to others in hindsight! Thanks!
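For anyone hitting the same wall, a rule like the following allows instances in a security group to reach each other (a sketch, not a definitive fix: the group ID is a placeholder, and the port range shown assumes Locust's default master communication ports 5557-5558):

```shell
# Allow members of sg-0123456789abcdef0 (placeholder) to reach each other
# on the Locust master ports by naming the group itself as the source.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5557-5558 \
  --source-group sg-0123456789abcdef0
```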
:tada:
Description of issue / feature request
I have a single master and 2 workers clustered in AWS. I can see from the UI that the master knows there are two workers, but when I hit the button to swarm, it never moves on to the next page.
It does start making requests for errors and response stats, but these all return empty.
I even get this log:
[2018-04-21 15:45:18,969] ip-172-31-21-173/INFO/locust.runners: Sending hatch jobs to 2 ready clients
but nothing else.
Expected behavior
I expect locusts to hatch on the worker nodes and for the UI to start displaying metrics.
Environment settings (for bug reports)
Steps to reproduce (for bug reports)
locust -f locust_static_rpc.py --master
locust -f locust_static_rpc.py --slave --master-host=AWS_PRIVATE_IP