Open · mukutbhattacharjee opened this issue 6 years ago
I don't think this is related to Locust; it's more likely related to AWS IoT and some restriction on the number of connections from a single client. In HTTP tests I've been able to create thousands of users in a single load generator, the only limits being the EC2 instance size and the number of open files (defined in /etc/security/limits.conf), but it seems you already tried those options and your EC2 instance size should be more than enough.
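In case it helps to double-check, here is a minimal sketch (not part of the sample itself) that prints, and optionally raises, the open-file limit actually in effect for the Locust process, using Python's standard resource module; the 10000 target is just an example value.

```python
# Minimal sketch: verify the open-file limit applied to the Locust process.
# Standard library only; the 10000 target below is an arbitrary example.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d, hard=%d" % (soft, hard))

# Raise the soft limit up to the hard limit if it is lower than the test needs.
target = min(10000, hard)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```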
I experienced a similar problem and was able to overcome it by launching a distributed Locust test (multiple slave load generators). To save cost, I think each slave could run on a t2.medium. Here is a link to the Locust documentation on how to run distributed tests: https://docs.locust.io/en/latest/running-locust-distributed.html
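If it helps, here is a rough sketch of scripting that launch from Python rather than starting each process by hand; the --master/--slave/--master-host flags match the Locust versions current when this issue was opened (newer releases renamed slave to worker), and the slave processes can just as well run on separate EC2 instances by pointing --master-host at the master's address.

```python
# Rough sketch: start one master and several slave processes so that no single
# slave has to hatch more than a few hundred users.  Flag names match older
# Locust releases; newer ones use --worker instead of --slave.
import subprocess

NUM_SLAVES = 3  # example value; size it so users / NUM_SLAVES stays well below ~330

procs = [subprocess.Popen(["locust", "-f", "locustfile.py", "--master"])]
for _ in range(NUM_SLAVES):
    procs.append(subprocess.Popen(
        ["locust", "-f", "locustfile.py", "--slave", "--master-host", "127.0.0.1"]))

for p in procs:
    p.wait()
```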
Unfortunately, there's no limit on the AWS IoT Service Limits page that clearly explains this behaviour: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
Taking a closer look at this issue, it is more likely related to this Paho MQTT issue (https://github.com/eclipse/paho.mqtt.python/issues/238) than to an AWS IoT service limit.
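For context, the pattern being exercised looks roughly like the sketch below (a simplified illustration, not the exact sample from this repo): each simulated user owns its own Paho client and the background network thread started by loop_start(), which is where a per-process limit in the Paho/Locust stack would show up as connections capping out below the number of hatched users. The broker host and topic are placeholders, and the TLS/certificate setup an AWS IoT endpoint needs (tls_set() and port 8883) is omitted.

```python
# Simplified illustration of the test pattern: one Paho client (and one
# loop_start() network thread) per simulated Locust user.
# Uses the Locust API of the versions current at the time of this issue.
import paho.mqtt.client as mqtt
from locust import Locust, TaskSet, task

BROKER_HOST = "broker.example.com"  # placeholder endpoint


class MqttTasks(TaskSet):
    def on_start(self):
        self.client = mqtt.Client()
        # An AWS IoT endpoint would also need tls_set(...) and port 8883.
        self.client.connect(BROKER_HOST, 1883)
        self.client.loop_start()

    @task
    def publish(self):
        self.client.publish("locust/test", payload="hello", qos=0)


class MqttLocust(Locust):
    task_set = MqttTasks
    min_wait = 1000
    max_wait = 2000
```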
As mentioned in the previous comment, one workaround is to launch distributed tests, where a single load generator doesn't hatch more than ~340 users (which could likely be done with a t2.micro per slave; probably no need to launch a t2.medium as suggested in the previous comment). However, this approach becomes impractical for tests with a large number of users (e.g. >10K or >100K).
I'll keep this issue open and use it to keep track of a solution that simplifies the process of launching many load generators, likely through the use of containers or a similar solution.
I'm hitting the same issue with MQTT/Paho after adopting your sample, this time against Azure IoT, so it's definitely not AWS IoT causing it. It shows only 337 connected even though Locust says 500 were spawned. Any update on a solution to get around this?
Adding support for automating the launch of Docker containers in ECS is in our backlog, and we think it will solve this problem, but we don't have it in place yet.
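In the meantime, for anyone who wants to script something similar themselves, here is a rough boto3 sketch of launching extra slave containers as ECS tasks; the cluster name, task definition, and subnet ID are hypothetical, and the task definition (a Locust slave image pointed at the master) and networking setup are not shown.

```python
# Rough sketch (not the planned feature): launch N Locust slave containers as
# Fargate tasks via boto3.  "locust-cluster", "locust-slave-task" and the subnet
# ID are hypothetical; run_task() accepts at most 10 tasks per call.
import boto3

ecs = boto3.client("ecs")


def launch_slaves(count):
    remaining = count
    while remaining > 0:
        batch = min(remaining, 10)
        ecs.run_task(
            cluster="locust-cluster",
            taskDefinition="locust-slave-task",
            count=batch,
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],
                    "assignPublicIp": "ENABLED",
                }
            },
        )
        remaining -= batch


launch_slaves(30)  # e.g. 30 slaves at ~330 users each for a ~10K-user test
```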
Hello, I have been working with the Locust MQTT script for the last couple of days. I am not able to connect more than 334 MQTT locusts per VM in any way. I am using the following configuration:
Broker: AWS IoT
VM: AWS EC2 t2.xlarge
The VM has everything maxed out, including the number of open file descriptors, virtual memory, etc.
Even if I start with 400, 500, or any number of locusts greater than 334, they are hatched successfully according to the logs, but the number of CONNECT requests never exceeds 334.
Any idea about this issue? I will provide logs and other configuration details if required.