
Even with min_wait and max_wait == 0, I cannot break 100 requests per second. Why is that? #1030

Closed (didip closed this issue 5 years ago)

didip commented 5 years ago

Description of issue

I have one large box dedicated to load testing, with all of its file descriptors and memory available, but Locust never exceeds 100 RPS.

Expected behavior

I want to push Locust to 10,000 RPS.

Actual behavior

I can only do 100 rps. Why?

Environment settings

Steps to reproduce (for bug reports)

# locustfile.py
import json

from locust import HttpLocust, TaskSet, task


class MainTaskSet(TaskSet):
    def bad_json_message(self, response):
        # assumed helper; its body was not shown in the original report
        return "unexpected JSON body: %s" % response.text

    @task
    def get_root(self):
        with self.client.get('/', catch_response=True, verify=False) as response:
            try:
                body = json.loads(response.content)
                if len(body) <= 0:
                    response.failure(self.bad_json_message(response))
            except Exception:
                response.failure(self.bad_json_message(response))


class LocustTests(HttpLocust):
    task_set = MainTaskSet
    min_wait = 0  # zero wait time between tasks...
    max_wait = 0  # ...so each user loops as fast as it can
# bash
locust -f locustfile.py --no-web --host=http://remote.example.com -c 10000 -r 10000 --run-time 10m

cgoldberg commented 5 years ago

> I can only do 100 rps. Why?

Are you monitoring your environment? Where's the bottleneck?

reedstrm commented 5 years ago

Using other load-test tools, I've hit the limit of net.core.somaxconn:

$ sysctl net.core.somaxconn
net.core.somaxconn = 128

I kick that up to 1024 on both the server and the load-test machine.
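
For reference, a minimal sketch of raising that limit, assuming root access and a distro that reads /etc/sysctl.d at boot (the value 1024 simply mirrors the comment above):

# bash
# raise the TCP listen-backlog cap for the running system
sudo sysctl -w net.core.somaxconn=1024

# persist it across reboots (assumption: a systemd-style /etc/sysctl.d layout)
echo 'net.core.somaxconn = 1024' | sudo tee /etc/sysctl.d/90-somaxconn.conf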

aldenpeterson-wf commented 5 years ago

This definitely sounds like a server bottleneck to me.

didip commented 5 years ago

I don't think the server is the bottleneck, because against the same server I can reach 3000 RPS using github.com/tsenart/vegeta, and 5000 is not a big deal either.

yorek commented 5 years ago

I had a similar problem here: https://github.com/locustio/locust/issues/1015 ... the most meaningful answer is: "remember, locust only runs in a single process, so it won't make use of multiple cpu cores unless you use multiple slaves."
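
To illustrate that suggestion: a single box can run one master process plus one slave process per CPU core, using Locust's distributed-mode flags from this era. A minimal sketch (the core count of 4 and the localhost master address are assumptions):

# bash
# terminal 1 - master: coordinates the test and aggregates stats;
# --expect-slaves makes a --no-web master wait until the slaves connect
locust -f locustfile.py --master --no-web --expect-slaves=4 \
    --host=http://remote.example.com -c 10000 -r 10000 --run-time 10m

# terminals 2-5 - one slave per CPU core; these generate the actual load
locust -f locustfile.py --slave --master-host=127.0.0.1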

didip commented 5 years ago

I like the overall idea of Locust: slick UI and master/slave architecture. However...

IMO, the whole hatch rate idea is too complicated and hard to understand. See def hatch() https://github.com/locustio/locust/blob/master/locust/runners.py#L102

Why not simplify and spawn everything all at once?

cgoldberg commented 5 years ago

> IMO, the whole hatch rate idea is too complicated and hard to understand

That's subjective... it's pretty straightforward to me.

> Why not simplify and spawn everything all at once?

Because that's not always the workload we want to simulate: we need the ability to have virtual users arrive gradually rather than all at once. However, you are welcome to set your hatch rate to a level that essentially does what you're asking for. I'm not removing the hatch rate feature unless it's replaced by a better way to model increasing workloads.
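
Concretely, setting the hatch rate (-r, users spawned per second) equal to the user count (-c) makes the ramp-up effectively instantaneous, which is what the original report already does:

# bash
# -r 10000 with -c 10000 hatches all users within roughly one second
locust -f locustfile.py --no-web --host=http://remote.example.com -c 10000 -r 10000 --run-time 10m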

There's nothing actionable in this issue... closing.