
Allow a fixed RPS rate #646

Closed: ghost closed this issue 4 years ago

ghost commented 7 years ago

Description of issue / feature request

Please allow specifying a fixed RPS rate.

While the current model is nice for modelling users, it is not very useful for modelling more complex modern web applications, which often exhibit an exact, known waiting behaviour rather than one governed by a less predictable distribution.

Currently I see no good way to model this with locust. In our current project we are having a lot of trouble working around it: we end up guessing settings just to roughly approximate the RPS that the web app is known to generate.

Expected behavior

An RPS setting is available as an alternative, including a way to guarantee e.g. exactly 1 request per second even when the request duration varies (e.g. a 200 ms response and a 500 ms response should not cause strong variations of the interval).

Actual behavior

I can't find an RPS setting, and longer-running requests seem to make users wait longer instead of allowing some sort of predictable fixed-interval behaviour (which is unrealistic for real users, of course, but not for many automated web clients).
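To illustrate the requested semantics, here is a hypothetical fixed-interval pacing loop (plain Python, not Locust API): by sleeping only for the remainder of the interval, response-time variation does not change the request rate.

import time

INTERVAL = 1.0  # target: exactly 1 request per second

def paced_loop(do_request):
    """Call do_request once per INTERVAL, regardless of its duration."""
    while True:
        start = time.monotonic()
        do_request()  # e.g. takes 200 ms on one iteration, 500 ms on the next
        elapsed = time.monotonic() - start
        # Sleep only for whatever is left of the interval (never negative).
        time.sleep(max(0.0, INTERVAL - elapsed))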

cyberw commented 5 years ago

I have released my code as part of locust-plugins: https://github.com/SvenskaSpel/locust-plugins/blob/master/locust_plugins/tasksets.py

I have also built support for this into my tool for automated distributed locust runs: https://github.com/SvenskaSpel/locust-plugins (basically it just divides the RPS rate equally between all locust processes)
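A minimal sketch of that division (hypothetical, not the plugin's actual code):

def per_process_rps(target_rps, process_count):
    # Each locust process gets an equal share of the global RPS target.
    return target_rps / process_count

# e.g. a 100 RPS target spread over 4 processes gives 25 RPS each:
assert per_process_rps(100, 4) == 25.0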

cyberw commented 4 years ago

Solved by #1118. For global RPS control (as opposed to per-locust control) you still need to have some custom code (like the one provided by locust-plugins), but maybe I can add that to locust itself now that I'm a maintainer...
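For per-user fixed-rate pacing, a minimal sketch using the constant_pacing wait-time helper, which keeps the time between task starts constant (this assumes Locust >= 1.0 and the HttpUser API):

from locust import HttpUser, task, constant_pacing

class PacedUser(HttpUser):
    # Each simulated user starts a task every 1 s, regardless of how long
    # the request itself takes (as long as it stays under the interval).
    wait_time = constant_pacing(1)

    @task
    def index(self):
        self.client.get("/")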

DavideRossi commented 4 years ago

> @dterei Yes, that is correct. We're user centric rather than RPS centric. Locust provides you with a framework to define user behaviour using code, and then you select the number of these simulated users that you want to run against the system you're load testing.
>
> In many cases it's much more relevant to be able to say "our system can handle X simultaneous users" than "our system can handle Y requests/second". That said, it's often quite easy to determine Y as well: just simulate more users until the system under test can no longer handle the load.

I am late to this, but let me say that this is true for websites. Other kinds of platforms can be load-tested too, though. In my case I deal with IoT systems, where a request does not usually depend on the response time of the previous one. There are (several) scenarios in which a constant rate makes perfect sense.

Jasnoor1 commented 4 years ago

I found and implemented 2 ways to generate a fixed RPS count:

1. Control the RPS via a custom wait_function (recommended way):

This wait function produces a stable 100 RPS within the desired range with 100 users. The starting wait time should be 1000 ms.

# Legacy (pre-1.0) Locust API; base_uri and UserScenario are assumed
# to be defined elsewhere in the locustfile.
from locust import HttpLocust, runners

class LoadTests(HttpLocust):
    host = base_uri
    task_set = UserScenario
    # Called before each task to decide how long this user should wait (ms).
    wait_function = lambda self: self.fixed_rps_wait_function(100)

    def __init__(self):
        super(LoadTests, self).__init__()
        self.my_wait = 1000  # start at 1000 ms (100 users * 1 req/s = 100 RPS)

    def fixed_rps_wait_function(self, desired_rps):
        # Nudge the wait time up or down to keep the measured rate
        # within roughly 99.8 - 100.7 RPS.
        current_rps = runners.global_stats.total.current_rps
        if current_rps < desired_rps - 0.2:
            if self.my_wait > 10:  # the minimum wait is 10 ms
                self.my_wait -= 4
        elif current_rps > desired_rps + 0.7:
            self.my_wait += 4
        print("Current RPS: {}".format(current_rps))
        print("Current wait is: {} ms".format(self.my_wait))
        return self.my_wait
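In effect this looks like a simple proportional feedback loop: every scheduling decision nudges the shared wait time by 4 ms toward the target, and the asymmetric dead band (-0.2 / +0.7 RPS) keeps it from oscillating around the set point. The 1000 ms starting value is just the wait that would yield 100 RPS with 100 users if requests took no time at all.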

2. Change the user count during the test run from hooks (master mode only, and hard to adapt to an arbitrary number of slaves):

Hatch and kill users during the test run to reach the desired RPS (here: 100 RPS using 100 users). Hooks:


# Every slave will spin up 2 users.
# The user count and desired RPS should be scaled to the number of slaves.
from locust import events, runners
def on_report_to_master(client_id, data, **kw):
    # Executes before on_slave_report
    # Validate data statistics on slave
    clients_number = runners.locust_runner.num_clients
    hatch_rate = runners.locust_runner.hatch_rate
    print("Clients number: {}".format(clients_number))
    rps_mid = data['stats_total']['num_reqs_per_sec'].values()
    if len(rps_mid) >= 1:
        rpss = list(rps_mid)
        rpss.sort()
        if max(rpss) < 100:
            clients_number += 2
            runners.locust_runner.start_hatching(clients_number, hatch_rate)
            events.hatch_complete.fire(user_count=clients_number)
        if max(rpss) >= 103:
            clients_number -= 1
            runners.locust_runner.start_hatching(clients_number, hatch_rate)
            events.hatch_complete.fire(user_count=clients_number)

def on_slave_report(client_id, data, **kw):
    # Executes after on_report_to_master.
    # Print data statistics on the master.
    rps_number = runners.global_stats.total.current_rps
    clients_number = runners.locust_runner.num_clients
    hatch_rate = runners.locust_runner.hatch_rate
    print("Users number: {}".format(data['user_count']))
    print("Current RPS: {}, clients: {}, hatch rate: {}".format(
        rps_number, clients_number, hatch_rate))

Add the hooks to the locustfile:

# RPS listeners: working only with ONE node, in master mode.
events.report_to_master += on_report_to_master
events.slave_report += on_slave_report

@savvagen Can you please explain the wait_function in more detail? How did you calculate these values?

guwenyu1996 commented 4 years ago

@cyberw I'm trying to use your solution in Locust version 1.2.3. It seems the runners module no longer has the attribute runners.locust_runner. Which version is your code written for?
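For context, a minimal sketch of the equivalent access in Locust >= 1.0, where the runner and stats are reached through the Environment object instead of runners.locust_runner (assumes the init event has fired, so the runner exists):

from locust import events

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    # environment.runner replaces runners.locust_runner,
    # environment.stats replaces runners.global_stats.
    print("Users: {}".format(environment.runner.user_count))
    print("Current RPS: {}".format(environment.stats.total.current_rps))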

cyberw commented 4 years ago

Hi @guwenyu1996! Unfortunately I haven't had time to keep that up to date (and I had some weird issues with the RPS rate). You're on your own for now...