locustio / locust

Write scalable load tests in plain Python 🚗💨

Setup/teardown hooks #59

Closed: klausbrunner closed this 6 years ago

klausbrunner commented 11 years ago

It'd be extremely useful to have dedicated setup and teardown functionality in Locust (or if there is something like this already, to have it documented).

My rough idea would be:
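Something along these lines, where the hook names are just hypothetical placeholders rather than an existing API:

from locust import Locust

class MyLocust(Locust):
    def on_setup(self):
        # hypothetical hook: run once before the load test starts,
        # e.g. to ingest test data and remember any generated IDs
        pass

    def on_teardown(self):
        # hypothetical hook: run once after the test stops,
        # e.g. to delete the ingested data again
        pass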

Thoughts? (Have I missed something that already exists?)

heyman commented 10 years ago

Hi!

I'm really sorry that you haven't gotten a reply until now.

There's currently no functionality that does exactly what you describe. When I've been in need of some setup code for my locust tests, I've been putting it at the module level of my test scripts. But that's not run every time a test is started/stopped, and there's no teardown.

I'm curious if you have, or have had, a specific use-case where this was needed (I'm sure there are such cases, I would just like to hear about it).

klausbrunner commented 10 years ago

In my case it's an application that needs to ingest test data (some of which is randomly selected) before running meaningful load tests, and then make sure it's gone afterwards to avoid bloating the data store. Setup can be a fairly lengthy process.

Sure, you can call external scripts before and after test runs, but that's quite inconvenient, especially if the teardown phase needs to know things from the setup phase (e.g. generated IDs). And it just makes sense to have a self-contained test package instead of a bunch of different things glued together.

(I'm no longer using locust, but the lack of a good setup/teardown facility is one reason that made me switch to another solution.)

GeoSpark commented 10 years ago

I too need this functionality - I am creating and reading files through a RESTful interface, and some clean-up between runs is needed. I figured it needed a companion to the on_start() function, rather than an event, so I have overridden TaskSet::run() and captured the GreenletExit exception:

from gevent import GreenletExit
from locust import TaskSet

class MyTaskSet(TaskSet):
    def run(self, *args, **kwargs):
        try:
            super(MyTaskSet, self).run(*args, **kwargs)
        except GreenletExit:
            # the locust greenlet is being killed:
            # call an optional on_stop() hook before re-raising
            if hasattr(self, "on_stop"):
                self.on_stop()
            raise

Of course the ideal solution would be to put my exception-handling code in the relevant place in core.py. I could put together a patch or a pull request if you want.

sfitts commented 10 years ago

I'd also like this functionality -- in my case in order to create and then delete user accounts (using a single dummy account isn't possible in my case). I'm doing the user creation in the __init__ of my Locust class, and that works fine; what I don't have is the corresponding cleanup. For that, I think I'd like an EventHook called on test stop, but other solutions could work as well.

daubman commented 10 years ago

For cleanup we've used the quitting event, which might also work for you if you don't mind quitting rather than just stopping. (It depends on how much cleanup you really need: if you just persistently track created things for the entire runtime, then cleanup on quit might be fine.) We do something like:

import threading
import functools

from locust import Locust, events

QUIT_HANDLED = False
quit_lock = threading.Lock()

def _quit(client, delete_sessions):
    # double-checked lock so the cleanup only runs once,
    # even if the quitting event fires more than once
    global QUIT_HANDLED
    if not QUIT_HANDLED:
        with quit_lock:
            if not QUIT_HANDLED:
                QUIT_HANDLED = True
                #...cleanup code here

#...actual code

class APIUser(Locust):
    task_set = APILikeTaskDistribution

    #         min  sec  ms
    min_wait = 30 * 1000
    avg_wait = 2 * 60 * 1000
    max_wait = 5 * 60 * 1000

    def __init__(self):
        super(APIUser, self).__init__()
        events.quitting += functools.partial(_quit, self.client, True)

But I agree, a more uniform/easy approach to teardown (that works on stop and not just quit) would be a nice feature.

sfitts commented 10 years ago

Thanks -- I looked at the quitting event and will likely use it as you suggest. Nice to know that it works for someone else.

That will work fine for the actual deployed version of the tests, since we'll shut down after the run. For development, having something at the test level would be more convenient (and there may be other cases where quitting won't work).

sfitts commented 10 years ago

On a somewhat related note, does anyone have a technique for performing per-user work (aka per-locust) that must be done before the locust should be considered fully hatched? I tried putting this in the __init__ of the locust (as described above), but that doesn't work since the locusts are constructed asynchronously. I need this for a couple of reasons:

Ideally I'd like to set the locusts off in groups of N, with an N-second pause between groups. The count of clients and the spawn rate sound like they could do this, but they don't really. Instead, each client is created and starts running (with no real difference between init work and task-running work), and there is an M-second pause between starting each client (where M is clients/spawn rate).

heyman commented 10 years ago

@sfitts: One slightly hacky solution to that would be to acquire a semaphore that you release at the locust.events.hatch_complete event, and wait for that semaphore when the locusts/tasksets start.

Here's a working example:

from locust import HttpLocust, TaskSet, task, events

from gevent.coros import Semaphore

# Acquire the semaphore up front, so that on_start() blocks
# until hatch_complete fires and releases it.
all_locusts_spawned = Semaphore()
all_locusts_spawned.acquire()

def on_hatch_complete(**kw):
    all_locusts_spawned.release()

events.hatch_complete += on_hatch_complete

class UserTasks(TaskSet):
    def on_start(self):
        all_locusts_spawned.wait()

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    host = "http://127.0.0.1:8089"
    min_wait = 2000
    max_wait = 5000
    task_set = UserTasks

One caveat though. If you're running Locust distributed, there's still a possibility for some requests to happen before all locusts have hatched. That's because there's no synchronisation of the hatch_complete events between the slaves, so if one machine is much slower for some reason, it might lag behind in spawning its locust instances.

Also, since there is no event to listen for when the test stops, there's no easy way of re-acquiring the semaphore once the test has stopped. Since there's clearly a need for it, we should add starting and stopping events in the next release of Locust.

sfitts commented 10 years ago

@heyman: Thanks for the suggestion and the time putting together the example. I'm not expecting any kind of distributed coordination, just need to throttle things on a local basis. So something along these lines should work well.

mwildehahn commented 10 years ago

I'm also looking to support ingesting test data that can be referenced when executing a task.

I have a Django app with various models/factories. I'm planning on writing a script that will generate the models I need for the load test within the Django app. My plan is then to adjust the locust runner to take an "initial_data" argument which can be referenced within the tasks. If the master were passed this information, it could also send it along to the slaves when sending the hatch event.

Is there some other way that I can do that currently? Does that seem like a reasonable extension to the current architecture?
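
One approach that works today (as heyman noted above) is to do the setup at the module level of the locustfile, so it runs once per process when the file is imported. A rough sketch, where generate_test_users() is a hypothetical stand-in for the Django factory calls:

import random

from locust import HttpLocust, TaskSet, task

def generate_test_users(count):
    # hypothetical stand-in for the Django factory calls; in the real
    # script this would create model instances and return their IDs
    return list(range(count))

# Module level: runs once per process (master and each slave)
# when the locustfile is imported.
TEST_USER_IDS = generate_test_users(100)

class UserTasks(TaskSet):
    @task
    def view_user(self):
        self.client.get("/users/%d" % random.choice(TEST_USER_IDS))

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000
    max_wait = 5000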

shawngustaw commented 8 years ago

This issue is pretty old and I'm looking for something along the lines of what's been discussed. Has there been any progress?

samjiks commented 8 years ago

Any progress on this would be great.

mohanraj-r commented 8 years ago

on_start() can be used as a setup, I guess? And I'm not sure whether events.quitting can be used to create a hook that acts as a teardown. It would be nice to have an on_stop() that can be defined similarly to on_start().
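
For illustration, a rough sketch of that combination (the endpoint paths and cleanup logic are just placeholders):

from locust import HttpLocust, TaskSet, task, events

def on_quitting(**kwargs):
    # acts as a global teardown: runs once when locust quits
    print("cleaning up test data...")

events.quitting += on_quitting

class UserTasks(TaskSet):
    def on_start(self):
        # acts as a per-locust setup, run when each locust starts
        self.client.post("/login", {"username": "test", "password": "test"})

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000
    max_wait = 5000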

swordmaster2k commented 7 years ago

+1 for an on_stop() feature. I have some custom websockets started on their own greenlet, and having an on_stop handler would enable me to tear them down gracefully.

rmandar16 commented 7 years ago

+1 for the on_stop() feature... I have some common teardown tasks to be executed.

josh-cain commented 7 years ago

Another +1 for on_stop! It would be immensely helpful.

Jim-Lambert-Bose commented 6 years ago

Another +1 for on_stop()

jdabello commented 6 years ago

+1000

ad34 commented 6 years ago

+1

aldenpeterson-wf commented 6 years ago

This was addressed in https://github.com/locustio/locust/pull/658 and will be released in the next release of Locust! 🎉
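
For reference, the hooks added there look roughly like this (a sketch based on that PR: setup/teardown run once, while on_start/on_stop run for each hatched locust):

from locust import HttpLocust, TaskSet, task

class UserTasks(TaskSet):
    def setup(self):
        # runs once for the TaskSet class when the test starts
        pass

    def teardown(self):
        # runs once when the test is stopped or locust quits
        pass

    def on_start(self):
        # runs for each hatched locust before it executes tasks
        pass

    def on_stop(self):
        # runs for each locust when the test is stopped
        pass

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000
    max_wait = 5000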

ad34 commented 6 years ago

Awesome :) I am currently testing a websocket-based title and this will help a lot, because stopping the test doesn't close the websockets.