djonsson closed this issue 4 years ago
I found that specifying the number of requests using -n <number> does seem to work. But what I can't find is a direct way of setting a timeout on the main locust thread. I only want to run my locusts for a 2 or 3 hour period, after which all the locusts should be killed and the thread should stop, but there doesn't seem to be a way of doing this. The stop_timeout attribute of HttpLocust just sets a lifetime for individual locusts, not for the main thread.
Is there a way of doing this?
Your locust's attributes are accessible in your task sets via the TaskSet.locust attribute. A possible solution is to add max_session_time and start_time attributes to your Locust, and add code in your task method that checks whether the difference between the current time and start_time exceeds max_session_time, exiting if so.
I've tried this in my locust file and it works.
Can you please share the code for the same
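Not the original poster, but here is a minimal sketch of the approach described above. The names max_session_time and start_time are the illustrative attributes suggested earlier (not part of the Locust API), and the pre-1.0 API (HttpLocust/TaskSet) is assumed; only the small helper at the top is real, runnable code:

```python
import time

def session_expired(start_time, max_session_time):
    """Return True once the elapsed wall-clock time reaches the limit."""
    return time.time() - start_time >= max_session_time

# How it would plug into a pre-1.0 locustfile (sketch only; the attribute
# names are illustrative, not part of the Locust API):
#
# class UserBehavior(TaskSet):
#     @task
#     def index(self):
#         if session_expired(self.locust.start_time,
#                            self.locust.max_session_time):
#             self.interrupt()  # or raise StopLocust to kill this locust
#         self.client.get("/")
#
# class WebsiteUser(HttpLocust):
#     task_set = UserBehavior
#     start_time = time.time()          # recorded when the file is loaded
#     max_session_time = 2 * 60 * 60    # two hours, in seconds
```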
locust -H <my_url> -f <my_locust_file> -c 120 -n 10000 --no-web --only-summary
With this command line I'm seeing what sounds a bit like the original question in this issue. The process never ends, and also never shows the final results, until I hit Ctrl+C (which obviously is not at my disposal when running in Docker or otherwise in CI). I'm happy to open a new issue, but this seemed similar enough.
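One workaround for CI (a sketch, not a fix for the underlying hang): wrap the run in coreutils timeout so the job is bounded even if Locust never exits on its own. timeout exits with status 124 when it had to kill the command:

```shell
# Hard-limit the run to 2 hours; <my_url>/<my_locust_file> as in the command above.
timeout 2h locust -H <my_url> -f <my_locust_file> -c 120 -n 10000 --no-web --only-summary
status=$?
if [ "$status" -eq 124 ]; then
    echo "locust did not finish on its own and was killed by timeout" >&2
fi
```

The downside is that a killed run also loses the final summary, so this only guards the CI pipeline, it doesn't replace a proper stop condition.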
@sverhagen, when extending the Locust class, override the stop_timeout attribute (it's in seconds), or set the --num-request option on the command line.
I have used the -n
I saw this issue in master-slave mode with v0.7.5, and I cope with it using the following:
# Imports this snippet relies on. LocustEx (our Locust subclass) and
# ShutDownException are custom objects defined elsewhere in our project;
# console_logger comes from locust.log in the 0.7.x line.
import gc
import logging

import gevent
import requests
from greenlet import greenlet
from locust import events, runners, web
from locust.log import console_logger

logger = logging.getLogger(__name__)

web_base = 'http://localhost:8089'


def _counter():
    total_reqs = 0
    total_failures = 0
    stats = runners.locust_runner.request_stats
    for key in sorted(stats.iterkeys()):
        r = stats[key]
        total_reqs += r.num_requests
        total_failures += r.num_failures
    return total_reqs, total_failures


def die_on_stat_analysis(client_id, data):
    """
    Only master handles this.
    Currently each slave runs max_request independently.
    """
    total_reqs, total_failures = _counter()
    max_requests = LocustEx.max_requests
    if max_requests is not None:
        if total_failures + total_reqs >= max_requests:
            msg = 'Stopping on max_requests={}/{} reached'.format(
                total_failures + total_reqs, max_requests)
            logger.warn(msg)
            poison_pill(msg)

events.slave_report += die_on_stat_analysis


def poison_pill(msg=''):
    try:
        console_logger.warn("Taking poison pill, due to {}".format(msg))
        gevent.sleep(10)  # let reports complete their job
        requests.post(web_base + '/shutdown', data=msg)
    except Exception as e:
        console_logger.error("Pill caused {}-burp".format(e))
        raise e


@web.app.route("/shutdown", methods=["POST"])
def shutdown():
    """
    Extremely rude way to stop itself
    """
    from locust.web import logger
    logger.warn('Shutting down all')
    # custom code here for collecting stats
    runners.locust_runner.stop()
    if LocustEx.is_master:
        runners.locust_runner.quit()
    try:
        # stopping flask's 'serve_forever'
        raise ShutDownException('Shutdown is requested')
    finally:
        # Hackery:
        # To return control, need to kill the rest of the greenlets (shrug).
        gevent.sleep(5)
        gevent.killall([obj for obj in gc.get_objects()
                        if isinstance(obj, greenlet)])
very rude, but very robust
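In case it's useful, the /shutdown route registered above can also be triggered by hand (a sketch; assumes the master's web UI is on the default port 8089):

```shell
# Ask the custom endpoint to shut everything down, passing a reason string.
curl -X POST --data "manual stop from CI" http://localhost:8089/shutdown
```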
Hi all!
Any update on the issue? I can reproduce this, master and slave nodes are not quitting on reaching max number of requests.
Dockerfile:
# partially created from christianbladescb/locustio
FROM python:3
RUN pip install locustio pyzmq && \
mkdir /locust
WORKDIR /locust
ONBUILD ADD . /locust
ONBUILD RUN test -f requirements.txt && pip install -r requirements.txt; exit 0
EXPOSE 8089 5557 5558
requirements.txt
pathlib==1.0.1
docker-compose:
version: '3'
services:
  locust-master:
    build: &common-image
      context: .
    hostname: locust-master
    env_file: &common-env
      - locust.env
    volumes: &common-volumes
      - ./:/locust
    ports:
      - 8089:8089
      - 5557:5557
      - 5558:5558
    command:
      - locust
      - -c 10
      - -r 10
      - -n 5
      - --no-web
      - --only-summary
      - --expect-slaves=1
      - --master
      - --csv=requests
  locust-slave:
    build:
      context: .
    env_file:
      - locust.env
    volumes:
      - ./:/locust
    command:
      - locust
      - --slave
      - --master-host=locust-master
      - --master-port=5557
execute with docker-compose.exe -f .\docker-compose.locust.yml -p locust up --force-recreate
Hi @djonsson (Hej Daniel! :) ) As the support for -n has been removed I think this is no longer an issue. But I'm hoping we will reintroduce -n (https://github.com/locustio/locust/issues/1085) so I'm leaving the ticket open in case we do that and want to look at this then.
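For anyone finding this thread later: newer Locust versions (0.9+, if I remember correctly) have a built-in wall-clock limit, -t/--run-time, which covers the original "run for 2-3 hours then stop" use case without any custom code. A sketch (flag names from the 0.9-era docs, where --run-time had to be combined with --no-web):

```shell
# Stop the entire test and print the summary after two hours.
locust -f locustfile.py --no-web -c 120 -r 10 --run-time 2h --only-summary
```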
@cyberw Hey (Lars!), thanks for looking into this :)
I'm going to go ahead and close this. I'm less convinced that we'll put the -n flag back in (as it was before, anyway), so keeping this open doesn't make much sense.
We are using Locust in a continuous integration environment, spawning load tests through Jenkins, and we have noticed that on occasion the running thread never stops even though the number of requests should have been fulfilled.
The machine that runs the test is stuck in an output loop, printing the last line of the log over and over again indefinitely (no increased network traffic or CPU usage is seen on the machine), and never reaching the summary of the test.
We have built a lot of customization around Locust, but if I remember correctly this behavior is found in 'vanilla locust' as well.
Is anyone else experiencing these issues with Locust or is it just us?