locustio / locust

Write scalable load tests in plain Python 🚗💨
https://locust.cloud
MIT License

"Stop" button doesn't always stop workers #1586

Closed: irvintim closed this issue 4 years ago

irvintim commented 4 years ago

Describe the bug

In my test environment I am running a distributed load test with 3 workers in GUI mode. Intermittently (usually on tests run after the initial test following spin-up of the hosts), when I press the "STOP" button on the master's web interface, the UI says "Stopping" and the worker consoles also report "Stopping", but the number of users never decreases and the load test continues. We have to CTRL-C each worker separately to get the test to stop.

Expected behavior

When the "STOP" button is pressed the load test should stop immediately on all workers (or at least within a few seconds).

Actual behavior

The workers and the GUI report the status as "STOPPING" and the test continues -- the "STOP" button is now gone, so the only recourse is to log on to each worker and stop the test manually.

Output from one of the workers on the command line is: "[2020-10-06 23:12:49,585] locust02.YYYY.XXXX.local/INFO/locust.runners: Stopping 666 users"

Steps to reproduce

On the 3 workers we run this command: locust -f locustfile.py --worker --master-host=172.20.2.254
On the master host we run this command: locust -f locustfile.py --master

On the GUI we set the number of users to 2000 and the spawn rate to 100. (We have also tried 6000 and 300, and 1000 and 100.) The problem is intermittent.

locustfile.py is shown below

Environment

import json
import os
import random
import time

import requests
from locust import HttpUser, task

deviceIdStart = 1000000000000000000
deviceIdRange = 10000
client_id = os.getenv("CLIENTID")
client_secret = os.getenv("CLIENTSECRET")

ballots = ["test"]

candidates = {
    "test": {
        "T1": "test1",
        "T2": "test2",
        "T3": "test3",
        "T4": "test4",
        "T5": "test5",
        "T6": "test6",
        "T7": "test7",
        "T8": "test8",
    }
}


def candidate():
    ballot_id = random.choice(ballots)
    candidate_list = list(candidates[ballot_id].keys())
    candidate_id = random.choice(candidate_list)
    candidate_name = candidates[ballot_id][candidate_id]

    return ballot_id, candidate_id, candidate_name


def device_id():
    dIdS = int(os.getenv("DEVIDSTART", deviceIdStart))
    dIdR = int(os.getenv("DEVIDRANGE", deviceIdRange))
    return random.randrange(dIdS, dIdS + dIdR)


class UserBehavior(HttpUser):
    min_wait = 2000
    max_wait = 9000
    host = os.getenv("TARGET_URL")

    def __init__(self, parent):
        super(UserBehavior, self).__init__(parent)

        self.token = ""
        self.headers = {}
        self.tokenExpires = 0

    def on_start(self):
        self.token = self.login()

        self.headers = {
            "Authorization": "%s %s"
            % (self.token["token_type"], self.token["access_token"])
        }

        self.tokenExpires = time.time() + self.token["expires_in"] - 120

    def login(self):
        """
        Gets the token for the user
        :rtype: dict
        """
        global client_id
        global client_secret

        url = os.getenv("AUTH_URL")
        print("Get token with %s" % url)
        response = requests.post(
            url,
            headers={
                "X-Client-Id": client_id,
                "X-Client-Secret": client_secret,
                "cache-control": "no-cache",
            },
        )
        try:
            content = json.loads(response.content)
            print("Access token: %s" % content.get("access_token"))
            return content
        except:
            print("Error in getToken(): %s" % content.get("error_msg"))
            return None

    @task
    def vote(self):
        if self.tokenExpires < time.time():
            self.token = self.login()
            if self.token:
                self.tokenExpires = time.time() + self.token["expires_in"] - 120
            else:
                print("Unable to get SAT Token")
                return None
        selection = candidate()
        message = {
            "Id": "TEST-P",
            "dId": device_id(),
            "bId": selection[0],
            "sIds": selection[1],
            "sTexts": selection[2],
        }
        response = self.client.post(
            "/api/v1/test?partner=test", message, headers=self.headers
        )

# vim: set fileencoding=utf-8 :
cyberw commented 4 years ago

Hi! Any chance you can try this on the latest master? Could this be related to https://github.com/locustio/locust/issues/1535 ? (@max-rocket-internet fyi)

Is there any heavy code here somewhere that might block gevent from doing its work (switching between greenlets)?

irvintim commented 4 years ago

I will try it on the latest master and will let you know if I see any change. Thanks for the link to the other report -- I don't think it's exactly the same, but the mention of the gevent.sleep(0) that they added gives me an idea of something to try that might help with my problem. I can also temporarily remove the randrange call and see if that has any impact. I'll report back my findings.
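
For reference, a minimal sketch of the kind of blocking being asked about and of the gevent.sleep(0) yield mentioned above; the user class, host value, and busy loop here are hypothetical illustrations, not taken from the locustfile in this issue:

import time

import gevent
from locust import HttpUser, task


class CpuHeavyUser(HttpUser):
    host = "http://example.invalid"  # placeholder host, for illustration only

    @task
    def crunch(self):
        # A tight CPU-bound loop never yields to gevent, so the worker
        # cannot process the master's "stop" message until it finishes.
        deadline = time.time() + 30
        while time.time() < deadline:
            sum(i * i for i in range(10_000))  # stand-in for heavy work
            gevent.sleep(0)  # explicit yield so other greenlets (e.g. the runner) can run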

max-rocket-internet commented 4 years ago

@irvintim it would be great if you could enable debug logging on the master, using the master branch, as I added more logging there to debug this issue.

irvintim commented 4 years ago

@cyberw and @max-rocket-internet I retested with the latest master:

[irvin@ip-172-20-2-53 voting-multi-ballot]$ locust --version
locust 1.2.3

and I haven't been able to reproduce my problem. I didn't change anything else in my test yet (nor have I enabled debug logging yet) -- just trying apples-to-apples, and so far so good. This problem is a bit intermittent, so I'll activate debug logging tomorrow, continue testing, and let you know if I run into an issue.

BTW this is what I used to install from master: sudo pip3 install -e git://github.com/locustio/locust.git@master#egg=locust

Thanks for the help!!

irvintim commented 4 years ago

I am running the latest master branch on the master node and the worker nodes and can't get my initial problem to reoccur. Since this issue has been intermittent in the past I am not ready to declare absolute victory quite yet, but I feel pretty good about it right now.

I ran the master node in debug mode and below is the output -- this was running my test at full bore (6000 users with a spawn rate of 300). This level usually fails to stop fairly often, so the fact that we have now run it a dozen times without the problem is encouraging.

Besides the locust version change (and the debug flag) everything else about the test was identical to when we had this problem in the past.

[irvin@ip-172-20-2-53 voting-multi-ballot]$ locust -f locustfile.py --master --loglevel=DEBUG      
[2020-10-09 18:02:55,142] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2020-10-09 18:02:55,153] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.main: Starting Locust 1.2.3
[2020-10-09 18:02:55,202] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653' reported as ready. Currently 1 clients ready to swarm.
[2020-10-09 18:02:55,206] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9' reported as ready. Currently 2 clients ready to swarm.
[2020-10-09 18:02:55,210] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4' reported as ready. Currently 3 clients ready to swarm.
[2020-10-09 18:03:04,965] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Sending spawn jobs of 2000 users and 100.00 spawn rate to 3 ready clients
[2020-10-09 18:03:04,966] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending spawn message to client ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653
[2020-10-09 18:03:04,966] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending spawn message to client ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9
[2020-10-09 18:03:04,966] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending spawn message to client ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4
[2020-10-09 18:03:04,966] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Updating state to 'spawning', old state was 'ready'
[2020-10-09 18:03:11,221] ip-172-20-2-53.XXX.localus-west-2.compute.internal/WARNING/locust.runners: Worker ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9 exceeded cpu threshold (will only log this once per worker)
[2020-10-09 18:03:11,222] ip-172-20-2-53.XXX.localus-west-2.compute.internal/WARNING/locust.runners: Worker ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653 exceeded cpu threshold (will only log this once per worker)
[2020-10-09 18:03:11,223] ip-172-20-2-53.XXX.localus-west-2.compute.internal/WARNING/locust.runners: Worker ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4 exceeded cpu threshold (will only log this once per worker)
[2020-10-09 18:04:08,917] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Updating state to 'running', old state was 'spawning'
[2020-10-09 18:06:00,969] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Stopping...
[2020-10-09 18:06:00,970] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Updating state to 'stopping', old state was 'running'
[2020-10-09 18:06:00,970] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending stop message to client ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653
[2020-10-09 18:06:00,970] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending stop message to client ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9
[2020-10-09 18:06:00,970] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Sending stop message to client ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4
[2020-10-09 18:06:01,245] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Removing ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9 client from running clients
[2020-10-09 18:06:01,245] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-187.XXX.localus-west-2.compute.internal_b5fde0bdc9d9416ebe69c4a7dcb13da9' reported as ready. Currently 3 clients ready to swarm.
[2020-10-09 18:06:01,288] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Removing ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4 client from running clients
[2020-10-09 18:06:01,289] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-152.XXX.localus-west-2.compute.internal_9ec3e64b85bf4932a15a59b0d37f67d4' reported as ready. Currently 3 clients ready to swarm.
[2020-10-09 18:06:01,717] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Removing ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653 client from running clients
[2020-10-09 18:06:01,717] ip-172-20-2-53.XXX.localus-west-2.compute.internal/DEBUG/locust.runners: Updating state to 'stopped', old state was 'stopping'
[2020-10-09 18:06:01,717] ip-172-20-2-53.XXX.localus-west-2.compute.internal/INFO/locust.runners: Client 'ip-172-20-2-34.XXX.localus-west-2.compute.internal_e14e85acc0024d6a812cbccd79295653' reported as ready. Currently 3 clients ready to swarm.
cyberw commented 4 years ago

Seems to have been fixed in 1.2.3.

max-rocket-internet commented 4 years ago

I think this is not resolved. I had it today with version 1.3.1, after starting and then stopping the test from the UI:

(Screenshot attached: 2020-11-02 15:58:20)

max-rocket-internet commented 4 years ago

Does this make sense? 2 users already running before restarting a test? There should be 0 running, no?

[2020-11-02 15:05:28,877] locust-c=xxx-worker-5f5588b769-94tl4/INFO/locust.runners: Spawning 0 users at the rate 0.1 users/s (2 users already running)...
[2020-11-02 15:05:28,877] locust-xxxx-worker-5f5588b769-94tl4/INFO/locust.runners: All users spawned: AbstractUser: 0, HKUser: 0, SGUser: 0, THUser: 0, TWUser: 0, MYUser: 0, HKUserV2: 0, SGUserV2: 0, THUserV2: 0, TWUserV2: 0, MYUserV2: 0 (2 already running)
cyberw commented 4 years ago

Interesting. Yes, that looks weird. Do you have the full log?

Looks like you have quite a few different user types, maybe one of them is hanging somehow?

Can you share as much as possible of your locust file?

cyberw commented 4 years ago

Oh, and let's keep the discussion in #1535, I think.

zhenhuaplan commented 1 year ago

It seems that this problem still exists in the current version. It happens often when I use Locust: after clicking the STOP button, the state always shows STOPPING. Below is my master's log output.

[2022-12-01 12:10:43,200] srv969220428/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2022-12-01 12:10:43,245] srv969220428/INFO/locust.main: Starting Locust 2.8.4
[2022-12-01 12:10:43,294] srv969220428/INFO/locust.runners: Client 'srv969220428_2bdeee017d6d4fbf8ce76061cc2c6690' reported as ready. Currently 1 clients ready to swarm.
[2022-12-01 12:10:43,296] srv969220428/INFO/locust.runners: Client 'srv969220428_ce2e9d937a144478a9e2760ab58af518' reported as ready. Currently 2 clients ready to swarm.
[2022-12-01 12:10:43,303] srv969220428/INFO/locust.runners: Client 'srv969220428_0150cfd7f59b4f45a0f2f3eb58132a72' reported as ready. Currently 3 clients ready to swarm.
[2022-12-01 12:10:43,335] srv969220428/INFO/locust.runners: Client 'srv969220428_611f6ce743b84365b32bc2c629252be5' reported as ready. Currently 4 clients ready to swarm.
[2022-12-01 12:10:43,347] srv969220428/INFO/locust.runners: Client 'srv969220428_e43b2377b93d4b09b85818fe4fe0558a' reported as ready. Currently 5 clients ready to swarm.
[2022-12-01 12:10:43,480] srv969220428/INFO/locust.runners: Client 'srv969220428_dff0727ceeab400db1892731d709b029' reported as ready. Currently 6 clients ready to swarm.
[2022-12-01 12:10:43,594] srv969220428/INFO/locust.runners: Client 'srv969220428_0c57f876a0be4eb9a5a405ca8660ecd0' reported as ready. Currently 7 clients ready to swarm.
[2022-12-01 12:10:43,601] srv969220428/INFO/locust.runners: Client 'srv969220428_f3659be744c84a73b0dd3c894c660731' reported as ready. Currently 8 clients ready to swarm.
[2022-12-01 12:10:46,364] srv969220428/INFO/locust.runners: Client 'srv91862111_a95135e3c58841c6b195171a6c902d2d' reported as ready. Currently 9 clients ready to swarm.
[2022-12-01 12:10:46,372] srv969220428/INFO/locust.runners: Client 'srv91862111_d17deff1ed0943559e5ed030fc291e97' reported as ready. Currently 10 clients ready to swarm.
[2022-12-01 12:10:46,401] srv969220428/INFO/locust.runners: Client 'srv91862111_44a0695572174f118f1a566d82806e83' reported as ready. Currently 11 clients ready to swarm.
[2022-12-01 12:10:49,721] srv969220428/INFO/locust.runners: Client 'srv8020201013_700b2270942b447abeba767474f04206' reported as ready. Currently 12 clients ready to swarm.
[2022-12-01 12:10:49,722] srv969220428/INFO/locust.runners: Client 'srv8020201013_778ae1bc828f41d3b0573738c4753860' reported as ready. Currently 13 clients ready to swarm.
[2022-12-01 12:10:49,722] srv969220428/INFO/locust.runners: Client 'srv8020201013_6b7943b6b0dd46419dabf9c4dfda13ff' reported as ready. Currently 14 clients ready to swarm.
[2022-12-01 12:10:49,726] srv969220428/INFO/locust.runners: Client 'srv8020201013_16209a44c3f941a4926f6eb2a137f4e8' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:27:36,873] srv969220428/INFO/locust.runners: Sending spawn jobs of 500 users at 20.00 spawn rate to 15 ready clients
[2022-12-01 12:27:55,189] srv969220428/WARNING/locust.runners: Worker srv8020201013_16209a44c3f941a4926f6eb2a137f4e8 exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:27:55,189] srv969220428/WARNING/locust.runners: Worker srv8020201013_778ae1bc828f41d3b0573738c4753860 exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:28:00,193] srv969220428/WARNING/locust.runners: Worker srv8020201013_700b2270942b447abeba767474f04206 exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:28:00,199] srv969220428/WARNING/locust.runners: Worker srv8020201013_6b7943b6b0dd46419dabf9c4dfda13ff exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:28:01,142] srv969220428/INFO/locust.runners: All users spawned: {"ODYSSEY": 500} (500 total users)
[2022-12-01 12:29:07,328] srv969220428/INFO/locust.runners: Removing srv969220428_dff0727ceeab400db1892731d709b029 client from running clients
[2022-12-01 12:29:07,328] srv969220428/INFO/locust.runners: Removing srv969220428_0c57f876a0be4eb9a5a405ca8660ecd0 client from running clients
[2022-12-01 12:29:07,328] srv969220428/INFO/locust.runners: Removing srv969220428_611f6ce743b84365b32bc2c629252be5 client from running clients
[2022-12-01 12:29:07,328] srv969220428/INFO/locust.runners: Client 'srv969220428_dff0727ceeab400db1892731d709b029' reported as ready. Currently 13 clients ready to swarm.
[2022-12-01 12:29:07,329] srv969220428/INFO/locust.runners: Client 'srv969220428_0c57f876a0be4eb9a5a405ca8660ecd0' reported as ready. Currently 14 clients ready to swarm.
[2022-12-01 12:29:07,329] srv969220428/INFO/locust.runners: Client 'srv969220428_611f6ce743b84365b32bc2c629252be5' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,329] srv969220428/INFO/locust.runners: Removing srv969220428_ce2e9d937a144478a9e2760ab58af518 client from running clients
[2022-12-01 12:29:07,329] srv969220428/INFO/locust.runners: Client 'srv969220428_ce2e9d937a144478a9e2760ab58af518' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,330] srv969220428/INFO/locust.runners: Removing srv969220428_0150cfd7f59b4f45a0f2f3eb58132a72 client from running clients
[2022-12-01 12:29:07,330] srv969220428/INFO/locust.runners: Client 'srv969220428_0150cfd7f59b4f45a0f2f3eb58132a72' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,330] srv969220428/INFO/locust.runners: Removing srv969220428_e43b2377b93d4b09b85818fe4fe0558a client from running clients
[2022-12-01 12:29:07,330] srv969220428/INFO/locust.runners: Client 'srv969220428_e43b2377b93d4b09b85818fe4fe0558a' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,332] srv969220428/INFO/locust.runners: Removing srv91862111_a95135e3c58841c6b195171a6c902d2d client from running clients
[2022-12-01 12:29:07,332] srv969220428/INFO/locust.runners: Client 'srv91862111_a95135e3c58841c6b195171a6c902d2d' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,333] srv969220428/INFO/locust.runners: Removing srv91862111_d17deff1ed0943559e5ed030fc291e97 client from running clients
[2022-12-01 12:29:07,333] srv969220428/INFO/locust.runners: Removing srv91862111_44a0695572174f118f1a566d82806e83 client from running clients
[2022-12-01 12:29:07,334] srv969220428/INFO/locust.runners: Client 'srv91862111_d17deff1ed0943559e5ed030fc291e97' reported as ready. Currently 14 clients ready to swarm.
[2022-12-01 12:29:07,334] srv969220428/INFO/locust.runners: Client 'srv91862111_44a0695572174f118f1a566d82806e83' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,334] srv969220428/INFO/locust.runners: Removing srv969220428_2bdeee017d6d4fbf8ce76061cc2c6690 client from running clients
[2022-12-01 12:29:07,334] srv969220428/INFO/locust.runners: Client 'srv969220428_2bdeee017d6d4fbf8ce76061cc2c6690' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,337] srv969220428/INFO/locust.runners: Removing srv8020201013_6b7943b6b0dd46419dabf9c4dfda13ff client from running clients
[2022-12-01 12:29:07,337] srv969220428/INFO/locust.runners: Client 'srv8020201013_6b7943b6b0dd46419dabf9c4dfda13ff' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,337] srv969220428/INFO/locust.runners: Removing srv8020201013_778ae1bc828f41d3b0573738c4753860 client from running clients
[2022-12-01 12:29:07,338] srv969220428/INFO/locust.runners: Removing srv969220428_f3659be744c84a73b0dd3c894c660731 client from running clients
[2022-12-01 12:29:07,338] srv969220428/INFO/locust.runners: Client 'srv969220428_f3659be744c84a73b0dd3c894c660731' reported as ready. Currently 14 clients ready to swarm.
[2022-12-01 12:29:07,338] srv969220428/INFO/locust.runners: Client 'srv8020201013_778ae1bc828f41d3b0573738c4753860' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,339] srv969220428/INFO/locust.runners: Removing srv8020201013_16209a44c3f941a4926f6eb2a137f4e8 client from running clients
[2022-12-01 12:29:07,339] srv969220428/INFO/locust.runners: Client 'srv8020201013_16209a44c3f941a4926f6eb2a137f4e8' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:07,409] srv969220428/INFO/locust.runners: Removing srv8020201013_700b2270942b447abeba767474f04206 client from running clients
[2022-12-01 12:29:07,409] srv969220428/INFO/locust.runners: Client 'srv8020201013_700b2270942b447abeba767474f04206' reported as ready. Currently 15 clients ready to swarm.
[2022-12-01 12:29:08,254] srv969220428/WARNING/locust.runners: Worker srv8020201013_700b2270942b447abeba767474f04206 exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:29:08,260] srv969220428/WARNING/locust.runners: Worker srv8020201013_16209a44c3f941a4926f6eb2a137f4e8 exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:29:08,262] srv969220428/WARNING/locust.runners: Worker srv8020201013_6b7943b6b0dd46419dabf9c4dfda13ff exceeded cpu threshold (will only log this once per worker)
[2022-12-01 12:29:08,267] srv969220428/WARNING/locust.runners: Worker srv8020201013_778ae1bc828f41d3b0573738c4753860 exceeded cpu threshold (will only log this once per worker)