ramgiteng opened this issue 2 years ago
You have specified an arrivalRate, which tells the runners how many users you want to try and simulate arriving on your site per second.
https://www.artillery.io/docs/guides/guides/test-script-reference#constant-arrival-rate
If you want a fixed number of arrivals, I would try using arrivalCount instead.
Thanks @GeekyDeaks. Just curious - I thought the configuration duration: 30, arrivalRate: 1, and rampTo: 3 would ramp up to 3 users over 30 seconds.
Or does the config mean that 1 VU will be created every second (= 30 VUs in total) for the specified duration? I will try out what you suggested in the meantime and get back.
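For what it's worth, my mental model of a ramp phase (a sketch of my reading of the docs, not Artillery's actual code; the function name is just for illustration) is that arrivalRate is the starting rate in VUs per second, rampTo is the rate reached at the end of the phase, and the total VU count is the area under that linear ramp - so neither "3 in total" nor "30 in total":

```javascript
// Sketch (not Artillery's implementation): assuming the per-second rate
// ramps linearly from arrivalRate to rampTo over the phase, the total
// number of VUs is the area under the ramp.
function totalArrivals(duration, arrivalRate, rampTo) {
  let total = 0;
  for (let t = 0; t < duration; t++) {
    // linear interpolation of the per-second rate across the phase
    const rate = arrivalRate + (rampTo - arrivalRate) * (t / (duration - 1));
    total += rate;
  }
  return Math.round(total);
}

// duration: 30, arrivalRate: 1, rampTo: 3 -> 60 VUs in total:
// the rate ramps from 1/s to 3/s, averaging ~2/s over 30 seconds.
console.log(totalArrivals(30, 1, 3));
```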
I find the terminology a bit confusing, to be honest. Those two links I provided above have a short description and config which explain the options in a way that made sense to me. There is still something not quite right though, as I would have expected the number of vusers to increase from 10 per interval in your test with the settings you have shown, but that doesn't seem to be the case.
@GeekyDeaks Yes, some confusion with rampTo as well. I had a question below regarding this. What is rampTo actually supposed to do when coupled with arrivalRate as shown below?
phases:
  - duration: 10
    arrivalRate: 1
    rampTo: 50
Does this mean that by the end of the duration (10 seconds), 50 VUs will have been created?
Did you spot the section below the rampTo example in the documentation? https://www.artillery.io/docs/guides/guides/test-script-reference#how-do-ramps-work
It shows what rampTo is equivalent to in individual phases.
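The equivalence described there can be sketched roughly like this (my paraphrase of the docs, not Artillery's actual code; the one-step-per-rate-level sizing is an assumption): a ramp phase behaves like a series of short constant-arrivalRate phases, stepping the rate up one level at a time.

```javascript
// Sketch of the "how do ramps work" equivalence: expand a ramp phase
// into a series of constant-rate sub-phases (an illustration, not
// Artillery's implementation).
function expandRamp(duration, arrivalRate, rampTo) {
  const steps = rampTo - arrivalRate + 1;   // one step per rate level
  const stepDuration = duration / steps;    // seconds spent at each rate
  const phases = [];
  for (let rate = arrivalRate; rate <= rampTo; rate++) {
    phases.push({ duration: stepDuration, arrivalRate: rate });
  }
  return phases;
}

// duration: 10, arrivalRate: 1, rampTo: 50 -> 50 sub-phases of 0.2s each
console.log(expandRamp(10, 1, 50).length);
```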
Yes I did; however, when I tried the config.yml with "arrivalRate = 0", artillery threw an error saying "This should not happen".
I found these two links as well that dive a little deeper into "rampTo". Please check them out.
https://gitter.im/shoreditch-ops/artillery?at=5a131839cc1d527f6b93fa75 https://stackoverflow.com/questions/58099805/artillery-how-exactly-does-rampto-work-in-artillery
I tried out various permutations of rampTo and arrivalRate, and the results are somewhat inconsistent. Will get back once I try them out again after some time.
Yes I did, however when I tried the config.yml with "arrivalRate = 0", artillery threw an error saying "This should not happen".
That's pretty strange as the latest code should check for undefined not falsey:
I'd be inclined to look in node_modules and set some break points to check what is going on as the diags you posted at the start seem to indicate that it's not behaving correctly. I'll be running some load tests next week and I'll check the rampTo is working as expected at the same time
Ok, I took another look this morning and the "This should not happen" error is because of this line incorrectly checking for falsey:
Also, rampTo does not appear to be honoured. Using the config in your original post, I get the following from the phases debug logs:
$ DEBUG=phases npx artillery run -q ramptest.yaml
⠦ 2022-05-12T06:15:26.568Z phases rampTo: tick = 1000ms; r0 = 1; periods = 1; ticksPerPeriod = 30; period length = 30s
⠸ 2022-05-12T06:15:57.569Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:15:57.569Z phases done: ticksElapsed = 31; currentRate = 1; rampTo = 1
This also aligns with what you originally posted as your rate seemed to be stuck at 10 (i.e. 1 per second)
Not sure if your original request is sorted with the arrivalCount and you maybe want to raise another ticket for the rampTo?
Ok, one last thing of note. Things behave much better when using larger values for rampTo:
$ DEBUG=phases npx artillery run -q ramptest.yaml
⠴ 2022-05-12T06:38:05.147Z phases rampTo: tick = 200ms; r0 = 1; periods = 5; ticksPerPeriod = 30; period length = 6s
⠙ 2022-05-12T06:38:11.356Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:38:11.356Z phases update: currentRate = 2 - p = 0.4
2022-05-12T06:38:11.356Z phases expecting ~12 arrivals before updating again
⠧ 2022-05-12T06:38:17.548Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:38:17.549Z phases update: currentRate = 3 - p = 0.6
2022-05-12T06:38:17.549Z phases expecting ~18 arrivals before updating again
⠹ 2022-05-12T06:38:23.751Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:38:23.751Z phases update: currentRate = 4 - p = 0.8
2022-05-12T06:38:23.751Z phases expecting ~24 arrivals before updating again
⠇ 2022-05-12T06:38:29.947Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:38:29.948Z phases update: currentRate = 5 - p = 1
2022-05-12T06:38:29.948Z phases expecting ~30 arrivals before updating again
⠸ 2022-05-12T06:38:36.149Z phases ticksElapsed: 31; upping probability or stopping
2022-05-12T06:38:36.149Z phases done: ticksElapsed = 31; currentRate = 5; rampTo = 5
It looks like when you use values that are less than the number of workers, combined with a very short test time, odd things happen with the distribution calculations. This appears to be because you need an integer number of arrivals - you cannot have something like 0.7 people arriving at your site.
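The per-tick numbers in the debug logs above seem to fit this reconstruction (a sketch of my reading of the logs, not Artillery's actual implementation): with a 200ms tick, a rate of 2 VUs/sec becomes a per-tick spawn probability p = 0.4, and over a 30-tick (6s) period you *expect* ~12 arrivals - since you can't have 0.7 of a person arrive, each tick spawns a whole VU with probability p.

```javascript
// Sketch reconstructing the "p" and "expecting ~N arrivals" values
// visible in the phases debug logs (not Artillery's actual code).
function expectedArrivals(ratePerSec, tickMs, ticksPerPeriod) {
  const p = ratePerSec * (tickMs / 1000);       // per-tick spawn probability
  return { p, expected: ticksPerPeriod * p };   // expected arrivals per period
}

console.log(expectedArrivals(2, 200, 30));  // { p: 0.4, expected: 12 }
```

This matches the log lines above: currentRate = 2 gives p = 0.4 and ~12 arrivals, currentRate = 3 gives p = 0.6 and ~18, and so on.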
Ok, I took another look this morning and the "This should not happen" error is because of this line incorrectly checking for falsey:
Oh I see what you mean. Thanks for that crucial point. I sort of thought so myself at the start.
So does that mean that for the following config.yml, _we would not get an integer number of arrivals_, because Artillery will try to split the ramping of 100 threads amongst a 50 second interval? phases:
However, for the following config, it would distribute the spawning of each VU equally (2 per second for 50 seconds = 100 threads by the end of the duration).
Would you say that my above assessment is right?
Not sure if your original request is sorted with the arrivalCount and you want to maybe raise another ticket for the rampTo?
Regarding arrivalCount, this works as expected with a config similar to the one below: phases:
So approximately every 5 seconds, one thread is created, which means arrivalCount works fine.
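My understanding of arrivalCount (a sketch of my reading of the docs, not Artillery's code) is that the fixed number of arrivals is spread evenly over the phase; the duration and arrivalCount values below are hypothetical, chosen to match the "one thread roughly every 5 seconds" behaviour I observed:

```javascript
// Sketch: with arrivalCount, the spacing between VUs is simply the
// phase duration divided by the number of arrivals (my understanding).
function secondsBetweenArrivals(duration, arrivalCount) {
  return duration / arrivalCount;
}

// hypothetical values: duration: 50, arrivalCount: 10 -> one VU every 5s
console.log(secondsBetweenArrivals(50, 10));
```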
However, I would like to model a workload where more than 1 thread is created at any point in time, ramping up with the help of the arrivalRate and rampTo parameters. In other words, I'm trying to get this working https://docs-nine-inky.vercel.app/docs/guides/guides/test-script-reference#ramp-up-rate
Is arrivalRate always defined in terms of number of VUs per second? Let me try out what you suggested and get back to you.
Is arrivalRate always defined in terms of number of VU's per second?
That is my understanding, but I'm just a simple user so take that with a pinch of the proverbial :D
From what I can tell, the load requested is distributed as evenly as possible amongst a set of workers. The worker count (thread count) seems to be decided based on your system; for example, mine starts 7 workers. If you take a look at https://github.com/artilleryio/artillery/blob/master/lib/dist.js you can see the code that distributes the load amongst the workers.
With the config:
  duration: 50
  arrivalRate: 1
  rampTo: 100
I get the following distribution (I just dumped the newScripts):
[
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 1,
"rampTo": 15
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 15
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 14
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 14
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 14
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 14
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
},
{
"config": {
"target": "http://www.lambdatest.com",
"processor": "./SeleniumScript.js",
"phases": [
{
"duration": 50,
"arrivalRate": 0,
"rampTo": 14
}
],
"statsInterval": 30
},
"scenarios": [
{
"flow": [
{
"function": "navigateToLambdaTest"
}
]
}
]
}
]
So I would expect one primary worker to start at 1 arrival per second and each of the additional workers to start at 0 arrivals per second and slow ramp up to 14 or 15 over the 50 second period (i.e. at the end of the 50 seconds, each worker is setting off 14 or 15 async requests per second)
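The 15/15/14/14/14/14/14 split of rampTo: 100 (and the 1/0/0/0/0/0/0 split of arrivalRate: 1) in the dump above is consistent with a simple floor-plus-remainder division. A sketch of that (my guess at what dist.js is doing from the dumped output, not the actual code):

```javascript
// Sketch: split a rate n amongst a number of workers, giving each
// worker floor(n / workers) and spreading the remainder one each to
// the first few workers (reconstruction, not Artillery's dist.js).
function distribute(n, workers) {
  const base = Math.floor(n / workers);
  const remainder = n % workers;
  return Array.from({ length: workers }, (_, i) =>
    i < remainder ? base + 1 : base);
}

// 7 workers, rampTo: 100 -> [ 15, 15, 14, 14, 14, 14, 14 ]
// 7 workers, arrivalRate: 1 -> [ 1, 0, 0, 0, 0, 0, 0 ]
console.log(distribute(100, 7), distribute(1, 7));
```

Note how the second split is exactly what produces the arrivalRate: 0 workers discussed below.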
Unfortunately, this doesn't seem to be the case. All the workers with arrivalRate: 0 are getting culled by this
so on my system I end up with one worker ramping from 1 arrival per second to 15 arrivals per second over the 50 second test run. This gives me about 400 arrivals over the whole test run, where it should be more like 2,500 (i.e 1+3+5+7+9+11......+97+99)
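A quick check of that arithmetic (assuming, as above, that the rate steps up by 2 per second from 1 to 99 over the 50 second ramp):

```javascript
// The odd numbers 1 + 3 + 5 + ... + 99 sum to 50^2 = 2500, roughly
// six times the ~400 arrivals actually observed during the test run.
const perSecondRates = Array.from({ length: 50 }, (_, t) => 1 + 2 * t);
const totalExpected = perSecondRates.reduce((sum, r) => sum + r, 0);
console.log(totalExpected);  // 2500
```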
I think this is a bug. I'll try and gather more details when I get time and if it is, I'll raise a ticket and PR to fix it
I raised a PR to fix this #1443 - as it only really affects tests that ramp from a very low arrivalRate, it's not likely to affect the sustained part of any serious loadtest.
Sure, thanks for the help @GeekyDeaks. I will try out a couple scenarios and get back as well.
@GeekyDeaks Are you suggesting based on your findings that I should keep both the arrivalRate and rampTo values large?
I would probably suggest that you keep the arrivalRate at least as high as the number of workers (processes) that will be launched. Unless you override it, the default value is set to os.cpus().length - 1. If you do that, then the rampTo should be correctly honoured.
Thank you @GeekyDeaks. I'll try it out and get back. By the way, is there a new version of Artillery being released anytime soon with the fix?
Looks like it was just released @ramgiteng
https://github.com/artilleryio/artillery/releases/tag/v2.0.0-17
Thank you! I'll check it out.
I get the following distribution (I just dumped the newScripts)
@GeekyDeaks How do you get the distribution from running the command? I tried running it with --output but it didn't write to it.
This is my command: artillery run config.yml
Ah, IIRC I directly modified node_modules/artillery/lib/dist.js to dump the value of newScripts.
Hello,
Thanks for this tool!
I am trying to perform a load test on a web application via Chrome browsers with the help of Selenium and NodeJS. The purpose of using Artillery for my use case is to perform a ramp up schedule of triggering Chrome browsers in a Workload model.
Use Case:
For example: to ramp up to 3 Chrome browsers in 30 seconds. In my Selenium code below, I am instructing Selenium to go to lambdatest.com, capture the text of a header on the homepage, and return it. In my YML file, I configured rampTo: 3. The expected output is that I only see 3 instances of the output in the console. However, I am seeing numerous instances. Please also refer to "Expected Output" and "Actual Output" below.
Selenium Code
YAML file
Expected Output
Actual Output on Console:
Can you please help troubleshoot this (as to why I am seeing more than 3 instances of the output)? Do I have to make any changes to the .yml files?
Please let me know if you need any further details. Thank you!