Closed MattFisher closed 7 years ago
Hi Matt,
With a constantly increasing load due to a constant hatch rate, the server doesn't have time to steady-state and produce stable statistics.
This is the reason why Locust will reset the statistics after the hatching phase is completed. I'm not sure why stepped hatching would make any difference in this regard.
I would like to be able to set up a test run that will gradually increase load until the server hits its limits. At the moment, finding the maximum capacity of the server takes multiple sequential test runs with different numbers of locusts. Stepped hatching would allow a single test run to provide multiple loads in a single unattended test, sequentially, with time for the server to steady-state in-between. For reference, I'm using New Relic to monitor the results so I'm not focussed on extracting the statistics from Locust.
There was a 'ramping' plugin in Locust some time ago (since removed) that tried to do more or less what you describe (if I understand correctly) by looking at response times - it could be updated to work with current master. But if you'd rather drive the ramping from other metrics (e.g. New Relic metrics) that describe the load on your system, there's nothing to stop you from writing a script that would:
a) call the Locust HTTP API to add load
b) wait a bit, then check the load (or whatever metric you're interested in) on your system
c) go to a) if the system can still handle more load
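A minimal sketch of that loop, with the "add load" and "check metric" calls abstracted into callables so it isn't tied to any particular API (the function name, step size, and threshold below are illustrative assumptions, not Locust's documented interface):

```python
import time
from typing import Callable

def ramp_until_saturated(set_load: Callable[[int], None],
                         get_metric: Callable[[], float],
                         metric_limit: float,
                         step: int = 50,
                         settle_seconds: float = 0.0) -> int:
    """Increase load in steps until the observed metric exceeds the limit.

    set_load(n)  -- e.g. a request to Locust's web UI to swarm n users
    get_metric() -- e.g. a New Relic query for response time or CPU
    Returns the last load level at which the system stayed under the limit.
    """
    load = 0
    while True:
        load += step                      # a) add load
        set_load(load)
        time.sleep(settle_seconds)        # b) wait for the server to steady-state
        if get_metric() > metric_limit:   # c) stop once the system is saturated
            return load - step

# Stub example: the metric grows linearly with load, limit is exceeded above 200.
levels = []
print(ramp_until_saturated(levels.append, lambda: levels[-1] * 1.0, 200))  # -> 200
```

In a real run, `set_load` would call the load generator and `get_metric` would poll the monitoring system, with `settle_seconds` long enough for statistics to stabilise at each step.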
@justiniso Can this be re-opened?
I am also curious about how to provide a fractional hatch rate, so that fewer than 1 VU is hatched per second (often, much fewer, like maybe 5 new VUs spread out over 1 minute).
You can see, e.g. in bzt, that concurrency divided by the ramp-up time is always interpreted as the hatch rate, and the math.ceil means it cannot be less than 1.
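To illustrate the clamping described above (a sketch; the function name is an assumption, only the `math.ceil` behaviour comes from the comment):

```python
import math

def hatch_rate(concurrency: int, ramp_up_seconds: int) -> int:
    """Hatch rate as some tools compute it: ceil of users per second,
    which can never drop below 1 once concurrency > 0."""
    return math.ceil(concurrency / ramp_up_seconds)

# 5 users over 60 s: the true rate is ~0.083/s, but ceil clamps it to 1/s
print(hatch_rate(5, 60))    # -> 1
print(hatch_rate(120, 60))  # -> 2
```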
I am interested in whether someone has found an answer or some workaround for this feature. It would be useful.
Found out that the hatch rate may be set to a float value, so to get one user per 10 seconds, a hatch rate of 0.1 can be set. Maybe it will be helpful for somebody.
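Spelling out the arithmetic behind that workaround (nothing Locust-specific here, just the rate as users divided by ramp time):

```python
def fractional_hatch_rate(users: int, ramp_seconds: float) -> float:
    """Users spawned per second; values below 1.0 spread spawns out over time."""
    return users / ramp_seconds

print(fractional_hatch_rate(1, 10))   # -> 0.1 (one user every 10 seconds)
print(fractional_hatch_rate(5, 60))   # ~0.083 (5 users spread over a minute)
```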
Hey guys, I'm also very interested in this feature request. It would allow you to create more realistic tests. Can we re-open it?
With a constantly increasing load due to a constant hatch rate, the server doesn't have time to steady-state and produce stable statistics.
I would love to be able to specify a locust population that grows in steps rather than continuously.
I think you would need two more parameters, in addition to hatch rate and number of locusts: population_step and time_between_steps.
It would make the population grow in a staircase pattern (hold at a level, then jump by population_step after each time_between_steps), instead of a continuous linear ramp.
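A minimal sketch of the step schedule those two proposed parameters would produce (population_step and time_between_steps come from the comment above; the function itself is an illustration, not Locust API):

```python
def step_schedule(total_users: int, population_step: int, time_between_steps: float):
    """Yield (time_offset_seconds, target_population) pairs for a staircase ramp."""
    t = 0.0
    population = 0
    while population < total_users:
        population = min(population + population_step, total_users)
        yield (t, population)
        t += time_between_steps

# 100 users, 25 at a time, holding 5 minutes at each level:
print(list(step_schedule(100, 25, 300)))
# -> [(0.0, 25), (300.0, 50), (600.0, 75), (900.0, 100)]
```

Each hold period gives the server time to reach a steady state before the next jump, which is the point of the feature request.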