Closed centminmod closed 9 years ago
AFAIK, the Locust master distributes the hatch rate and locusts almost evenly across the slaves. You can think of adding/removing clients per server as a means of weighting: the more clients you run on a server, the more load it can generate.
In theory, each slave can generate as much load as it's told to, as long as there are enough server resources. In actual load testing you have to check your server's resource limits (e.g. network bandwidth, file descriptors, etc.), and from there you can calibrate how much load a slave server can generate. This is what I usually do, and if you can come up with good numbers for the limits, you can actually automate this calculation.
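As a concrete illustration of checking one such resource limit, here is a minimal sketch that reads the process's open file descriptor limit on a Unix-like host. This is not part of Locust; the 0.9 headroom factor is an arbitrary choice for this sketch, not a recommendation from the tool.

```python
import resource

# Soft/hard limits on open file descriptors for this process.
# Each simulated user holds at least one socket, so the soft limit
# roughly caps how many concurrent users one slave can sustain.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Leave headroom for the interpreter's own files, logging, etc.
# (the 0.9 safety factor is a guess made up for this sketch).
max_users_estimate = int(soft * 0.9)
print(soft, max_users_estimate)
```

Running similar checks for bandwidth and CPU on each slave gives you the numbers to base per-slave weights on.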
Hope this helps, awesome post BTW!
Cheers. It would be great to have options for load testing individual slaves, so we can see how each slave handles itself in a master/slave config.
I guess you can just run each slave standalone (non-master/slave) first, but if you had a dozen slaves that's a lot more manual work, as opposed to being able to detect and run individual slaves from the master Locust web GUI console, as well as assign weightings to each slave based on the aggregate of its individual test results. Maybe a feature improvement for the future :)
Oh, and when you refer to automating the calculation, do you mean within Locust itself?
Locust will spread the set number of locusts equally over the connected slave nodes. So 1000 locusts with 4 connected slave nodes means 250 locusts per slave. Locust treats each slave node equally, independent of its setup or underlying performance capabilities.
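The even split described above can be sketched in a few lines. `distribute` here is a hypothetical helper written for illustration, not Locust's internal function:

```python
def distribute(total_locusts, num_slaves):
    """Split total_locusts as evenly as possible across num_slaves,
    mirroring how the master hands out equal shares to each slave."""
    base, remainder = divmod(total_locusts, num_slaves)
    # The first `remainder` slaves each get one extra locust.
    return [base + (1 if i < remainder else 0) for i in range(num_slaves)]

print(distribute(1000, 4))  # → [250, 250, 250, 250]
```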
There's currently no options for distributing the workload differently.
As for automating the calculation, I prefer to keep it outside Locust to avoid adding code complexity to the tool.
Also remember you can run multiple clients on one server, so again you can treat that as a weight, unless I'm wrong ☺
OK, I've been playing with Locust more, and I think I now understand what you mean about the number of slaves determining the weighting.
I updated my blog article above to retest with master/slave in this config:
master: 3 slaves
slave: 5 slaves
So in total 8 slaves were used, and the weighting would be master 3/8, slave 5/8.
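The weighting arithmetic above amounts to dividing each server's slave count by the total. A tiny sketch (the server names and counts are taken from the 3/8 and 5/8 figures in this thread):

```python
# Per-server slave process counts from the test described above.
slaves_per_server = {"master": 3, "slave": 5}
total = sum(slaves_per_server.values())  # 8 slaves in total

# Each server's effective share of the generated load.
weights = {name: count / total for name, count in slaves_per_server.items()}
print(weights)  # → {'master': 0.375, 'slave': 0.625}
```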
I guess the weighting could be kept out of the Locust code. But how hard would it be to add, via the web GUI console, the ability to run each slave individually as a diagnostic troubleshooting step, to make sure all slaves behave closely together, or to pinpoint errors or issues?
Sorry, I'm not sure how features get implemented; in my case I just rely on Locust's hackability and extensibility. As an example, I can monitor each slave's report by listening to the slave report event.
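Locust exposes hooks for this via its events module; in the pre-1.0 API you would register a handler on the slave report event with something like `events.slave_report += on_slave_report` (check your Locust version's docs for the exact hook name and handler signature). A self-contained sketch of the same event-hook pattern, with a minimal `EventHook` class standing in for Locust's own:

```python
class EventHook:
    """Minimal stand-in for Locust's event hook: handlers are
    registered with += and invoked when the event fires."""

    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def fire(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

slave_report = EventHook()
reports = []

def on_slave_report(client_id, data):
    # Collect per-slave stats so an underperforming slave stands out.
    reports.append((client_id, data))

slave_report += on_slave_report

# In real Locust the master fires this when a slave sends its stats;
# the payload below is made up for this sketch.
slave_report.fire(client_id="slave-1", data={"num_requests": 1200})
print(reports)  # → [('slave-1', {'num_requests': 1200})]
```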
Did my first Locust load testing at http://wordpress7.centminmod.com/132/wordpress-super-cache-benchmark-locust-io-load-testing/ and love it :)
My question is: I am planning to add additional slaves in several geographic locations, but these will most likely be cloud-based servers from Linode, Vultr, DigitalOcean, and Amazon EC2.
However, these would all have different server specs, so how does Locust factor that in when deciding how much load each slave generates? Are there any weighting options?
Maybe it would be nice to have a diagnostic tool to quickly test what each slave is capable of serving beforehand, so we can work out weightings to assign to the slaves?
cheers
George