Closed dmyers closed 9 years ago
I have been using it for work for over a year now in parallel mode, usually with 50 to 300 parallel requests. Running that many concurrent requests obviously stresses your API, and the machine's capacity for handling sockets, in a much more significant way.
There is a short paragraph in the README about how to tweak your Unix/Mac machine before running the benchmarks, so that socket capacity doesn't become the limiting factor. The equivalent instructions for Windows are still a TODO. Check out this blog post of mine for more info: http://tech.opentable.co.uk/blog/2014/02/28/api-benchmark/
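For reference, on most Unix/Mac systems the relevant knob is the per-process open-file limit, since each socket consumes a file descriptor. A minimal sketch (the value 4096 is illustrative, not a recommendation from the README):

```shell
# Show the current soft limit on open file descriptors (sockets count against it)
ulimit -n

# Raise the soft limit for this shell session; it cannot exceed the
# hard limit shown by `ulimit -Hn`, so ignore the error if it is lower
ulimit -n 4096 2>/dev/null || true

# Confirm the limit now in effect
ulimit -n
```

With 300 parallel requests plus the process's own descriptors, a default limit of 256 (common on older macOS shells) is easily exhausted, which shows up as connection errors rather than slow responses.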
That said, higher response times are expected in parallel mode. This is why, when running in parallel, the report includes a field called "Mean across all the parallel requests": it is roughly mean / n_parallel_requests, and it should be similar to the mean you see in sequence mode (where n_parallel_requests = 1).
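As a minimal sketch of that arithmetic (all numbers here are assumed for illustration, not produced by the tool): with N requests in flight at once, each request's wall-clock time includes waiting alongside the other N-1, so the raw mean scales roughly with N, and dividing it back by N recovers the per-request figure.

```python
def mean(xs):
    """Plain arithmetic mean."""
    return sum(xs) / len(xs)

n_parallel = 50          # assumed number of parallel requests
sequential_time = 1.5    # assumed seconds per request in sequence mode

# In parallel mode, each in-flight request observes roughly
# N * sequential_time of wall-clock time.
raw_parallel_times = [sequential_time * n_parallel] * n_parallel

raw_mean = mean(raw_parallel_times)           # the inflated figure in the graph
mean_across_parallel = raw_mean / n_parallel  # back to the sequential-mode mean

print(raw_mean)              # 75.0
print(mean_across_parallel)  # 1.5
```

This is why a 35-second mean with many parallel requests can correspond to only a second or two of actual per-request latency.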
Is the request time for parallel requests supposed to be this high? I was getting a graph that showed roughly 35 seconds, but when I switch back to sequential mode it reports about 1-2 seconds for each request. Is it perhaps summing the times across the parallel requests, or applying some other logic?