Open NaerChang opened 8 years ago
Same issue here. Using something like this:
```lua
init = function(args)
  local r = {}
  r[1] = wrk.format(nil, "/?foo")
  r[2] = wrk.format(nil, "/?bar")
  r[3] = wrk.format(nil, "/?baz")
  -- concatenate the three requests into one buffer so they are
  -- sent as a single pipelined batch (req must be a global here,
  -- since request() reads it)
  req = table.concat(r)
end

request = function()
  return req
end
```
And the result:

```
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   366.38ms  373.46ms   1.04s    0.00%
    Req/Sec     3.00      1.44     10.00    93.55%
  Latency Distribution
     50%    0.00us
     75%    0.00us
     90%    0.00us
     99%    0.00us
  31 requests in 10.06s, 19.03KB read
Requests/sec:      3.08
Transfer/sec:      1.89KB
```
All the data is correct except latency. It seems to be broken when pipelining is used...
First of all, I am using the dockerized version of wrk (https://github.com/William-Yeh/docker-wrk). I run the perf test using the pipelining method via a Lua script, but the latency data is all 0 and the stdev becomes -nan%.
I am trying to understand under what circumstances this would happen. I know the latency can't be exactly 0 (a rounding issue?). All the REST API calls go between Docker containers on the same CoreOS host, which is close to plain inter-process communication. When I use curl instead, I see latencies of 1-3ms.