wg / wrk

Modern HTTP benchmarking tool

contrast wrk and ApacheBench on nginx #180

Closed xiaokai-wang closed 9 years ago

xiaokai-wang commented 9 years ago

The scenario is that "./nginx -s reload" is executed every 10s.
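For reference, a minimal sketch of that reload loop (the nginx path and working directory are assumptions for illustration, not taken from the report itself):

# reload the running nginx every 10 seconds while the benchmark runs
while true; do
    ./nginx -s reload
    sleep 10
done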

But the result confused me: wrk only reported 407253 requests in five minutes, and the QPS is 1357.06. Detailed information is below:

./wrk -t8 -c100 -d5m --timeout 3s "http://10.75.16.37:8888/proxy_test"
Running 5m test @ http://10.75.16.37:8888/proxy_test
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.31ms    0.86ms  201.61ms   79.91%
    Req/Sec     2.41k     2.52k     9.19k    85.98%
  407253 requests in 5.00m, 64.60MB read
  Socket errors: connect 0, read 583, write 135109, timeout 0
Requests/sec:   1357.06
Transfer/sec:    220.42KB

At the same time, I tried ApacheBench in the same scenario. Detailed information is below:

./ab -c 400 -n 10000000 -l "http://10.75.16.37:8888/proxy_test"
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.75.16.37 (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
Completed 4000000 requests
Completed 5000000 requests
Completed 6000000 requests
Completed 7000000 requests
Completed 8000000 requests
Completed 9000000 requests
Completed 10000000 requests
Finished 10000000 requests

Server Software:        nginx/1.8.0
Server Hostname:        10.75.16.37
Server Port:            8888

Document Path:          /proxy_test
Document Length:        Variable

Concurrency Level:      400
Time taken for tests:   344.787 seconds
Complete requests:      10000000
Failed requests:        0
Non-2xx responses:      986
Total transferred:      1620158791 bytes
HTML transferred:       70162720 bytes
Requests per second:    29003.38 [#/sec] (mean)
Time per request:       13.791 [ms] (mean)
Time per request:       0.034 [ms] (mean, across all concurrent requests)
Transfer rate:          4588.88 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   78.5      0    3004
Processing:     1   11   77.0      7    6005
Waiting:        1   11   77.0      7    6005
Total:          2   14  110.0      7    6021

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      9
  75%     11
  80%     12
  90%     18
  95%     25
  98%     35
  99%     42
 100%   6021 (longest request)

Comparing the results, they are very different.

Can you explain the reason, or point me in the right direction?

Thanks.

wg commented 9 years ago

Hi @xiaokai-wang, I see wrk is reporting a ton of errors where ab isn't. Suggest you investigate the cause of those errors as they're surely skewing the results.
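One way to see what those read/write errors correspond to on the wire is to watch for FIN/RST segments from the server around each reload. A rough sketch, assuming the setup above (port 8888; the interface name may differ):

# capture connection teardowns on the benchmark port around the reloads
tcpdump -i any -nn 'tcp port 8888 and (tcp[tcpflags] & (tcp-rst|tcp-fin) != 0)'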

methane commented 9 years ago

wrk uses keep-alive by default, whereas ab doesn't use it by default (ab -k enables keep-alive).

I think wrk doesn't handle the server closing keep-alive connections.
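For a closer apples-to-apples comparison, ab could be run with keep-alive enabled as well; a sketch reusing the command from above (the request count is illustrative):

# -k makes ab reuse connections the way wrk does by default
./ab -k -c 400 -n 10000000 -l "http://10.75.16.37:8888/proxy_test"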

wg commented 9 years ago

@methane reloading the nginx config shouldn't close any active connections, should it?

methane commented 9 years ago

@wg nginx -s reload closes keep-alive connections.

# 1req/sec from another session.
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55480        ESTABLISHED
tcp4       0      0  127.0.0.1.55480        127.0.0.1.8080         ESTABLISHED
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55480        ESTABLISHED
tcp4       0      0  127.0.0.1.55480        127.0.0.1.8080         ESTABLISHED
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55480        ESTABLISHED
tcp4       0      0  127.0.0.1.55480        127.0.0.1.8080         ESTABLISHED
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55480        ESTABLISHED
tcp4       0      0  127.0.0.1.55480        127.0.0.1.8080         ESTABLISHED
$ nginx -s reload
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55482        ESTABLISHED
tcp4       0      0  127.0.0.1.55482        127.0.0.1.8080         ESTABLISHED
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55482        ESTABLISHED
tcp4       0      0  127.0.0.1.55482        127.0.0.1.8080         ESTABLISHED
$ netstat -tn | grep 8080
tcp4       0      0  127.0.0.1.8080         127.0.0.1.55482        ESTABLISHED
tcp4       0      0  127.0.0.1.55482        127.0.0.1.8080         ESTABLISHED

wg commented 9 years ago

@methane interesting! I guess it's only idle connections: http://mailman.nginx.org/pipermail/nginx/2011-January/024822.html

methane commented 9 years ago

I think "idle" means "no request is being processed right now". When nginx receives the reload signal while processing a request, it closes the connection right after sending the reply.

wg commented 9 years ago

@methane yeah. In the latter case I'd expect nginx to send a proper Connection: close header. In any case, I pushed a commit that treats unexpected connection closure as an error and triggers a reconnect. That might be part of the problem here, but I'm not sure about all those write errors.
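One crude way to check that by hand is to hold a single keep-alive connection open, let a reload happen between two requests, and watch whether the first response carries Connection: close or the socket is simply dropped. A sketch only; the 15-second sleep is just meant to straddle one of the 10-second reloads, and nc is assumed to be available:

# send one request, wait across a reload, then try a second request on the same connection
{ printf 'GET /proxy_test HTTP/1.1\r\nHost: 10.75.16.37\r\n\r\n'
  sleep 15
  printf 'GET /proxy_test HTTP/1.1\r\nHost: 10.75.16.37\r\n\r\n'
  sleep 2
} | nc 10.75.16.37 8888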

xiaokai-wang commented 9 years ago

@methane, thanks for your testing; this was beyond what I expected. I always thought 'nginx -s reload' doesn't close the keep-alive connections with clients and only closes the keep-alive connections with backend servers; of course, I never tested it, so maybe that's the reason. I think I should check the nginx source code again.

@wg, thanks for your reply. Nice!
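As a starting point for that source reading, the decision to skip keep-alive when a worker is shutting down (which a reload triggers for the old workers) lives in the HTTP request finalization code; a sketch, with file and symbol names from memory and worth verifying against your nginx version:

# in an nginx source tree: where keep-alive is skipped once a worker is exiting
grep -rn "ngx_exiting" src/http/ngx_http_request.c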