geerlingguy closed this issue 9 years ago
I have basic balancing set up. Would like to look into two network interfaces for the balancer though; that would probably be helpful in isolating the guts of the Pi cluster from any network to which I connect it. But it could also make things a little less convenient, having to work through a 'gateway' Pi.
Using the balancer (authenticated page load, no Nginx caching):
$ ab -n 100 -c 10 -C "SESSxxxxxxxxxxx=xxxxxxxxxxxx" http://pidramble.com/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking pidramble.com (be patient).....done
Server Software: nginx/1.2.1
Server Hostname: pidramble.com
Server Port: 80
Document Path: /about
Document Length: 6388 bytes
Concurrency Level: 10
Time taken for tests: 11.634 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 754700 bytes
HTML transferred: 638800 bytes
Requests per second: 8.60 [#/sec] (mean)
Time per request: 1163.418 [ms] (mean)
Time per request: 116.342 [ms] (mean, across all concurrent requests)
Transfer rate: 63.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 1 0.2 1 2
Processing: 804 1094 144.9 1061 1944
Waiting: 803 1094 145.0 1061 1944
Total: 805 1095 145.0 1062 1946
Percentage of the requests served within a certain time (ms)
50% 1062
66% 1084
75% 1099
80% 1147
90% 1206
95% 1404
98% 1587
99% 1946
100% 1946 (longest request)
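As a sanity check, ab's summary metrics follow directly from the raw counts. A quick sketch of the arithmetic, using the numbers from the balancer run above:

```python
# Reproduce ApacheBench's derived metrics from the raw numbers above.
complete_requests = 100
concurrency = 10
time_taken = 11.634         # seconds
total_transferred = 754700  # bytes

# Requests per second (mean): total requests / wall-clock time
rps = complete_requests / time_taken
print(round(rps, 2))  # 8.6  (ab reports 8.60)

# Time per request (mean): how long each batch of concurrent requests took
time_per_request = concurrency * 1000 / rps  # ms
print(round(time_per_request, 1))  # 1163.4 ms

# Transfer rate received, in KB/s
transfer_rate = total_transferred / 1024 / time_taken
print(round(transfer_rate, 2))  # 63.35 KB/s
```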
Direct to one server (authenticated page load, no Nginx caching):
$ ab -n 100 -c 10 -C "SESSxxxxxxxxxxx=xxxxxxxxxxxx" http://10.0.1.61/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.0.1.61 (be patient).....done
Server Software: nginx/1.2.1
Server Hostname: 10.0.1.61
Server Port: 80
Document Path: /about
Document Length: 6388 bytes
Concurrency Level: 10
Time taken for tests: 26.630 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 741100 bytes
HTML transferred: 638800 bytes
Requests per second: 3.76 [#/sec] (mean)
Time per request: 2663.028 [ms] (mean)
Time per request: 266.303 [ms] (mean, across all concurrent requests)
Transfer rate: 27.18 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 11 32.0 1 113
Processing: 978 2557 406.6 2502 3326
Waiting: 977 2557 406.6 2501 3325
Total: 1078 2568 399.5 2503 3397
Percentage of the requests served within a certain time (ms)
50% 2503
66% 2549
75% 2620
80% 2918
90% 3129
95% 3298
98% 3385
99% 3397
100% 3397 (longest request)
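For reference, a minimal sketch of the kind of Nginx balancer config being benchmarked here. The upstream node names and the .62/.63 addresses are assumptions extrapolated from the 10.0.1.61 address above, not the project's actual playbook:

```nginx
# Hypothetical balancer sketch; only 10.0.1.61 is confirmed above,
# the other upstream IPs are assumed for illustration.
upstream pidramble {
    server 10.0.1.61;   # the node benchmarked directly above
    server 10.0.1.62;   # assumed second web node
    server 10.0.1.63;   # assumed third web node
}

server {
    listen 80;
    server_name pidramble.com;

    location / {
        proxy_pass http://pidramble;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With round-robin balancing across the backends, the ~2.4x improvement over a single server (8.60 vs. 3.76 req/s) is about what you'd expect once the balancer's own overhead is counted.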
To get the cookie key/value, I just logged in through the browser, viewed the cookies for the site (only one on this fresh D8 site, what a pleasant sight!), and grabbed the key/value.
Also see the ab benchmark for the balancer serving cached requests: https://github.com/geerlingguy/raspberry-pi-dramble/issues/13#issuecomment-75487406
Basically, it nearly maxes out the bandwidth available to the Pi, serving up ~1300 req/s at 10 MB/sec throughput. I'm guessing that we might need to use a gigabit interface to get any further gains over that benchmark.
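The back-of-the-envelope math supports that guess; a sketch of why ~10 MB/s sits right at a 10/100 interface's ceiling:

```python
# ~10 MB/s of HTTP payload expressed as line rate on a 100 Mbit/s port.
throughput_bytes = 10 * 1024 * 1024  # ~10 MB/sec observed
throughput_mbit = throughput_bytes * 8 / 1_000_000
print(round(throughput_mbit, 1))  # 83.9 Mbit/s

# A 100 Mbit/s link rarely sustains much more than ~90 Mbit/s of TCP
# payload once Ethernet/IP/TCP framing overhead is counted, so ~84 Mbit/s
# of HTML is close to saturation -- hence the interest in gigabit.
```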
With #18 complete (DB access is now much faster), I'm now getting:
$ ab -n 100 -c 10 -C "SESSxxxxxxxxxxx=xxxxxxxxxxxx" http://pidramble.com/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking pidramble.com (be patient).....done
Server Software: nginx/1.2.1
Server Hostname: pidramble.com
Server Port: 80
Document Path: /about
Document Length: 5775 bytes
Concurrency Level: 10
Time taken for tests: 10.645 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 694000 bytes
HTML transferred: 577500 bytes
Requests per second: 9.39 [#/sec] (mean)
Time per request: 1064.529 [ms] (mean)
Time per request: 106.453 [ms] (mean, across all concurrent requests)
Transfer rate: 63.67 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 1 0.2 1 2
Processing: 845 1021 63.8 1011 1239
Waiting: 844 1020 63.8 1011 1238
Total: 846 1021 63.8 1012 1241
Percentage of the requests served within a certain time (ms)
50% 1012
66% 1041
75% 1056
80% 1066
90% 1100
95% 1148
98% 1209
99% 1241
100% 1241 (longest request)
Single server:
$ ab -n 100 -c 10 -C "SESSxxxxxxxxxxx=xxxxxxxxxxxx" http://10.0.1.61/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.0.1.61 (be patient).....done
Server Software: nginx/1.2.1
Server Hostname: 10.0.1.61
Server Port: 80
Document Path: /about
Document Length: 5775 bytes
Concurrency Level: 10
Time taken for tests: 25.348 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 680300 bytes
HTML transferred: 577500 bytes
Requests per second: 3.95 [#/sec] (mean)
Time per request: 2534.829 [ms] (mean)
Time per request: 253.483 [ms] (mean, across all concurrent requests)
Transfer rate: 26.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 11 32.3 1 114
Processing: 959 2411 299.5 2455 3193
Waiting: 958 2411 299.5 2455 3193
Total: 1057 2423 288.7 2456 3307
Percentage of the requests served within a certain time (ms)
50% 2456
66% 2481
75% 2499
80% 2523
90% 2567
95% 2631
98% 2954
99% 3307
100% 3307 (longest request)
Not too shabby, almost hit 10 req/sec authenticated :)
Leaving open; still need to test having a separate Gigabit interface to see if I can eke out more throughput for Nginx cached pages.
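For context, the "Nginx cached pages" in these tests refer to responses proxy-cached at the balancer, so anonymous requests never touch PHP or the database. A hedged sketch of what such a cache config can look like; the zone name, path, TTL, and upstream name are all assumed, not the project's actual config:

```nginx
# Hypothetical proxy-cache sketch; zone name, cache path, TTLs, and the
# "backend" upstream are illustrative assumptions.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=drupal:10m max_size=100m;

server {
    listen 80;
    server_name pidramble.com;

    location / {
        proxy_cache drupal;
        proxy_cache_valid 200 1m;          # cache successful pages briefly
        proxy_cache_bypass $cookie_session; # skip cache for logged-in users
                                            # (cookie name is an assumption;
                                            # Drupal uses SESS<hash> names)
        proxy_pass http://backend;
    }
}
```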
Tested the Gigabit interface, and it seems to allow Nginx to eke out a tiny bit more performance... but maybe we're hitting Pi limitations with running Nginx and pushing the bits down the wire at the same time.
$ ab -n 10000 -c 100 http://pidramble.com/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking pidramble.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx/1.2.1
Server Hostname: pidramble.com
Server Port: 80
Document Path: /about
Document Length: 5775 bytes
Concurrency Level: 100
Time taken for tests: 6.638 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 68280000 bytes
HTML transferred: 57750000 bytes
Requests per second: 1506.55 [#/sec] (mean)
Time per request: 66.377 [ms] (mean)
Time per request: 0.664 [ms] (mean, across all concurrent requests)
Transfer rate: 10045.66 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 32 4.7 32 53
Processing: 8 34 4.2 34 58
Waiting: 8 34 4.2 34 58
Total: 14 66 3.9 66 91
Percentage of the requests served within a certain time (ms)
50% 66
66% 67
75% 67
80% 67
90% 68
95% 69
98% 71
99% 79
100% 91 (longest request)
I was hovering around 1500 req/s using the gigabit interface (note: I was trying to test raw throughput, so I was testing a cached Nginx page instead of a full Drupal backend request) and a slightly more tuned Nginx configuration (using sendfile, tcp_nodelay, and tcp_nopush, plus a better keepalive timeout). But that doesn't justify the extra clutter of having a Gigabit Ethernet dongle hanging off the top Pi, so I'm going to close this out and call it good!
Nginx, you have done your duty well!
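The tuning directives mentioned above live in the http block of nginx.conf; a minimal sketch, with illustrative values rather than the exact ones used here:

```nginx
http {
    sendfile on;          # serve cached files via kernel sendfile()
    tcp_nopush on;        # fill packets before sending (pairs with sendfile)
    tcp_nodelay on;       # don't delay small keepalive responses
    keepalive_timeout 15; # illustrative "better keepalive timeout"
    # ... rest of the http block ...
}
```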
Back on 10/100, I'm now hitting close to the same speeds as the GigE adapter thanks to the updated Nginx config, so though there still may be some ways to eke out more performance, I think we're pretty safe and scalable at this point, as far as cached requests go!
$ ab -n 10000 -c 100 http://pidramble.com/about
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking pidramble.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx/1.2.1
Server Hostname: pidramble.com
Server Port: 80
Document Path: /about
Document Length: 5751 bytes
Concurrency Level: 100
Time taken for tests: 6.710 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 67990000 bytes
HTML transferred: 57510000 bytes
Requests per second: 1490.37 [#/sec] (mean)
Time per request: 67.098 [ms] (mean)
Time per request: 0.671 [ms] (mean, across all concurrent requests)
Transfer rate: 9895.51 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 32 3.9 32 50
Processing: 13 34 4.0 34 77
Waiting: 12 34 4.0 33 76
Total: 18 67 2.8 66 88
Percentage of the requests served within a certain time (ms)
50% 66
66% 67
75% 68
80% 68
90% 69
95% 70
98% 71
99% 74
100% 88 (longest request)
There are a few considerations here (along with the determination as to what balancing software to use—see #1):
I'm definitely leaning towards Nginx, for simplicity's sake, and since it's used all over the place. I'm not really considering Apache, since I haven't been impressed with its balancing/proxying capabilities (they're getting better, but are still a bit resource-intensive).
Here are a couple links to help tease out using Nginx: