amphp / http-server

An advanced async HTTP server library for PHP, perfect for real-time apps and APIs with high concurrency demands.
https://amphp.org/http-server
MIT License

Benchmarking #362

Closed: razshare closed this issue 10 months ago

razshare commented 11 months ago

Hey, I'm trying to benchmark an application.

My setup is as follows.

This is one of the tests I'm running (I'm censoring the IP as x.x.x.x):

root@ubuntu-s-1vcpu-512mb-10gb-fra1-01:~# ab -t 10 -n 10000 -c 100 http://x.x.x.x/assets/index.js
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking x.x.x.x (be patient)
Completed 1000 requests
Finished 1770 requests

Server Software:        
Server Hostname:        x.x.x.x
Server Port:            80

Document Path:          /assets/index.js
Document Length:        0 bytes

Concurrency Level:      100
Time taken for tests:   10.716 seconds
Complete requests:      1770
Failed requests:        1236
   (Connect: 0, Receive: 0, Length: 18, Exceptions: 1218)
Total transferred:      1808514 bytes
HTML transferred:       1805826 bytes
Requests per second:    165.17 [#/sec] (mean)
Time per request:       605.442 [ms] (mean)
Time per request:       6.054 [ms] (mean, across all concurrent requests)
Transfer rate:          164.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.8      0       5
Processing:    21  244 379.2    233    6950
Waiting:        0   10 102.9      0    1362
Total:         25  245 379.2    234    6951
WARNING: The median and mean for the initial connection time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%    234
  66%    257
  75%    270
  80%    285
  90%    359
  95%    430
  98%    995
  99%   1710
 100%   6951 (longest request)

This thing costs only 4€ a month, so all things considered it looks like a good result (I think?).

CPU does go up to 100%, which is great, that's the point of async.

However, this warning [screenshot] seems to be causing a lot of requests to be declined.

It's a fair warning, but I was wondering if there's a way to disable it for benchmarking purposes?

Otherwise I would have to rent more machines.

Or do you maybe have some suggestions for better benchmarking techniques for Amp?

kelunik commented 11 months ago

This thing costs only 4€ a month, so all things considered it looks like a good result (I think?).

1236 failed requests / 1770 complete requests doesn't look like a good result.

You'll also want to change $connectionLimitPerIp and $concurrencyLimit in SocketHttpServer::createForDirectAccess.
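A minimal sketch of what that tuning might look like, assuming amphp/http-server v3 and an existing PSR-3 `$logger`. The parameter names follow the `SocketHttpServer::createForDirectAccess` signature mentioned above; the values are illustrative benchmarking overrides, not recommendations, and defaults may differ between releases.

```php
<?php
// Sketch only: assumes amphp/http-server v3 is installed via Composer
// and $logger is a PSR-3 logger (e.g. Monolog). The numeric values are
// arbitrary overrides for load testing, not tuned recommendations.

use Amp\Http\Server\SocketHttpServer;

$server = SocketHttpServer::createForDirectAccess(
    logger: $logger,
    // The per-IP connection limit is low by default, so a single
    // benchmarking client IP (like ab with -c 100) hits it quickly.
    connectionLimitPerIp: 1000,
    // Maximum number of requests processed concurrently overall.
    concurrencyLimit: 10000,
);

$server->expose('0.0.0.0:80');
```

Raising `connectionLimitPerIp` matters most here because all of ab's connections come from one client IP, so the per-IP cap is the first limit a single-machine benchmark runs into.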

razshare commented 10 months ago

Hey, I'm closing this issue because I think I'm approaching the whole thing wrong, I need to do some rethinking and probably read the docs better. Thanks a lot for the advice!

razshare commented 10 months ago

Hey @kelunik, some updates on this even though I closed the issue. It turns out there were a few issues with apache2 interfering with the Amp web server on port 80. apache2 just kept restarting (and apparently it takes over port 80 regardless of whether it's already occupied?), even though I had stopped the service altogether several times.

I ended up removing apache2 from the system completely, and that fixed the failed requests.
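A port conflict like this can also be diagnosed without uninstalling anything; a quick sketch with standard Linux tooling (assuming a systemd-managed apache2, as on Ubuntu):

```shell
# Show which process is currently listening on TCP port 80
# (-l listening, -t TCP, -n numeric, -p show process; -p needs root).
ss -ltnp 'sport = :80'

# Stop apache2 now AND prevent systemd from starting it again on boot,
# which a plain "systemctl stop apache2" does not do.
systemctl disable --now apache2
```

Disabling the unit rather than just stopping it is what prevents the "kept restarting" behaviour described above.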

Another issue was that my clients were actually not powerful enough to handle the benchmark...

In the end, I rented 2 more machines (both with 4 CPUs and 8 GB of RAM), used those 2 as clients, and ran the tests against the 512 MB RAM / 1 CPU amphp server.

The results look pretty good, I think:

root@ubuntu-s-4vcpu-8gb-fra1-01:~# ab -n 1000 -c 100 http://a.b.c.d/api/test
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking a.b.c.d (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        
Server Hostname:        a.b.c.d
Server Port:            80

Document Path:          /api/test
Document Length:        14 bytes

Concurrency Level:      100
Time taken for tests:   0.481 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      135000 bytes
HTML transferred:       14000 bytes
Requests per second:    2077.38 [#/sec] (mean)
Time per request:       48.138 [ms] (mean)
Time per request:       0.481 [ms] (mean, across all concurrent requests)
Transfer rate:          273.87 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.8      0       4
Processing:     5   46   8.3     47      56
Waiting:        1   45   8.3     47      56
Total:          7   46   7.7     47      57
WARNING: The median and mean for the initial connection time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%     47
  66%     50
  75%     51
  80%     52
  90%     54
  95%     54
  98%     56
  99%     56
 100%     57 (longest request)

This is a plain-text response.

And then reading the file from before:

root@ubuntu-s-4vcpu-8gb-fra1-01:~# ab -n 1000 -c 2 http://a.b.c.d/assets/index.js
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking a.b.c.d (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        
Server Hostname:        a.b.c.d
Server Port:            80

Document Path:          /assets/index.js
Document Length:        93497 bytes

Concurrency Level:      2
Time taken for tests:   8.042 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      93648000 bytes
HTML transferred:       93497000 bytes
Requests per second:    124.34 [#/sec] (mean)
Time per request:       16.085 [ms] (mean)
Time per request:       8.042 [ms] (mean, across all concurrent requests)
Transfer rate:          11371.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      1      11
Processing:     7   15   3.0     15      40
Waiting:        2    6   2.4      6      33
Total:          8   16   3.1     15      42

Percentage of the requests served within a certain time (ms)
  50%     15
  66%     16
  75%     17
  80%     17
  90%     19
  95%     22
  98%     25
  99%     28
 100%     42 (longest request)

This is without libevent, unfortunately; the 1 CPU machine just can't handle building from the Dockerfile and running the container, it just stops with a SIGKILL.

The properties you suggested changing (connectionLimitPerIp, concurrencyLimit) do seem to have improved performance a lot (and I've also made some changes to my code).

Here's another benchmark against a 4 CPU / 8 GB server:

root@ubuntu-s-4vcpu-8gb-fra1-01:~# ab -n 20000 -c 1000 http://a.b.c.d/api/test
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking a.b.c.d (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests

Server Software:        
Server Hostname:        a.b.c.d
Server Port:            80

Document Path:          /api/test
Document Length:        14 bytes

Concurrency Level:      1000
Time taken for tests:   5.292 seconds
Complete requests:      20000
Failed requests:        0
Total transferred:      2700000 bytes
HTML transferred:       280000 bytes
Requests per second:    3779.03 [#/sec] (mean)
Time per request:       264.618 [ms] (mean)
Time per request:       0.265 [ms] (mean, across all concurrent requests)
Transfer rate:          498.21 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  150 359.1      0    3019
Processing:    11   58  97.1     33    1693
Waiting:        1   58  97.1     33    1693
Total:         25  208 413.1     34    3056

Percentage of the requests served within a certain time (ms)
  50%     34
  66%     37
  75%     41
  80%     58
  90%   1050
  95%   1255
  98%   1472
  99%   1484
 100%   3056 (longest request)

I can't say for sure, but I think even in this case the client is still the bottleneck, mainly because ApacheBench limits concurrent requests to a maximum of 1000 and won't let me set anything above that. However, if I run the same benchmark from 2 different clients at once, the server doesn't seem to slow down; I get the same results on both clients.

kelunik commented 10 months ago

I'd strongly recommend looking at wrk or siege as the benchmark client; ab is very old and basic.
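For comparison with the ab runs above, a roughly equivalent wrk invocation might look like this (a sketch; the host and path are the placeholders used earlier in the thread):

```shell
# 4 client threads, 100 open connections, 10-second run.
# -t/-c/-d are wrk's thread/connection/duration flags;
# --latency additionally prints a latency distribution.
wrk -t4 -c100 -d10s --latency http://a.b.c.d/api/test
```

Unlike ab, wrk is multithreaded and keeps connections saturated, so a single modest client machine is much less likely to become the bottleneck.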

razshare commented 10 months ago

Thank you, I will!