swoole / swoole-src

🚀 Coroutine-based concurrency library for PHP
https://www.swoole.com
Apache License 2.0

Nginx + Swoole performance too slow #2127

Closed jobs-git closed 5 years ago

jobs-git commented 5 years ago

Please answer these questions before submitting your issue. Thanks!

  1. What did you do? If possible, provide a simple script for reproducing the error. Run the swoole_http_server, then test its performance using wrk with the following command: wrk http://127.0.0.1:9501 -c 1000 -t 48 -d 5

The swoole server is configured as follows

<?php
$http = new swoole_http_server("127.0.0.1", 9501, SWOOLE_BASE);
$http->on('request', function ($request, swoole_http_response $response) {
    $response->header('Last-Modified', 'Thu, 18 Jun 2015 10:24:27 GMT');
    $response->header('E-Tag', '55829c5b-17');
    $response->header('Accept-Ranges', 'bytes');
    $response->end("<h1>\nHello Swoole.\n</h1>");
});
$http->start();
  2. What did you expect to see? Performance close to that of static Nginx, as stated in the documentation. I was expecting 1M requests/second.

  3. What did you see instead? Only about 1/20 of that: I expected something close to static Nginx at 1M+ requests/second, but instead got only 40k+ requests/second.

  4. What version of Swoole are you using (show your php --ri swoole)? Swoole 4.2.7

  5. What is your machine environment (including kernel, PHP, and GCC versions)? CentOS 7 1708, PHP 7.1, 2x Xeon E5-2670, GCC 4.8.5 20150623, kernel 3.10.0-862.2.3.el7.x86_64

twose commented 5 years ago

You are only using one CPU core; please see https://github.com/swoole/swoole-src#-benchmark and https://github.com/swoole/swoole-src/blob/master/benchmark/benchmark.php
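For illustration, a minimal sketch of what that suggestion looks like in code (my sketch, not from the thread; the worker_num value is an assumption, with one worker per core as a common starting point):

```php
<?php
// Sketch only: the single-worker server in the report uses one CPU core.
// Setting worker_num spawns multiple worker processes so the server can
// use every core; swoole_cpu_num() returns the number of cores.
$http = new swoole_http_server('127.0.0.1', 9501, SWOOLE_BASE);
$http->set([
    'worker_num' => swoole_cpu_num(),  // one worker per core (assumption)
]);
$http->on('request', function ($request, $response) {
    $response->end("<h1>\nHello Swoole.\n</h1>");
});
$http->start();
```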

twose commented 5 years ago

I just gave hello world a try:

swoole

$ wrk http://127.0.0.1:9501 -c 1000 -t 48 -d 5
Running 5s test @ http://127.0.0.1:9501
  48 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.88ms   17.60ms 131.59ms   87.80%
    Req/Sec     1.04k   315.00     2.63k    75.58%
  252839 requests in 5.10s, 43.64MB read
Requests/sec:  49538.91
Transfer/sec:      8.55MB

nginx

$ wrk http://127.0.0.1 -c 1000 -t 48 -d 5
Running 5s test @ http://127.0.0.1
  48 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.91ms   18.63ms 171.81ms   77.26%
    Req/Sec   758.21    465.92     4.37k    80.32%
  184593 requests in 5.10s, 43.30MB read
Requests/sec:  36167.59
Transfer/sec:      8.48MB

But comparing against an Nginx static server doesn't make much sense; the Swoole HTTP server is an application server.

jobs-git commented 5 years ago

You were right, I was just using 1 cpu core!

I looked into the benchmark you linked and it utilizes worker_num, so I set it to 200 in my application. And guess what?!

Dynamic PHP on Swoole is even twice as fast as static Nginx!

Static Nginx (just HTML, not PHP)

[user@localhost wrk]$ ./wrk http://127.0.0.1 -s pipeline.lua -c 10000 -t 40 -d 2
Running 2s test @ http://127.0.0.1
  40 threads and 10000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   143.25ms  123.37ms 997.95ms   80.34%
    Req/Sec    25.86k     9.36k  107.83k    81.42%
  2176256 requests in 2.10s, 496.03MB read
Requests/sec: 1036951.83
Transfer/sec:    236.35MB

*There is no improvement beyond 10k connections and 40 threads; anything higher reduces the requests/second.

Dynamic php Swoole 4.2.7

[user@localhost wrk]$ ./wrk http://127.0.0.1:9501 -c 30000 -t 3000 -d 5
Running 5s test @ http://127.0.0.1:9501
  3000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    55.25ms   41.83ms   1.03s    79.98%
    Req/Sec   178.75    248.95    11.91k    95.90%
  13330458 requests in 6.23s, 3.34GB read
Requests/sec: 2138185.56
Transfer/sec:    548.53MB

Summary:

Nginx static: 1036951.83 rps
Swoole dynamic php: 2138185.56 rps

Nginx here is using pipelining; without it, Nginx can do just ~500k req/s, while Swoole 4.2.7 serves 2.1 million req/s without pipelining.

Actually, it seems Swoole can even do ~3 million requests per second on my workstation, but wrk itself is taxed so heavily that it lags the system.

Arising Issue:

However, I have another issue: whenever I put Nginx in front of Swoole's fast configuration as a reverse proxy, I get only 26k requests/second. What is wrong with my proxy?

Here is the test method:

./wrk http://127.0.0.1 -c 30000 -t 3000 -d 5

Here is my swoole file:

<?php
$http = new swoole_http_server("127.0.0.1", 9501, SWOOLE_BASE);
$http->set([
    'worker_num' => 200
]);
$http->on('request', function ($request, swoole_http_response $response) {
    $response->header('Last-Modified', 'Thu, 18 Jun 2015 10:24:27 GMT');
    $response->header('E-Tag', '55829c5b-17');
    $response->header('Accept-Ranges', 'bytes');
    $response->end("<h1>\nHello Swoole".rand(1000, 9999).".\n</h1>");
});
$http->start();

Nginx proxy

worker_processes 32;
worker_rlimit_nofile 524288;
user my_user;
#daemon off;

events {
    use epoll;
    worker_connections 50000;
    multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    open_file_cache max=50000;
    keepalive_requests 50000;

    tcp_nopush on;
    keepalive_timeout 30;
    gzip off;
    gzip_min_length 1024;
    access_log /dev/null;
    error_log /dev/null;

    server {
        listen 80;
        root "/var/www";
        server_name local.swoole.com;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "keep-alive";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:9501;
        }
    }
}

I suspect that Nginx could be connecting to just one Swoole worker even though there are 200 of them. If not, what might be wrong or missing in the configuration?
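One possibility worth checking (my assumption, not something verified in this thread): without an upstream block that enables keepalive, Nginx opens a new TCP connection to the backend for every request, which caps proxy throughput well below what the backend can serve. Per the Nginx docs, upstream keepalive needs roughly this shape (the upstream name "swoole" is illustrative):

```nginx
# Assumption: possible cause of the 26k req/s ceiling, not confirmed here.
# "keepalive" in the upstream block keeps idle connections to the backend;
# upstream keepalive also requires HTTP/1.1 and a cleared Connection header.
upstream swoole {
    server 127.0.0.1:9501;
    keepalive 64;    # idle upstream connections kept per worker
}

server {
    listen 80;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        proxy_pass http://swoole;
    }
}
```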

twose commented 5 years ago

Could you retry with the simplest conf? You can also change your server mode from SWOOLE_BASE to SWOOLE_PROCESS:

server {
    server_name local.swoole.com;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:9501;
    }
}

jobs-git commented 5 years ago

I tried switching to SWOOLE_PROCESS, but it resulted in much slower performance than SWOOLE_BASE.

SWOOLE_BASE

./wrk http://127.0.0.1:9501 -c 30000 -t 3000 -d 5
Running 5s test @ http://127.0.0.1:9501
  3000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    53.29ms   42.74ms   1.98s    79.73%
    Req/Sec   190.64    279.01    11.77k    94.99%
  14152825 requests in 5.98s, 3.55GB read
Requests/sec: 2365769.39
Transfer/sec:    606.91MB

SWOOLE_PROCESS

./wrk http://127.0.0.1:9501 -c 30000 -t 3000 -d 5
Running 5s test @ http://127.0.0.1:9501
  3000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.94ms   52.95ms 319.50ms   73.83%
    Req/Sec    84.51    132.02    11.92k    97.37%
  1906651 requests in 5.90s, 489.13MB read
Requests/sec: 323247.42
Transfer/sec:     82.93MB

NGINX + Swoole_process

./wrk http://127.0.0.1 -c 30000 -t 3000 -d 5
Running 5s test @ http://127.0.0.1
  3000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   224.63ms  388.94ms   2.00s    84.77%
    Req/Sec    31.61     55.52     2.40k    93.77%
  380426 requests in 5.69s, 95.42MB read
  Socket errors: connect 0, read 0, write 0, timeout 16450
Requests/sec:  66890.18
Transfer/sec:     16.78MB

Using the basic Nginx config + SWOOLE_PROCESS, I got double my previous Nginx+Swoole performance, but this is nowhere near the Swoole-only server performance of 2 million req/s.

ghost commented 5 years ago

That's a really nice issue :relaxed:

:+1:

jobs-git commented 5 years ago

Indeed, solutions to the slow proxy communication and the slower SWOOLE_PROCESS mode would be interesting. Just ping me if there is anything new here that needs to be tested.

ghost commented 5 years ago

Really, that depends on the backend you are building. There are dozens of options to tweak and a thousand ways to build a backend. It's about the whole infrastructure you are building, not one kind of benchmark like this one :relaxed:

re-thc commented 5 years ago

If Swoole can run the HTTP server on a unix socket and have Nginx proxy via that unix socket, it would eliminate a lot of the overhead. I think that's what's slowing things down.

jobs-git commented 5 years ago

That seems a sensible explanation; how do you think we can resolve this?

re-thc commented 5 years ago

@jobs-git Swoole needs to provide a Unix Socket option and not only TCP.

twose commented 5 years ago

$server = new Swoole\Http\Server('/tmp/swoole.sock', 0, SWOOLE_PROCESS, SWOOLE_UNIX_STREAM);
$server->on('request', function (Swoole\Http\Request $request, Swoole\Http\Response $response) {
    $response->end('Hello Swoole!');
});
$server->start();
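On the Nginx side, a proxy can point at that unix socket directly (a sketch; the socket path simply matches the snippet above):

```nginx
location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    # forward to the Swoole server listening on the unix socket
    proxy_pass http://unix:/tmp/swoole.sock:/;
}
```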

ghost commented 5 years ago

@jobs-git you are re-running your tests already? :relaxed:

jobs-git commented 5 years ago

I am not getting any response from swoole unix socket:

Here is the command i used:

echo GET / HTTP/1.1 | nc -U /home/user/swoole.sock -w 2000ms -v
echo GET / HTTP/1.0 | nc -U /home/user/swoole.sock -w 2000ms -v

This is what I get:

Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to /home/user/swoole.sock.
Ncat: 15 bytes sent, 0 bytes received in 0.00 seconds.

I was at least expecting a "Hello Swoole!" response. This is the Swoole configuration:

$server = new Swoole\Http\Server('/home/user/swoole.sock', 0, SWOOLE_PROCESS, SWOOLE_UNIX_STREAM);
$server->on('request', function (Swoole\Http\Request $request, Swoole\Http\Response $response) {
    $response->end('Hello Swoole!');
});
$server->start();
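A possible explanation (my assumption, not confirmed in the thread): the echo pipeline above does not send a complete HTTP request. There is no Host header and, more importantly, no blank line terminating the headers, so the server keeps waiting for more input instead of responding. A sketch using printf, which can send the explicit CRLFs and the final empty line:

```shell
# Sketch (assumption): a complete HTTP/1.1 request must end its headers
# with an empty line (CRLF CRLF); without it the server waits forever.
request='GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n'
sock=/home/user/swoole.sock

# Only attempt the connection if the socket actually exists.
if [ -S "$sock" ]; then
    printf "$request" | nc -U "$sock" -w 2
fi
```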

ghost commented 5 years ago

I have not used that before, but what about 'open_http_protocol' => true?

Feels like a mismatch between a request and onReceive?

Please have a look at the docs about this :wink:

ghost commented 5 years ago

Btw @jobs-git, I do not know what you want to achieve, but to me personally, just running benchmarks like this (like the referenced h2o) is a waste of time. No kidding.

jobs-git commented 5 years ago

Because Swoole's performance would be such a waste if the overall throughput were degraded by a factor of close to 10x; in the Nginx+Swoole case it is about 40x!

I will check whether a unix socket does the job better than a TCP reverse proxy. But I don't seem to get any response from Swoole. The docs say open_http_protocol is already true by default, and setting it to false does not seem to change anything.

jobs-git commented 5 years ago

After much testing, I found that it seems to be an Nginx issue. When proxying with a front-end server, we get optimal performance with the following:

SWOOLE_PROCESS + UNIX SOCKET

But this is about 10x slower than a raw SWOOLE_BASE application. Better raw application performance can be achieved with

SWOOLE_BASE

But when used with a unix socket, performance drops significantly. The huge difference when switching between SWOOLE_PROCESS and SWOOLE_BASE in different configurations is better suited to another issue, so I will close this one now.