Open vandergrafgenerator opened 2 years ago
Nginx consumes some CPU, so fewer CPU resources are left for Workerman and QPS drops. For most services, however, the remaining performance is more than enough, and the overhead of nginx is negligible compared with the cost of the business logic.
@walkor I believe you are correct. I just verified that, according to htop, Workerman CPU usage is about ~30% of each CPU core, but when running Nginx as a reverse proxy to Workerman, CPU usage jumps to about ~90% of each CPU core.
I wanted to use Nginx as a reverse proxy because of attacks like slow-client attacks and other kinds of DDoS, and also to leave static file handling to Nginx.
Do you have any suggestion of an alternative to Nginx for better performance?
Try with proxy_buffering off;
location / {
    proxy_pass http://app;
    proxy_buffering off;
    proxy_ignore_client_abort on;
}
Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client. If the clients are assumed to be fast, buffering can be turned off in order to get the data to the client as soon as possible. With buffers, the Nginx proxy will temporarily store the backend’s response and then feed this data to the client. If the client is slow, this allows the Nginx server to close the connection to the backend sooner. It can then handle distributing the data to the client at whatever pace is possible.
Also nginx can terminate HTTPS traffic from clients, relieving your upstream web and application servers of the computational load of SSL/TLS.
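As an illustration, a TLS-terminating server block could look like this (the certificate paths and backend address are placeholders, not taken from this thread):

```nginx
server {
    listen 443 ssl;

    # Placeholder certificate paths; point these at your real cert and key.
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # nginx handles the TLS handshakes; the backend only sees plain HTTP.
        proxy_pass http://127.0.0.1:8080;
    }
}
```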
And you can join different apps in the same domain.
server {
    listen ...;
    ...
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
    location /blog {
        rewrite ^/blog(.*) /$1 break;
        proxy_pass http://127.0.0.1:8181;
    }
    location /mail {
        rewrite ^/mail(.*) /$1 break;
        proxy_pass http://127.0.0.1:8282;
    }
    ...
}
Nginx is really fast, but it's normal to pay a penalty for the extra work.
The only other proxy server that I could recommend is Apache Traffic Server: https://trafficserver.apache.org/ But I don't know of any benchmark comparing the two.
I found a good benchmark and info about proxy servers: https://static.usenix.org/events/lisa11/tech/slides/hedstrom.pdf
Apparently, Apache Traffic Server is faster, although not necessarily in the same kind of setup. And remember that its configuration is more difficult.
Try adding keepalive to the upstream, to gain more performance and less waiting. We have to enable this setting explicitly in Nginx so that it keeps its connections to the upstream alive.
upstream app {
    # The keepalive parameter sets the maximum number of idle keepalive connections
    # to upstream servers that are preserved in the cache of each worker process. When
    # this number is exceeded, the least recently used connections are closed.
    keepalive 100;
    server unix:/dev/shm/app.sock max_fails=0;
}
Since you are using unix sockets, the benefit will be smaller.
Read this article: https://ma.ttias.be/enable-keepalive-connections-in-nginx-upstream-proxy-configurations/
About your nginx config, it's better to use:

multi_accept off;          # the default
use epoll;                 # delete it; don't force it, nginx picks the most efficient method by default
worker_cpu_affinity auto;
timer_resolution 1s;
error_log stderr error;    # errors will still be visible, without log-file I/O that you don't currently watch
listen 0.0.0.0:8080 default_server reuseport;    # add reuseport
sendfile off;              # the default
tcp_nopush off;            # the default
server_tokens off;
msie_padding off;
keepalive_disable none;    # the default is msie6
worker_connections 16384;  # perhaps double it to 32768
keepalive_requests 10000000;    # too high: it will use a lot of memory and CPU; the default of 1000 is more than acceptable

keepalive_requests sets the maximum number of requests that can be served through one keep-alive connection. After the maximum number of requests is made, the connection is closed. Closing connections periodically is necessary to free per-connection memory allocations; therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.

worker_connections sets the maximum number of simultaneous connections that can be opened by a worker process. It should be kept in mind that this number includes all connections (e.g. connections with proxied servers, among others), not only connections with clients. Another consideration is that the actual number of simultaneous connections cannot exceed the current limit on the maximum number of open files, which can be changed by worker_rlimit_nofile.
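For instance, the two limits can be paired so that the file-descriptor limit comfortably covers both client and upstream connections (the numbers here are illustrative, not a recommendation):

```nginx
# Every connection needs a file descriptor, and each proxied request uses
# two connections (client side plus upstream side), so keep the fd limit
# above worker_connections.
worker_rlimit_nofile 65536;

events {
    worker_connections 32768;
}
```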
You are testing a hello world! If you send larger output, it will be better to have gzip or brotli compression delegated to nginx.
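A minimal sketch of delegating gzip compression to nginx (the threshold, level, and MIME types are illustrative, not tuned values):

```nginx
gzip on;
gzip_comp_level 4;      # middle ground between CPU cost and compression ratio
gzip_min_length 1024;   # don't bother compressing tiny responses
gzip_types text/plain text/css application/json application/javascript;
```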
@joanhey Thanks for all the information. For now I only tested proxy_buffering off; it increased performance by only ~2%. I believe Nginx may be redirecting to SSL as you said? The CPU usage is too high. I am unfamiliar with Nginx; I will try all your suggestions later and update the results I get here.
@joanhey I did all you recommended, except that I did not add proxy_buffering off; because it seemed to me it could facilitate slow-client attacks? Is that assumption correct?
It got a huge performance improvement: I am getting 40k requests per second, which is close to the 48k I get with pure Workerman. There was also a huge improvement in CPU usage: it went down from 90% to 36%, which is close to the 30% I get from pure Workerman. I am benchmarking with 100 concurrent users.
With so much idle CPU, I suppose there may be room to improve even more?
This is how the nginx.conf ended up:
user root;
worker_cpu_affinity auto;
worker_processes auto;
timer_resolution 1s;

events {
    worker_connections 32768;
    multi_accept off;
}

http {
    access_log off;
    sendfile off;
    tcp_nopush off;
    tcp_nodelay on;
    etag off;
    server_tokens off;
    msie_padding off;
    keepalive_disable none;
    keepalive_requests 1000;

    upstream app {
        keepalive 100;
        server unix:/dev/shm/app.sock max_fails=0;
    }

    server {
        listen 0.0.0.0:8080 default_server reuseport;
        location / {
            proxy_pass http://app;
            proxy_ignore_client_abort on;
        }
    }
}
I think maybe the key point is keepalive?
The keepalive in the upstream is very important for a TCP connection, but insignificant when using Unix sockets. The big problem was the high keepalive_requests number.
@vandergrafgenerator
@joanhey I did all you recommended except that i did not add proxy_buffering off; because it seemed to me it could facilitate slow client attack? Is that assumption correct?
It's correct
Please don't use user root; use user www-data; instead.
Which program do you use for the benchmark? ab, wrk, or ...?
I used Apache JMeter: 100 users (threads), zero ramp-up time, infinite loop.
@walkor Do you suggest running Nginx as a reverse proxy on a different machine to do SSL offloading, while Workerman does its job as an HTTP server?
Do you think performance would be the same then, with iptables restricting access to the Workerman HTTP server to the nginx reverse proxy's IP address only?
Please share your thoughts from a performance perspective. I hope we can see the full capacity of Workerman when Nginx is assigned CPU and memory resources on a different machine, and we can scale Workerman nodes horizontally by adding more of them. Or do you think SSL should be enabled on Workerman directly by linking the certificates to it? If so, horizontal scaling will become difficult.
thank you
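The iptables restriction mentioned above could be sketched like this (the Workerman port 8080 and the proxy address 10.0.0.5 are placeholders; adjust both to your setup):

```shell
# Allow the Workerman HTTP port only from the nginx proxy's IP,
# and drop everything else hitting that port.
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
```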
Yes, if SSL is necessary, using nginx as a reverse proxy for SSL is a very good idea in my opinion.
In terms of performance, running nginx on a different machine does not affect the performance of workerman. However, the overall throughput does not depend on workerman alone; the actual throughput needs to be tested.
Fully agree; we should test all scenarios and document this sometime around Workerman v5.
Sorry, I always forget to add that to proxy connections.
Add to the location:
proxy_http_version 1.1;
proxy_set_header Connection "";
By default, the nginx proxy talks HTTP/1.0 to the upstream (so no real keepalive). And clear the Connection header so that close is not passed along, to keep the upstream connection open (be careful with some apps).
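Putting this together with the earlier snippet, the location block would then look something like this (a sketch based on the config posted above):

```nginx
location / {
    proxy_pass http://app;
    proxy_http_version 1.1;          # HTTP/1.1 is required for upstream keepalive
    proxy_set_header Connection "";  # don't forward the client's "Connection: close"
    proxy_ignore_client_abort on;
}
```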
@vandergrafgenerator Could you benchmark these changes and let us know? Thank you.
With Workerman serving HTTP directly, I get 48k requests per second.
With Nginx as a reverse proxy for Workerman, I only get 31k requests per second.
Am I doing anything wrong?