bendini20 opened this issue 2 years ago
This is already planned for version 3 of NPM (#156). A release date is not known yet, as far as I know.
I was looking into opening this issue, glad someone did already. Not having load balancing options means installing a separate load balancer behind it, which kind of defeats the purpose of using Nginx Proxy Manager altogether. We need load balancing options for sure.
Hi, any news about load balancing feature?
waiting on release date for load balancing
Not enabling load balancing misses the mark and makes this proxy look immature compared to its competitors. Would love to see this as well.
This would make NPM perfect (ok, almost). But I sure am missing this feature.
To share my discovery and what I did instead: If you use pfSense on your network for a good solid firewall, you have a HAProxy module available for download. It's a reverse proxy with load balancer and it's fully integrated into pfSense, so you don't have to deal with the HAProxy configuration files, since the module uses the GUI of pfSense to integrate it properly.
Having a solid firewall VM is recommended, and pfSense is really a free enterprise-grade solution that does its job very well. No need to use nginx for load balancing unless you want to split the firewall from the load balancer.
The big difference here is that HAProxy cannot load-balance UDP! I have used HAProxy on pfSense for years, but always had the problem that I couldn't load balance UDP. Having this inside NginxProxyManager would be really cool.
Same problem as @ne0YT explained: a newer version of HAProxy is able to load-balance UDP, but not the version shipped with the pfSense firewall. So NPM would be a very good place to offer an alternative, especially when it comes to syslog UDP load balancing. DNS over UDP can be handled more smoothly with PowerDNS, for sure.
But is that real load balancing? It looks like HAProxy only offers NAT mode for UDP. So adding this already-available nginx feature to the GUI would just be very nice.
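For reference, stock nginx can already load-balance UDP through its stream module, it just isn't exposed in the NPM GUI. A minimal sketch for syslog on 514/udp (backend addresses are placeholders); note that this has to live at the stream level, not inside an http config:
stream {
    upstream syslog_backends {
        server 10.0.0.11:514;
        server 10.0.0.12:514;
    }
    server {
        listen 514 udp;
        proxy_pass syslog_backends;
        proxy_responses 0;   # syslog is fire-and-forget, don't wait for replies
    }
}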
As there is no health check in free nginx, it doesn't make too much sense to have load balancing.
Arrrgg, I forgot, yes, you are right :-(
As there is no health check in free nginx, it doesn't make too much sense to have load balancing.
This is completely incorrect. The free version of NGINX can tell when a forward host is offline and will not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (the free version) will only send traffic to the hosts that are online.
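For what it's worth, that passive behaviour is tuned with the max_fails and fail_timeout parameters on each server line. A minimal sketch with placeholder hostnames:
upstream backend {
    # after 3 failed attempts, mark the host down for 30s and stop sending it traffic
    server app1.example.com max_fails=3 fail_timeout=30s;
    server app2.example.com max_fails=3 fail_timeout=30s;
}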
I avoid putting unnecessary tasks on a router. It already has the job of routing/filtering millions of packets per second as the router, firewall and DNS. Tacking on a reverse proxy is too much IMO. In addition, running the reverse proxy on the host where your Docker containers live means you can route via hostname and not IP addresses.
This is not the right way of thinking. Yes, there is a finite limit to the PPS a router can handle; however, pfSense is purpose-built (for many things, in fact), and if you run it on hardware that is resilient enough, you should not have to worry about those limitations. If it is being used in a production enterprise environment, I would look into setting up multiple pfSense boxes in HA mode to lighten the load across the board.
As there is no health check in free nginx, it doesn't make too much sense to have load balancing.
This is completely incorrect. The free version of NGINX can tell when a forward host is offline and will not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (the free version) will only send traffic to the hosts that are online.
It has no active health checks, only passive detection based on failed requests.
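To be precise: the health_check directive that does active probing is an NGINX Plus feature, and the third-party check directive (from nginx_upstream_check_module) that shows up in some configs below is not compiled into stock nginx or NPM, which is most likely why it causes crashes. In Plus it would look roughly like this:
location / {
    proxy_pass http://backend;
    health_check interval=5 fails=3 passes=2;   # NGINX Plus only
}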
Hi guys, I know there is another way to achieve load balancing. First you need to create a custom directory under the data/nginx directory on the server where you deploy NPM, and then create a file named http.conf in that custom directory. The content of the file is an upstream block, e.g. upstream your_server { server ...; }. Then go back to your NPM admin UI, select the corresponding Proxy Host, open the Advanced tab and fill in a location block such as location /api { proxy_pass http://your_server; }. Like this, NPM achieves the effect of load balancing.
@alex14dark, can you please elaborate with some real example? It seems like you cracked it but I am missing something as it does not work for me.
FYI: I got this in my http.conf in the data/nginx/root directory:
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        check interval=5000 rise=2 fall=3 timeout=2000;
    }
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
What do I need to adjust on my proxy? How does NPM load my custom http.conf? Thanks.
@tomitrescak The custom configuration needs to be in the data/nginx/custom directory. If it doesn't exist, you need to create it, and then create an http.conf file inside it. Based on the information you provided, the file content should be:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    check interval=5000 rise=2 fall=3 timeout=2000;
}
Finally, go back to your NPM web interface, select the corresponding proxy host, and add this in the Advanced tab:
location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
@alex14dark adding this to my custom config leads to nginx crashing
this is in my custom/http.conf
http {
    upstream jobiq {
        server server1.com weight 100;
        server 127.0.0.1:3020;
        check interval=5000 rise=2 fall=3 timeout=2000;
    }
}
This is in the custom config of my reverse proxy
location / {
    proxy_pass http://jobiq;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
Moreover, there is a big fat warning on the custom config page:
Please note, that any add_header or set_header directives added here will not be used by nginx. You will have to add a custom location '/' and add the header in the custom config there.
[UPDATE]
I found that I had some issues in the server config; this version of custom/http.conf made my server start:
upstream jobiq {
    server server1.com weight=100;
    server 127.0.0.1:3020;
}
The issue is that I am now getting a "Too many redirects" error ;(
I'm adding this as a separate post as I managed to SOLVE this, thanks to @alex14dark and ChatGPT :)
In data/nginx/custom/http.conf you set up your upstream; avoid wrapping it in an http or server block (more info at https://nginxproxymanager.com/advanced-config/#custom-nginx-configurations):
upstream backend {
    server server1.com;
    server 127.0.0.1:3020 backup;   # or whatever is your config
}
In the "custom configuration" of your proxy add the following:
location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
INFO: This essentially replaces the generated reverse-proxy configuration of your endpoint (the default proxy_pass line) with only your custom config. Quite cool, but not obvious, I'd say.
MORE INFO for noobs like myself: if you are redirecting to another server such as server1.com in the configuration above, make sure you configure the endpoint only as HTTP in NPM, and do not request any HTTPS configuration. Maybe someone smarter can explain why; I do not know. All I know is that when the redirected endpoint was configured with HTTPS, I was getting a "too many redirects" error. Maybe this is a huge security hole, please let me know if that is so.
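A guess at what may be going on here (an assumption on my part, not confirmed in this thread): if the upstream servers themselves redirect plain HTTP to HTTPS, proxying to them over HTTP can loop forever. One way to keep HTTPS end-to-end would be to proxy to the upstream over TLS instead, roughly like this (hostnames and ports are placeholders):
upstream backend {
    server server1.com:443;
    server 127.0.0.1:3443 backup;
}
location / {
    proxy_pass https://backend;
    proxy_ssl_server_name on;   # send SNI so the upstream presents the right certificate
    proxy_set_header Host $host;
}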
@tomitrescak Glad to be able to help you!
@tomitrescak can the access list still work while using your custom configuration?
This is the solution for me at this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/
Thanks! This is awesome
This is the solution for me at this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/
This looks promising, however I am interested in UDP load balancing and I suppose this method won't work for this use case, right?
Is this still the best workaround? I have 5 servers that serve the same traffic and I would like to map them to a single URL using Nginx Proxy Manager.
Instead of waiting for NPM to support load balancing, I customized an image based on https://github.com/caprover/nginx-reverse-proxy, so you just connect your domain to this service.
Source code: https://github.com/hoanganht91/nginx-reverse-proxy Docker image: https://hub.docker.com/r/annh9x/nginx-reverse-proxy
This is an example compose file to test the load-balancing config:
version: '3.8'
services:
  test1:
    image: strm/helloworld-http
  test2:
    image: strm/helloworld-http
  test3:
    image: strm/helloworld-http
  load-balancer:
    image: annh9x/nginx-reverse-proxy
    environment:
      UPSTREAM_HTTP_ADDRESS: 'server test1 weight=1;server test2 weight=2;server test3 weight=3;'
      CLIENT_MAX_BODY_SIZE: 256M
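Judging by the variable name (an assumption; check the image's documentation for the exact behaviour), UPSTREAM_HTTP_ADDRESS is presumably templated into an nginx upstream block roughly like the one below, so test3 receives about three times as much traffic as test1:
upstream backend {
    server test1 weight=1;
    server test2 weight=2;
    server test3 weight=3;
}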
Hello everyone, it's working for me. Here is my config:
upstream apimeserverpool {
    ip_hash;
    server 192.168.1.56:8888 max_fails=3 fail_timeout=60s;
    server 192.168.1.96:8888 max_fails=3 fail_timeout=60s;
    keepalive 64;
}
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl on;
    ssl_stapling on;
    ssl_stapling_verify on;
    server_name apimeserver.com;
    ssl_certificate /data/custom_ssl/npm-3/fullchain.pem;
    ssl_certificate_key /data/custom_ssl/npm-3/privkey.pem;
    include conf.d/include/assets.conf;
    include conf.d/include/force-ssl.conf;
    access_log /data/logs/proxy-host-esb_access.log proxy;
    error_log /data/logs/proxy-host-esb_error.log warn;
    location / {
        proxy_pass http://apimeserverpool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
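A note for anyone adapting the config above: ip_hash pins each client IP to the same backend, which gives you session stickiness. If you don't need that, dropping it falls back to plain round robin, or you can use least_conn to favour the backend with the fewest active connections. A minimal variation (same placeholder addresses):
upstream apimeserverpool {
    least_conn;
    server 192.168.1.56:8888 max_fails=3 fail_timeout=60s;
    server 192.168.1.96:8888 max_fails=3 fail_timeout=60s;
    keepalive 64;
}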
This is the solution for me at this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/
Not an expert here... but it looks like the modification of the conf files in the container, as instructed there, should be mounted as external files at docker startup? Also, I think this will break the other non-load-balanced sites that I set up? Or at least I will manually have to add the header config to each one, because we comment out all the headers in the local proxy.conf?
Anyone get those steps to work recently? It seems like wherever I try to call "upstream" from, I get errors like this: warning nginx: [emerg] "upstream" directive is not allowed here in...
Hello! I have been using this Docker image for a while now. It is truly great. There could be two native integrations that should be relatively straightforward. NGINX supports load balancing across upstream servers natively via round robin, health checks, etc. Right now, if I want to do load balancing, I have to forward traffic to a bare NGINX docker. Is there a way to add native GUI support for load balancing within NginxProxyManager?
Simple nginx config for load balancing:
upstream backend {
    server hostname:port;
    server other-hostname:port;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}