NginxProxyManager / nginx-proxy-manager

Docker container for managing Nginx proxy hosts with a simple, powerful interface
https://nginxproxymanager.com
MIT License

Load Balancing #1963

Open bendini20 opened 2 years ago

bendini20 commented 2 years ago

Hello! I have been using this Docker container for a while now. It is truly great. This could be a native integration that should be relatively straightforward to add: NGINX supports load balancing across upstream servers natively via round robin, health checks, etc. Right now, if I want to do load balancing, I have to forward traffic to a bare NGINX container. Is there a way to add native GUI support for load balancing within NginxProxyManager?

Simple Nginx config for load balancing:

upstream <name> {
    server hostname:port;
    server url;
}

server {
    listen 80;
    server_name <url to be balanced>;

    location / {
        proxy_pass http://<name>;
    }
}

support-tt commented 2 years ago

This is already planned for version 3 of NPM, see #156. A release date is not known yet, as far as I know.

LinuxMeow commented 1 year ago

I was looking into opening this issue; glad someone did already. Not having load balancing options calls for installing a separate load balancer behind it, which kind of defeats the purpose of using Nginx Proxy Manager altogether. We need load balancing options for sure.

Faridalim commented 1 year ago

Hi, any news about the load balancing feature?

AlphaInfamous commented 1 year ago

Waiting on a release date for load balancing.

martin-braun commented 1 year ago

Not enabling the load balancer misses the mark and makes this proxy look immature compared to its competitors. Would love to see this as well.

pwfraley commented 1 year ago

This would make NPM perfect (ok, almost), but I sure am missing this feature.

martin-braun commented 1 year ago

To share my discovery and what I did instead: if you use pfSense on your network for a good, solid firewall, there is a HAProxy module available for download. It's a reverse proxy with a load balancer, and it's fully integrated into pfSense, so you don't have to deal with the HAProxy configuration files, since the module uses the pfSense GUI directly.

Having a solid firewall VM is recommended, and pfSense is really a free enterprise solution that does its job very well. No need to use nginx for load balancing unless you want to split the firewall from the load balancer.

ne0YT commented 1 year ago

To share my discovery and what I did instead: if you use pfSense on your network for a good, solid firewall, there is a HAProxy module available for download. It's a reverse proxy with a load balancer, and it's fully integrated into pfSense, so you don't have to deal with the HAProxy configuration files, since the module uses the pfSense GUI directly.

Having a solid firewall VM is recommended, and pfSense is really a free enterprise solution that does its job very well. No need to use nginx for load balancing unless you want to split the firewall from the load balancer.

The big difference here is that HAProxy cannot load-balance UDP! I have been using HAProxy on pfSense for years, but always had the problem that I couldn't load balance UDP. Having this inside NginxProxyManager would be really cool.

manfred-warta commented 1 year ago

Same problem as @ne0YT has explained: a newer version of HAProxy is able to load-balance UDP, but not the version shipped with the pfSense firewall. Therefore NPM would also be a very good place to have an alternative, especially when it comes to syslog UDP load balancing. DNS over UDP can be handled much more smoothly with PowerDNS, for sure.

ne0YT commented 1 year ago

But is that real load balancing? It looks like there's only NAT mode in HAProxy for UDP, so adding this already available feature to the GUI would just be very nice.
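For context, stock nginx (outside of what the NPM GUI exposes) can already load-balance UDP via the stream module, which is presumably what a native NPM feature would build on. A minimal sketch, assuming two syslog receivers on 514/udp (addresses and the pool name are placeholders):

    # the stream block sits alongside the http block, not inside it
    stream {
        upstream syslog_pool {
            server 192.168.1.10:514;   # placeholder backend
            server 192.168.1.11:514;   # placeholder backend
        }

        server {
            listen 514 udp;            # accept UDP datagrams on port 514
            proxy_pass syslog_pool;
            proxy_responses 0;         # syslog senders do not expect a reply
        }
    }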

ne0YT commented 1 year ago

As there is no health check in free nginx, it doesn't make too much sense to have load balancing.

manfred-warta commented 1 year ago

Arrrgg, I forgot, yes, you are right :-(

bendini20 commented 1 year ago

As there is no health check in free nginx, it doesn't make too much sense to have load balancing.

This is completely incorrect. The free version of NGINX can tell when an upstream host is offline and will not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (the free version) will only send traffic to the hosts that are online.

bendini20 commented 1 year ago

I avoid putting unnecessary tasks on a router. It already has the job of routing/filtering millions of packets per second as the router, firewall, and DNS server. Tacking on a reverse proxy is too much, IMO. In addition, running the reverse proxy where your Docker containers are means you can route via hostname rather than IP address.

AustinLeath commented 1 year ago

This is not the right way of thinking. Yes, there is a finite limit to the PPS a router can handle; however, pfSense is purpose-built (for many things, in fact), and if you run pfSense on a system that is resilient enough, you should not have to worry about its limitations. In a production enterprise environment, I would look into setting up multiple pfSense boxes in HA mode to lighten the load across the board.

ne0YT commented 1 year ago

As there is no health check in free nginx, it doesn't make too much sense to have load balancing.

This is completely incorrect. The free version of NGINX can tell when an upstream host is offline and will not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (the free version) will only send traffic to the hosts that are online.

It has no active health checks.
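To illustrate the distinction: open-source nginx only does passive health checking, via the max_fails and fail_timeout parameters on upstream servers, while the active health_check directive is an NGINX Plus feature. A minimal sketch with placeholder hostnames:

    upstream backend {
        # Passive checks (free nginx): after 3 failed attempts within 30s,
        # the server is considered unavailable for 30s and is skipped.
        server app1.example.com:8080 max_fails=3 fail_timeout=30s;
        server app2.example.com:8080 max_fails=3 fail_timeout=30s;
    }

    # Active checks would need NGINX Plus, roughly:
    #   location / {
    #       proxy_pass http://backend;
    #       health_check interval=5 fails=3 passes=2;
    #   }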

alex14dark commented 1 year ago

Hi guys, I know another way to achieve load balancing. First, create a custom directory under the data/nginx directory on the server where you deploy NPM, and then create a file named http.conf in that custom directory. The content of the file is:

    upstream your_server { server ... }

Then go back to your NPM web UI, select the corresponding Proxy Host, open the Advanced tab, and fill in the location configuration, such as:

    location /api { proxy_pass http://your_server; }

Like this, NPM achieves the effect of load balancing.
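A slightly more concrete sketch of the same idea (the pool name and backend addresses below are placeholders). Note that, per NPM's advanced-config docs, this file is included inside NPM's existing http block, so it should contain only the upstream definition, with no http { } or server { } wrapper:

    # /data/nginx/custom/http.conf
    upstream your_server {
        server 192.168.1.20:8080;   # placeholder backend
        server 192.168.1.21:8080;   # placeholder backend
    }

and in the Advanced tab of the corresponding Proxy Host:

    location /api {
        proxy_pass http://your_server;
    }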

tomitrescak commented 1 year ago

@alex14dark, can you please elaborate with a real example? It seems like you cracked it, but I am missing something, as it does not work for me.

FYI:

I got this in my http.conf in the data/nginx/root directory:

http {
  upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    check interval=5000 rise=2 fall=3 timeout=2000;
  }

  server {
    listen 80;
    server_name example.com;

    location / {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

What do I need to adjust on my proxy? How does NPM load my custom http.conf? Thanks.

alex14dark commented 1 year ago

@tomitrescak The custom configuration needs to be in the data/nginx/custom directory. If it does not exist, create it, and then create a http.conf file in this directory. Based on the information you provided, the file content should be:

    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        check interval=5000 rise=2 fall=3 timeout=2000;
    }

Finally, go back to your NPM web interface, select the corresponding proxy host, and add the following in the Advanced tab:

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

tomitrescak commented 1 year ago

@alex14dark adding this to my custom config leads to nginx crashing.

This is in my custom/http.conf:

http {
  upstream jobiq {
    server server1.com weight 100;
    server 127.0.0.1:3020;
    check interval=5000 rise=2 fall=3 timeout=2000;
  }
}

This is in the custom config of my reverse proxy:

location / {
      proxy_pass http://jobiq;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
}

Moreover, there is a big fat warning on the custom config page:

Please note, that any add_header or set_header directives added here will not be used by nginx. You will have to add a custom location '/' and add the header in the custom config there.

[UPDATE]

I found that I had some issues in the server config; this version of custom/http.conf made my server start:

upstream jobiq {
    server server1.com weight=100;
    server 127.0.0.1:3020;
  }

The issue is that I am now getting a "Too many redirects" error ;(

tomitrescak commented 1 year ago

I'm adding this as a separate post, as I managed to SOLVE this thanks to @alex14dark and ChatGPT :)

In data/nginx/custom/http.conf you set up your upstream; avoid the server directive (more info at https://nginxproxymanager.com/advanced-config/#custom-nginx-configurations):

upstream backend {
    server server1.com;
    server 127.0.0.1:3020 backup; # or whatever is your config
}

In the "custom configuration" of your proxy add the following:

location / {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
}

INFO: This will essentially remove the configuration of your endpoint as a reverse proxy (it removes the generated proxy configuration) and adds only your custom config. Quite cool, but unclear, I'd say.

MORE INFO for noobs like myself:

If you are redirecting to another server such as server1.com in the configuration above, make sure you configure the endpoint only as HTTP in NPM and do not request any HTTPS configuration. Maybe someone smarter can explain why; I do not know. All I know is that if the redirected endpoint was configured with HTTPS, I was getting a "too many redirects" error. Maybe this is a huge security hole, please let me know if that is so.
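As a side note, the same custom upstream block should also accept the standard nginx balancing options (e.g. least_conn, or ip_hash for sticky sessions) and per-server tuning; a small sketch with placeholder servers:

    upstream backend {
        least_conn;   # send new requests to the backend with the fewest active connections
        server server1.com max_fails=3 fail_timeout=30s;
        server 127.0.0.1:3020 max_fails=3 fail_timeout=30s;
    }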

alex14dark commented 1 year ago

@tomitrescak Glad to be able to help you!

haumanto commented 8 months ago

@tomitrescak can the access list still work while using your custom configuration?

leuedaniel commented 5 months ago

This is the solution for me at this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/

AustinLeath commented 5 months ago

Thanks! This is awesome


RobsonMi commented 5 months ago

This is the solution for me at this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/

This looks promising; however, I am interested in UDP load balancing and I suppose this method won't work for that use case, right?

velizmiguel commented 4 months ago

Is this still the best workaround? I have 5 servers that serve the same traffic and I would like to map them to a single URL using Nginx Proxy Manager.

hoanganht91 commented 3 months ago

Instead of waiting for NPM to support load balancing, I customized an image based on https://github.com/caprover/nginx-reverse-proxy, so you can just connect your domain to this service.

Source code: https://github.com/hoanganht91/nginx-reverse-proxy
Docker image: https://hub.docker.com/r/annh9x/nginx-reverse-proxy

This is an example compose file to test the load balancing config:

version: '3.8'

services:
  test1:
    image: strm/helloworld-http
  test2:
    image: strm/helloworld-http
  test3:
    image: strm/helloworld-http
  load-balancer:
    image: annh9x/nginx-reverse-proxy
    environment:
      UPSTREAM_HTTP_ADDRESS: 'server test1 weight=1;server test2 weight=2;server test3 weight=3;'
      CLIENT_MAX_BODY_SIZE: 256M

IliyaPIS commented 2 months ago

Hello everyone, it's working for me.

1. [image]
2. [image]
3.

upstream apimeserverpool {
    ip_hash;      # sticky sessions: the same client IP always hits the same backend
    server 192.168.1.56:8888 max_fails=3 fail_timeout=60s;
    server 192.168.1.96:8888 max_fails=3 fail_timeout=60s;
    keepalive 64; # keep up to 64 idle connections open to the backends
}

server {
    listen 80;
    listen [::]:80;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl on;       # note: deprecated in newer nginx; 'listen ... ssl' above already enables TLS
    ssl_stapling on;
    ssl_stapling_verify on;

    server_name apimeserver.com;

    ssl_certificate /data/custom_ssl/npm-3/fullchain.pem;
    ssl_certificate_key /data/custom_ssl/npm-3/privkey.pem;

    include conf.d/include/assets.conf;
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-esb_access.log proxy;
    error_log /data/logs/proxy-host-esb_error.log warn;

    location / {
        proxy_pass http://apimeserverpool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
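One note on the keepalive directive above: per the nginx docs, upstream keepalive only takes effect for proxied HTTP traffic if the location also switches to HTTP/1.1 and clears the Connection header, roughly:

    location / {
        proxy_pass http://apimeserverpool;
        proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # do not forward the client's Connection header
        # plus the proxy_set_header lines from the config above
    }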