NginxProxyManager / nginx-proxy-manager

Docker container for managing Nginx proxy hosts with a simple, powerful interface
https://nginxproxymanager.com
MIT License

Nginx Proxy Manager can be run in an HA model #3651

Open N4K185 opened 7 months ago

N4K185 commented 7 months ago

Dear jc21, I really like Nginx Proxy Manager. Is it possible to run it in a high-availability (HA) model?

I would sincerely appreciate it if you could respond and guide me. I will "buy a coffee" for you.

obsidiangroup commented 7 months ago

To me, this doesn't sound like it falls within the scope of NPM, and it doesn't sound like it would be hard to implement on your own.

1. Install and configure keepalived on both machines. This includes deciding what the "virtual" IP will be (a minimal config sketch follows below).
2. On both machines, deploy an NPM container. Point both containers' data directories to some shared storage (there are many ways of doing this; NFS is one), and point both at the same MySQL/MariaDB instance.
3. For any service that uses NPM as a proxy, set its address to the virtual IP. Clients never know which node is serving them. Take Host 1 down and Host 2 automatically becomes MASTER and starts serving; once Host 1 comes back up, it returns to STANDBY.
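
A minimal sketch of the keepalived piece, assuming a Debian/Ubuntu host, an `eth0` interface and `192.168.1.50` as the virtual IP (all of these are placeholders, adjust to your environment):

```bash
# Sketch only: the interface, priorities, password and virtual IP are placeholders.
# Run this on Host 1; on Host 2 use "state BACKUP" and a lower priority.
apt-get install -y keepalived

cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance NPM_VIP {
    state MASTER                 # BACKUP on the second host
    interface eth0               # adjust to your NIC
    virtual_router_id 51
    priority 150                 # e.g. 100 on the second host
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.50/24          # the "virtual" IP your clients will point at
    }
}
EOF

systemctl enable --now keepalived
```

Whichever node is alive with the highest priority owns the virtual IP; the other sits in BACKUP until it is needed.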

Now, support NPM, go buy a coffee. :)

CameronMunroe commented 6 months ago

keepalived would be perfect sauce for this.

miguelwill commented 6 months ago

Hello,

In my case, what I do to run 2 NPM instances in HA is the following, which lines up with what was mentioned above:

HA IP: keepalived provides a highly available IP, so that if there is a problem on the primary host or VM, the IP is brought up on the secondary host.

HA MariaDB: bitnami/mariadb makes it very easy to run in replication mode; you specify the variables needed for the replication user on the master, and the connection details needed for synchronization on the slave/secondary. That way, even if the primary dies completely, the secondary has all the data (remember to set how long the binary logs are kept). A sketch of such a pair is shown below.
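
Not my exact setup, but a minimal sketch of what such a bitnami/mariadb master/replica pair can look like with plain `docker run` commands. The names, passwords and the `192.168.1.51` master IP are placeholders, the variable names should be double-checked against the image's README for your version, and binary-log retention is set here via `MARIADB_EXTRA_FLAGS` as one possible way to bound binlog lifetime:

```bash
# Sketch only: all names, passwords and IPs are placeholders.

# On the primary host (master):
docker run -d --name npm-db \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -e MARIADB_DATABASE=npm \
  -e MARIADB_USER=npm \
  -e MARIADB_PASSWORD=changeme \
  -e MARIADB_REPLICATION_MODE=master \
  -e MARIADB_REPLICATION_USER=repl \
  -e MARIADB_REPLICATION_PASSWORD=replpass \
  -e MARIADB_EXTRA_FLAGS='--expire-logs-days=7' \
  -p 3306:3306 \
  bitnami/mariadb:latest

# On the secondary host (slave/replica):
docker run -d --name npm-db \
  -e MARIADB_REPLICATION_MODE=slave \
  -e MARIADB_REPLICATION_USER=repl \
  -e MARIADB_REPLICATION_PASSWORD=replpass \
  -e MARIADB_MASTER_HOST=192.168.1.51 \
  -e MARIADB_MASTER_PORT_NUMBER=3306 \
  -e MARIADB_MASTER_ROOT_PASSWORD=changeme \
  -p 3306:3306 \
  bitnami/mariadb:latest
```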

For file synchronization I use "incrontab" (since I did not want to depend on an NFS host). When it detects a change in the configuration files, it runs a script that does two things: 1. it synchronizes the data folder with rsync, skipping the logs folder and config/config.json (in case you still use it); 2. it connects to the secondary host and, via a script saved there, performs a "reload" of the nginx process in the container to apply the changes that were just synchronized.

I hope this helps someone and that you can take whatever is useful to you from it :D

Regards

Edit: Today I updated to version 2.11.1 and noticed that on the secondary, the boot process was trying to check or update a migration table; to solve the problem, the secondary's database connection had to be pointed at the IP of the db-master xD
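
For reference, NPM reads its MySQL/MariaDB connection from the `DB_MYSQL_*` environment variables, so on the secondary this just means pointing `DB_MYSQL_HOST` at the db-master. A minimal sketch, with the IP, credentials, ports and data paths as placeholders:

```bash
# Sketch only: the IP, credentials and volume paths are placeholders.
# DB_MYSQL_HOST points at the db-master's IP, not at the local replica.
docker run -d --name proxy-manager \
  -e DB_MYSQL_HOST=192.168.1.51 \
  -e DB_MYSQL_PORT=3306 \
  -e DB_MYSQL_USER=npm \
  -e DB_MYSQL_PASSWORD=changeme \
  -e DB_MYSQL_NAME=npm \
  -v /opt/proxy-manager/data:/data \
  -v /opt/proxy-manager/letsencrypt:/etc/letsencrypt \
  -p 80:80 -p 443:443 -p 81:81 \
  jc21/nginx-proxy-manager:latest
```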

empyrials commented 6 months ago

Dang miguelwill, that sounds awesome. Excluding some passwords & IPs of course, care to share any scripts you have for that setup?

miguelwill commented 5 months ago

Hello. Basically, the bitnami/mariadb image was used, which includes master/slave replication features configured via parameters in environment variables,

and the 2nd node uses the same image but in SLAVE mode. With this, the databases are kept synchronized and up to date.

Regarding file synchronization, I use incrontab with the following entries to monitor multiple paths:

```
/opt/proxy-manager/data/nginx/                  IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
/opt/proxy-manager/data/nginx/default_host/     IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
/opt/proxy-manager/data/nginx/default_www/      IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
/opt/proxy-manager/data/nginx/proxy_host/       IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
/opt/proxy-manager/data/nginx/redirection_host/ IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
/opt/proxy-manager/data/nginx/stream/           IN_DELETE,IN_CLOSE_WRITE,IN_NO_LOOP /usr/local/bin/sync-files.sh $@ $# $% $&
```

The synchronization script syncs the modified files via rsync, and then runs a script on the 2nd server to execute a reload of the nginx service:

For the reload:

```bash
docker compose -f /root/proxy-manager/docker-compose.yml exec -T proxy-manager nginx -s reload -c /etc/nginx/nginx.conf
```
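
The script itself wasn't posted, so here is a minimal sketch of what such a `sync-files.sh` could look like, assuming SSH key authentication between the hosts and simply resyncing the whole data folder rather than using the arguments incron passes in (the secondary's address and the paths are placeholders):

```bash
#!/usr/bin/env bash
# Sketch only: the secondary's address and the paths are placeholders,
# and SSH key authentication between the hosts is assumed.
set -euo pipefail

SECONDARY="root@192.168.1.52"      # placeholder address of the 2nd host
SRC="/opt/proxy-manager/data/"
DST="$SECONDARY:/opt/proxy-manager/data/"

# 1) Sync the data folder, skipping the logs folder and config/config.json
rsync -az --delete \
  --exclude 'logs/' \
  --exclude 'config/config.json' \
  "$SRC" "$DST"

# 2) Reload nginx inside the NPM container on the secondary
ssh "$SECONDARY" \
  "docker compose -f /root/proxy-manager/docker-compose.yml exec -T proxy-manager nginx -s reload -c /etc/nginx/nginx.conf"
```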