NginxProxyManager / nginx-proxy-manager

Docker container for managing Nginx proxy hosts with a simple, powerful interface
https://nginxproxymanager.com
MIT License

UFW is blocking the proxied websites! #3350

Open RonAlmog opened 12 months ago

RonAlmog commented 12 months ago


Describe the bug I'm using NPM, and it works. I only have one machine: a droplet on DigitalOcean running Ubuntu 22.10. One Docker container runs NPM, latest version, like this: image: 'jc21/nginx-proxy-manager:latest'. I have several folders, one per website, and in each one there is a web app that exposes a different port (3000, 3001, 3002, etc.). In the proxy manager I have added each of them with its respective port and assigned SSL, and it all works. Now I'm enabling the firewall, ufw. Here is my ufw status:

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443                        ALLOW       Anywhere
81                         ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
80 (v6)                    ALLOW       Anywhere (v6)
81 (v6)                    ALLOW       Anywhere (v6)

If I enable the firewall (sudo ufw enable), then all websites are blocked. I use Cloudflare, so I get the Cloudflare error page with error code 504. Disable ufw: the websites work. Enable ufw: the websites are blocked again.

I read somewhere that if I allow the original ports on the firewall, it will work, and that is true. For example, if I add sudo ufw allow 3001, then that website works! So I could go ahead and add all these ports, but that doesn't feel right. They should only be visible inside the server, no? The outside world should only access my server through 80 and 443, right? I can't believe it's a bug, because I guess most people use ufw (no?) and it should be a very common setup. So maybe I'm doing something wrong?

Nginx Proxy Manager Version v2.10.4

To Reproduce as I described above

Expected behavior I think I should not need to expose the 'hidden' ports in ufw.


Operating System Linux (Ubuntu 22)


rj-xy commented 12 months ago

What are you setting up as the host/IP for these services in NPM? Are they all Docker containers? Docker containers are isolated from each other unless you add them to a docker network. My guess is NPM is referencing your other services via the host. If you create one docker network to which they are all connected (or one network from each web app to NPM), then the traffic stays internal and doesn't need to be routed through ufw at all.

Using a Docker Compose file or Portainer is the easiest way to achieve this; you can also do it manually via the Docker CLI.
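For illustration, here is a minimal sketch of that idea for one web app (the names my-web-app, proxy_net and the image are placeholders, not taken from your setup): the app joins a user-defined network that NPM is also attached to, publishes no ports on the host, and NPM forwards to it by container name.

version: "3.7"
services:
  app:
    image: registry.example.com/my-web-app:latest
    container_name: my-web-app
    restart: always
    # no "ports:" mapping: traffic stays on the docker network,
    # so UFW never needs a rule for port 3000
    networks:
      - proxy_net

networks:
  proxy_net:
    external: true   # a shared network that the NPM container also joins

In NPM, the forward hostname would then be my-web-app and the forward port 3000 (whatever port the app listens on inside the container).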

RonAlmog commented 12 months ago

OK, let me provide some more info. About how I set up the host/IP, here is what I have: [screenshot], and for each one it looks like this: [screenshot]. Yes, the web applications are all Docker containers. In each folder I have a docker-compose.yml. Some are simple, like this one:

version: "3.7"
services:
  app:
    container_name: github-cicd-next
    image: registry.gitlab.com/ronalmog/xyzabc:latest
    restart: always
    env_file:
      - .env
    ports:
      - 3000:3000

and some are more involved and contain other containers (db, backend, etc.), and then they have their own internal network, like this one:

version: '3.7'
services:
  backend:
    image: registry.gitlab.com/ronalmog/mywebsite-api-23:latest
    container_name: mywebsite-api
    restart: unless-stopped
    env_file: .env
    environment:
      - NODE_ENV=production
      - DATABASE_CONNECTION=${DATABASE_CONNECTION}
    ports:
      - 3001:3000
      - 27017:27017
    networks:
      - mywebsite_net

  mywebsite-web:
    image: registry.gitlab.com/ronalmog/mywebsite-web-23:latest
    container_name: mywebsite-web
    restart: unless-stopped
    depends_on:
      - backend
    env_file: .env
    environment:
      BASE_URL: http://backend:3001
    ports:
      - 4000:4000
    command: 'npm run start'
    networks:
      - mywebsite_net

networks:
  mywebsite_net:
    driver: bridge

Those internal networks, like mywebsite_net in this case, are different for each folder/project. They are intended to be isolated and to expose only the website, in this case on 4000.

Please let me know if you see anything wrong, or what can be improved in this setup. Thanks!

rj-xy commented 12 months ago

@RonAlmog can you provide details about how you host NPM? E.g. a standalone Docker container, or within a compose file? And which networks is the NPM container attached to?

For example, mine is connected to two networks: [screenshot]

Docker comes with three default networks; I added the other two, one manually (1) and one with docker-compose (2): [screenshot]

RonAlmog commented 12 months ago

@rj-xy, of course. It is a docker-compose.yml file; here it is:

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "mysql"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "xxx"
      DB_MYSQL_PASSWORD: "xyz"
      DB_MYSQL_NAME: "nginxproxy"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  mysql:
    image: 'jc21/mariadb-aria:latest'
    container_name: mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'asdfasdf'
      MYSQL_DATABASE: 'nginxproxy'
      MYSQL_USER: 'fgfgf'
      MYSQL_PASSWORD: 'ghghghgh'
    volumes:
      - ./data/mysql:/var/lib/mysql

Sorry for my ignorance, I didn't know I had to attach a network to it, or how to do that... Thanks for any tips on this.

RonAlmog commented 12 months ago

Here is my docker network ls (I don't know how you printed your networks so nicely):

NETWORK ID     NAME                     DRIVER    SCOPE
b04d004ab86b   bridge                   bridge    local
fe7d722b7448   host                     host      local
99648a93d875   nginx-proxy_default      bridge    local
e051d87db3c1   none                     null      local
02c502e7b7a9   application1_default     bridge    local
58efc9ae1667   application2_net         bridge    local
b056788fcb0a   application3_default     bridge    local
e201f15eacbf   postgres_local_net       bridge    local
816744f128da   application4_default     bridge    local
963a59715171   application5_local_net   bridge    local
8226bed03762   application5_net         bridge    local
6cf110a418d6   application6_default     bridge    local
250dad1f70d2   application6_local_net   bridge    local
925ce769f0d4   watchtower_default       bridge    local

rj-xy commented 12 months ago

Oh I see, 147.182.156.0 is your external (internet) IP.

First of all, you don't want to be exposing all these ports on your host (3000, 27017, 4000). I also hope you have only the required ports being forwarded from your router (80 and 443 are all you need), plus the SSH port if required (make sure you have SSHGuard set up).

All other comms should be done internally within the docker networks.

You don't have to publish any other ports from your containers except NPM's ports (80, 81 and 443). Do not port-forward port 81 from your router.

You will need to create a new docker network and attach it to the mywebsite-web and nginx-proxy containers; you will then be able to reference the container from NPM directly as http://mywebsite-web:4000.
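A minimal sketch of what that could look like, assuming a shared network called npm_net (the name is arbitrary, not from this thread):

# one-time on the host, then attach the running NPM container:
#   docker network create npm_net
#   docker network connect npm_net nginx-proxy
#
# and in the website's docker-compose.yml:
services:
  mywebsite-web:
    # ...existing settings unchanged...
    networks:
      - mywebsite_net   # keeps backend/db traffic internal to this project
      - npm_net         # lets the NPM container reach mywebsite-web by name

networks:
  mywebsite_net:
    driver: bridge
  npm_net:
    external: true      # created outside this compose project

The proxy host in NPM can then point at mywebsite-web on port 4000, and the 4000:4000 ports mapping (and any UFW rule for it) can be dropped.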

rj-xy commented 12 months ago

You might want to add Portainer to your compose file; it will make things a little easier. Just add the port service to your compose file:

version: '3.8'
services:
  port:
    container_name: port
    image: 'portainer/portainer-ce:latest'
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    networks:
      - default
    restart: unless-stopped
    ports:
      - '9000:9000'
    volumes:
      - portainer_data:/data
      - /var/run/docker.sock:/var/run/docker.sock

  proxy:
    container_name: proxy
    image: 'jc21/nginx-proxy-manager:latest'
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    networks:
      - default
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - proxy_data:/data
      - proxy_letsencrypt:/etc/letsencrypt

volumes:
  portainer_data:
  proxy_data:
  proxy_letsencrypt:

Then add it to NPM: [screenshot]

rj-xy commented 12 months ago

Add NPM as a host too, so it works over HTTPS: [screenshot]

RonAlmog commented 12 months ago

Hey @rj-xy, thanks so much for this answer! I'm working on this. Just to clarify: those 'port' and 'proxy' that you put as the forward hostname/IP, are these the container names? So in your case, you have a container named port that exposes port 9000, and a container named proxy that exposes 81?

rj-xy commented 12 months ago

@RonAlmog yes, you can reference the containers by name if they share at least one network. Let me know if you need any further help.

If you like, set up a GitHub repo with your Docker/compose files; that way it will be easier for me to make comments and suggestions.

Then share the link to the repo here, and close this issue, as it's not an NPM bug.

RonAlmog commented 12 months ago

OK, so the first step is a success! I have the proxy up and running, with Portainer. I assigned both of them domain names, and they are up and running with users and passwords. The next step is not working yet. I want to have websites exposed with NPM, so the first one is a simple Next.js website exposing port 3000. I added the network 'default' in the hope that it would be shared with the proxy and they would talk. That's not working.

@rj-xy , as you suggested, I created a repo for that: https://github.com/RonAlmog/DockerServer

So the first step is to allow a simple website to be forwarded. The next step is more advanced: a docker-compose with several containers linked to each other, for example a database, backend system, website, and admin website. They all share a common network, but to the outside world I want to expose only the websites: say, the website on port 4000 and the admin website on port 4001. I want to use those with NPM to publish to the world. Thanks so much for your help!

RonAlmog commented 11 months ago

OK, we have more progress here. If only I had read the f* manual... https://nginxproxymanager.com/advanced-config/#best-practice-use-a-docker-network I added an 'external' network to both the proxy and the website, like this:

networks:
  default:
    external: true
    name: mynetwork

And now they are talking nicely to each other: my container name is recognized, and the website is served without exposing the internal port in UFW. A win!
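(One note: with external: true, compose does not create the network itself, so it has to be created once on the host beforehand; assuming the placeholder name mynetwork from the snippet above, that one-time step is:)

docker network create mynetwork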

Now there is only one small issue: the website does not see the API. They are both containers in the same docker-compose, but somehow I managed to break the internal network... To be continued...

rj-xy commented 11 months ago

@RonAlmog use expose (https://docs.docker.com/compose/compose-file/compose-file-v3/#expose) instead of ports; you don't need to publish those ports on your host, just make them reachable on the docker network.

My guess is UFW is blocking these because they are not defined in your UFW rules. To confirm, disable UFW temporarily and see if it works. If so, just use expose instead of ports, and you should be golden!
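For illustration, a sketch of the backend service from the earlier compose file using expose (the image and port numbers are taken from that file; treat this as an illustration, not a drop-in replacement):

services:
  backend:
    image: registry.gitlab.com/ronalmog/mywebsite-api-23:latest
    container_name: mywebsite-api
    # reachable from other containers on mywebsite_net as backend:3000,
    # but nothing is published on the host, so UFW never sees these ports
    expose:
      - "3000"
      - "27017"
    networks:
      - mywebsite_net

Note that container-to-container URLs use the container's internal port, so with this layout the web container's BASE_URL would point at http://backend:3000 rather than the old host-mapped 3001 (assuming the API listens on 3000 inside the container, as the 3001:3000 mapping suggests).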

levinside commented 11 months ago

Hello, here is the solution for my case: https://pythonspeed.com/articles/docker-connection-refused/

TLDR: Change app's listening host from 127.0.0.1 to 0.0.0.0
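A hedged sketch of how that often looks in a compose file for a Node app (the HOST variable is an assumption; whether an environment variable or a CLI flag controls the bind address depends entirely on the framework):

services:
  app:
    image: registry.example.com/my-app:latest   # placeholder image
    environment:
      # listen on all interfaces so other containers (e.g. NPM) can connect;
      # binding to 127.0.0.1 only accepts connections from inside this container
      HOST: "0.0.0.0"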

github-actions[bot] commented 4 months ago

Issue is now considered stale. If you want to keep it open, please comment :+1: