NginxProxyManager / nginx-proxy-manager

Docker container for managing Nginx proxy hosts with a simple, powerful interface
https://nginxproxymanager.com
MIT License

Unable to reverse proxy to localhost #555

Open gururise opened 4 years ago

gururise commented 4 years ago

What is troubling you?

I have Nginx Proxy Manager as my first point of entry into my Virtual Private Server for ports 80/443, which then routes out to the various containers I have running on the same server. I have the Forward Hostname / IP set to the IP address of my VPS and the port of the container.

Since Docker exposes all the container ports to the open internet, I want to bind my containers' ports to localhost so I don't have a bunch of ports exposed on my VPS, i.e. a port configuration like 127.0.0.1:3001 -> 80/tcp.
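For illustration, a minimal compose sketch of that kind of loopback-only binding (the service name and image are placeholders, not from this thread):

services:
  myapp:
    image: myapp:latest
    ports:
      - "127.0.0.1:3001:80"   # reachable only from the VPS itself, not from the internet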

Unfortunately, when I do this, I am no longer able to reverse proxy to my containers. On my VPS, I can verify I am able to access 127.0.0.1:3000, but the reverse proxy no longer works. I've even tried setting the Forward Hostname/IP to 127.0.0.1 along with the port, but that results in a 502 error.

I figure this is because the nginx reverse proxy container cannot access localhost in that way, but even using the IP address of my VPS no longer works.

Does anyone have a solution?

gururise commented 4 years ago

I think I found a solution:

  1. Create new docker network
  2. Create new container for the service you want to reverse proxy, and only expose the necessary port (ie. no port forwarding).
  3. Add nginx-reverse-proxy container and the newly created container to the new network.
  4. The host field can now use the IP that is assigned to the container.

Seems to work, is there any problem or security issues with doing it this way?
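Roughly, the CLI equivalent of those steps might look like this (the network and container names are only illustrative):

docker network create proxy-net
docker network connect proxy-net nginx-proxy-manager-app   # the NPM container
docker network connect proxy-net my-service                # the container to be proxied
docker network inspect proxy-net                           # shows the IP assigned to my-service

On a user-defined network like this, Docker's built-in DNS should also let you use the container name instead of its IP in the host field.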

LivingWithHippos commented 4 years ago

Thanks, I was looking for this. I wanted to use my docker containers from a subdomain without exposing any ports. Security-wise it should be OK, because this is also how the linuxserver.io letsencrypt container works (though there you lose all the sweet nginx manager UI), but I'm still a beginner at this so I wouldn't trust this 100%.

For anyone wanting to do this, e.g. redirecting whoogle.mydomain.tld to your whoogle container, without exposing any port to the network:

  1. Create a docker network, either by hand, by specifying it in a docker-compose.yml file, or by using a default network (if your docker-compose.yml is in a folder called ngix-manager, _default is appended to the folder name to form the network name -> ngix-manager_default; check your networks with docker network ls). Also, all the services in the same docker-compose.yml or in the same folder get linked to the same network.

  2. Add all the containers you want to manage from nginx manager to the same network (don't forget nginx manager's own network) by appending this to their docker-compose.yml:

version: '2'
services:
  whoogle:
    .....
    networks:
      - ngix-manager_default

networks:
  ngix-manager_default:
    external: true

N.B.: remove all the ports since we don't want to expose them, but note them down.

  3. Spin up your containers with docker-compose up -d

  4. Create a new proxy host in nginx manager, add your subdomain, put your service name in forward hostname (whoogle here) and the port you noted in step 2 in forward port (5000 for whoogle). Add your SSL settings and you're done.

redtripleAAA commented 4 years ago

Are you guys talking about the same issue I just reported? https://github.com/jc21/nginx-proxy-manager/issues/588

gabn88 commented 3 years ago

@LivingWithHippos @jc21 Thank you sooo much! Have been struggling for 2 days to set this up using the nginx-proxy docker image in combination with the letsencrypt-nginx-proxy-companion... And I'm sure I added the network too, but somehow it didn't work. Even with a dozen restarts, etc... Was almost wondering if it was the WSL2 setup breaking things. And now with nginx-proxy-manager in combination with this post it WORKS šŸŽ‰ šŸ™‡ā€ā™‚ļø

EDIT: Don't mean to blame any other package with this post. jwilder/nginx-proxy is also a great package, just like letsencrypt-nginx-proxy-companion. But this package makes it easier to quickly test some settings!

Sp33dFr34k commented 3 years ago

I am able to see the login page, but I am unable to actually log in (no message, the form just clears) when I try to access Docker containers which are running on localhost. It works fine when I'm not restricting them with any IPs, but as soon as I do, I see the login screen but it will not let me log in anymore. Tested with Portainer and NPM itself. Running 2.7.1. As soon as I set access back to public again it works. In the IPs I have declared my external IP, my internal range, and I've tried including the Docker network IPs; it still doesn't work. Am I missing something?

ZacKeilholz commented 3 years ago

@LivingWithHippos @jc21 Thank you sooo much! Have been struggling for 2 days to set this up using the nginx-proxy docker image in combination with the letsencrypt-nginx-proxy-companion... And I'm sure I added the network too, but somehow it didn't work. Even with a dozen restarts, etc... Was almost wondering if it was the WSL2 setup breaking things. And now with nginx-proxy-manager in combination with this post it WORKS šŸŽ‰ šŸ™‡ā€ā™‚ļø

EDIT: Don't mean to blame any other package with this post. jwilder/nginx-proxy is also a great package, just like letsencrypt-nginx-proxy-companion. But this package makes it easier to quickly test some settings!

I'm also using WSL2... I was having a ton of issues and could not get this to work until I defined a network in docker-compose and applied it to both the service and NPM. I've since deleted the defined network and everything works fine, whereas I was getting 502 errors before.

@LivingWithHippos I'm also a super newb, but I believe instead of trying to remember all of these ports you've removed, you can use expose:

expose:
  - 1234 
  - 56
  - 78

And this is actually a way of documenting the special ports your services need in docker-compose without actually exposing them to the host machine.

LivingWithHippos commented 3 years ago

@ZacKeilholz I started by simply commenting out the ports; it's even faster when you use ready-made compose files, but it's nice to know about expose.

    #ports:
    #  - 8080:2368

Another trick I started using is adding a separate network for the backend service that doesn't need exposure to the proxy.

version: '3.1'

services:
  ghost:
    image: ghost:3-alpine
    ...
    networks:
      - ngix-manager_default
      - backend

  mysql:
    image: mysql:5
    ...
    networks:
      - backend

networks:
  ngix-manager_default:
    external: true
  backend:
    name: ghost_backend
    driver: bridge

alexhorner commented 3 years ago

Hi there,

Whilst adding all of your containers to a network is an option, what would be the best thing to do in a situation where a non-dockerised service is running on the host and exposing a port bound to 127.0.0.1?

An example of such a service could be a server management panel which binds to 127.0.0.1:10000 which I would then like to proxy to a subdomain via NGINX proxy manager.

LivingWithHippos commented 3 years ago

Hi there,

Whilst adding all of your containers to a network is an option, what would be the best thing to do in a situation where a non-dockerised service is running on the host and exposing a port bound to 127.0.0.1?

An example of such a service could be a server management panel which binds to 127.0.0.1:10000 which I would then like to proxy to a subdomain via NGINX proxy manager.

That's just the normal use of nginx proxy manager, you put the subdomain you want and 127.0.0.1:10000 as redirect destination

alexhorner commented 3 years ago

Hi there, Whilst adding all of your containers to a network is an option, what would be the best thing to do in a situation where a non-dockerised service is running on the host and exposing a port bound to 127.0.0.1? An example of such a service could be a server management panel which binds to 127.0.0.1:10000 which I would then like to proxy to a subdomain via NGINX proxy manager.

That's just the normal use of nginx proxy manager, you put the subdomain you want and 127.0.0.1:10000 as redirect destination

As a simple-to-reproduce test, I have installed Apache2 on the host machine outside of Docker using apt-get and configured it for 0.0.0.0:8080.

If I go to mydomain.com:8080 I am able to successfully access the Apache web server; however, when proxying http://127.0.0.1:8080 | HTTP only | Public in NGINX proxy manager, the status is Online but I get a 502 Bad Gateway error upon access.

Is this a misconfiguration on my part or is something not working?

LivingWithHippos commented 3 years ago

My bad; the point about adding nginx to the same network as the service you want to proxy still stands. To let docker see localhost you have to play with the network mode: using network_mode: host means the container's network is not isolated from the host network. It only works on Linux. You can also look up host.docker.internal:host-gateway.

Untested
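For the host.docker.internal route, a minimal sketch of what that could look like (untested; assumes Docker Engine 20.10+ on Linux, and the image tag is just the commonly used one):

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets the container reach the host via the bridge gateway

With that in place, host.docker.internal can be used as the Forward Hostname in a proxy host. Note that the host service has to listen on an address the bridge can reach (0.0.0.0 or the docker bridge IP), not only on 127.0.0.1.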

alexhorner commented 3 years ago

I'll give this a try. My only concern is that I don't know how secure this would be. I assume that if I use host networking, the system firewall will apply to the container and allow me to block any ports that should not be exposed on the host's IP, as I believe 3000 is used internally by nginx proxy manager.

I shall test this tomorrow and reply with my findings, thanks for the suggestion!

alexhorner commented 3 years ago

I have now tested this and the system firewall properly protects all of the internal ports of NPM, and I am also able to use services proxied through it. Seems all good, will update this issue if I notice any... issues šŸ˜‚

phocks commented 1 year ago

In my case I was trying to proxy to the host machine that nginx-proxy-manager was running on, but through an SSH port-forward tunnel. The trick that worked for me was to use the public address of the host, make sure GatewayPorts is set to clientspecified in the sshd config (/etc/ssh/sshd_config), and make sure my host and port were accessible from the internet.

https://superuser.com/a/591963/373302

Probably not the best way to do it, but it worked.
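A rough sketch of that setup (hostnames and ports are placeholders, not taken from this thread):

# on the server running nginx-proxy-manager, in /etc/ssh/sshd_config:
#   GatewayPorts clientspecified
# then, from the machine that actually runs the service:
ssh -N -R 0.0.0.0:3001:localhost:3001 user@public-host
# and point the NPM proxy host at public-host:3001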

skironDotNet commented 1 year ago

I think the "problem" is one of understanding: when you try to forward to 127.0.0.1, what you are telling NPM is to use localhost OF THE NPM CONTAINER, where nothing is hosted, while you are thinking of forwarding to localhost of the host machine. To confirm this, try docker exec -it <npm-container> bash and then curl 127.0.0.1:port; you'll see there is nothing there. I have no answer for how to access the host's localhost from inside the NPM container. Obviously docker networking must be involved.

alexhorner commented 1 year ago

I think the "problem" is one of understanding: when you try to forward to 127.0.0.1, what you are telling NPM is to use localhost OF THE NPM CONTAINER, where nothing is hosted, while you are thinking of forwarding to localhost of the host machine. To confirm this, try docker exec -it <npm-container> bash and then curl 127.0.0.1:port; you'll see there is nothing there. I have no answer for how to access the host's localhost from inside the NPM container. Obviously docker networking must be involved.

Hello, I am now two years older and wiser since I last replied to this issue.

The way to do it on Windows and macOS is to use the domain host.docker.internal to access services on the host machine.

As of the time of this reply, host.docker.internal is not supported on Linux hosts, so my previous reply of using host networking mode is the only workaround I'm aware of.

I no longer use NPM; I have learned it is almost always easier to just write the NGINX config myself. It's really not as scary as people think, it's quite easy to learn, and it's a lot more reliable and flexible.

skironDotNet commented 1 year ago

I was answering for other noobs like me who host from a VPS :) Another bit of info: say we have domain.net. When we run a container with the default port mapping 8080:80 we expose 8080 to the public, because the bind is to 0.0.0.0, so domain.net:8080 is reachable (I know this is funny basics to the gurus). So one may want to run the container with the port mapping 127.0.0.1:8080:80 to bind to the host's internal network and thus prevent exposing domain.net:8080, which is the desired thing, and we are back to the original "Unable to reverse proxy to localhost". So, in order not to expose a port to the public and still be able to host via NPM, we can bind the container to docker0, which by default is 172.17.0.1, i.e. run the container with the port mapping 172.17.0.1:8080:80. This way we bind to docker0, which is not exposed to the public (at least it should not be), but NPM can access it, so we create a proxy host forwarding to 172.17.0.1:8080.

But if you run UFW you must allow port 8080, otherwise NPM still won't be able to connect, and this is strange because "normally" docker containers ignore the UFW firewall.

Hope this will help someone, at least to understand the basics of networking (I'm still learning :/)
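A sketch of the docker0 approach described above (it assumes the default bridge address 172.17.0.1 and an 8080:80 mapping; the image name is just a placeholder):

docker run -d -p 172.17.0.1:8080:80 some-image   # bound to docker0, not to the public interface
# in NPM, create a proxy host forwarding to 172.17.0.1:8080
sudo ufw allow 8080                               # only needed if UFW blocks the connection, as noted above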

alexhorner commented 1 year ago

I was answering for other noobs like me who host from a VPS :) Another bit of info: say we have domain.net. When we run a container with the default port mapping 8080:80 we expose 8080 to the public, because the bind is to 0.0.0.0, so domain.net:8080 is reachable (I know this is funny basics to the gurus). So one may want to run the container with the port mapping 127.0.0.1:8080:80 to bind to the host's internal network and thus prevent exposing domain.net:8080, which is the desired thing, and we are back to the original "Unable to reverse proxy to localhost". So, in order not to expose a port to the public and still be able to host via NPM, we can bind the container to docker0, which by default is 172.17.0.1, i.e. run the container with the port mapping 172.17.0.1:8080:80. This way we bind to docker0, which is not exposed to the public (at least it should not be), but NPM can access it, so we create a proxy host forwarding to 172.17.0.1:8080.

But if you run UFW you must allow port 8080, otherwise NPM still won't be able to connect, and this is strange because "normally" docker containers ignore the UFW firewall.

Hope this will help someone, at least to understand the basics of networking (I'm still learning :/)

I can't say this is a method I have tested before, but binding to 127.0.0.1 definitely is, and I'd still personally recommend host networking mode when doing that.

It's less complex, and I'm pretty sure it doesn't have negative implications compared to the other method you laid out. I'd love to know if it does, though.

noize-e commented 1 year ago

I like the NPM interface and the easy certificate setup with Let's Encrypt, so I didn't want to get rid of it. The following steps describe the solution that worked for me.

  1. Create a new network in docker: docker network create npmnet

  2. Add the npmnet network as the default one in the docker-compose.yml file of NPM and of every service managed by it:

    version: '3'

    ...service parameters

    networks:
      default:
        external: true
        name: npmnet

  3. Don't forget to also remove the ports and expose attributes from the same file.

  4. Restart NPM and all the other services: docker compose down && docker compose up -d

  5. Inspect the network to verify that the services are linked:

    docker network inspect npmnet

    You will see something like

    [
    {
        "Name": "npmnet",
        ...
        "Containers": {
            "....": {
                "Name": "service-1",
                "EndpointID": "...",
                "MacAddress": "...",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "....": {
                "Name": "service-2",
                "EndpointID": "...",
                "MacAddress": "...",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "....": {
                "Name": "npm",
                "EndpointID": "...",
                "MacAddress": "...",
                "IPv4Address": "172.20.0.5/16",
                "IPv6Address": ""
            }
        },
        ...
    }
    ]

    As you can see, each service has been assigned an IP in the same subnet. You can verify it by opening a shell in the NPM container and pinging any service on the same network.

  6. Now, log in to NPM's dashboard and, for every proxy host already set up, change the Forward Hostname / IP to the one defined in the npmnet network (for example, for service-1 the IP would be 172.20.0.2) and the Forward Port to 80.

That's it; the services are isolated to the bridge docker network only.

Note! Just to ensure everything is set up correctly, I would run a port scan with nmap.
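If you want to double-check the connectivity from step 5 without entering the NPM container, one option (assuming the busybox image and the service name service-1 from the example output) is a throwaway container on the same network:

docker run --rm --network npmnet busybox ping -c 1 service-1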

AlanMW commented 10 months ago

I've had everything working smoothly for months but am now getting a 504 error and an "upstream timed out (110: Connection timed out) while connecting to upstream" error. Previously I was just pointing NPM at my computer's IP:port, but now all of a sudden it doesn't work. host.docker.internal does work for local containers, but what about containers that are on different machines on the same network? For some reason NPM doesn't see any other computers on my local network.

Xav-v commented 5 months ago

Just a small tip I'm using now. Probably not the best, but at least it lets me reconfigure IPs for many hosts at once.

Create a file under data/nginx/custom/http_top.conf:

map $host $server {
    default "172.17.0.1";
}
map $host $otherhost {
    default "xx.xx.xx.xx";
}

Then use $server or $otherhost as the value of each "Forward Hostname / IP". The same applies if you have a remote IP (e.g. it works with Tailscale IPs without any issue).

nikhilweee commented 3 months ago

Just came here to say that @LivingWithHippos's solution worked like a charm, except when I was setting up Immich, where I had to add the default network in addition to NPM's default network in the immich-server service's networks section.

version: '2'
services:
  whoogle:
    .....
    networks:
      - default # had to add this
      - ngix-manager_default

networks:
  ngix-manager_default:
    external: true

tooty-1135 commented 3 months ago

just add network_mode: host in docker-compose.yml

version: '3.8'
services:
  app:
    ports:
      ...

    # add this
    network_mode: host

and run ufw allow 81

it works for me