hakunamatata97k closed this issue 3 years ago
I ended up whitelisting the public IP address of my router, and somehow it covered all the devices within my network.
Experiencing the same issue with the access list. It seems only external IP addresses are accepted in the access list, which isn't fun when your ISP assigns the IP dynamically.
Hopefully this will be implemented soon; without a fixed IP, whitelisting the external address doesn't really help.
I ended up whitelisting the public IP address of my router, and somehow it covered all the devices within my network.
I am running into the same issue. What subnet did you use for the external IP? /32?
Same issue. Unable to resolve it using internal single IP, subnet range, or external IP.
Unfortunately there is nothing we can do about that. If you look into the access log of your proxy host, found at /data/logs/proxy-host-<id>_access.log, you will see something like [Client 172.19.0.1] on each line, which shows you which IP nginx received that request from.
If your NPM instance is on the public internet, and not in your local network, local IP addresses are NOT available, and nginx will only receive your router's public IP address as the requesting client.
If your NPM instance is within your local network, there is a quirk in how docker passes the IP to the container, causing the IP to be something like 172.19.x.x. This is the IP address of the docker bridge gateway. I think this should not happen if you send the request from a different machine than the one NPM is hosted on. Switching to host network mode in docker can resolve this issue, since the docker network won't have a bridge then. You can do this by changing the port 80 and 443 section in your docker-compose to:
ports:
  - target: 443
    published: 443 # Outside port
    mode: host
    protocol: tcp
  - target: 80
    published: 80 # Outside port
    mode: host
    protocol: tcp
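For reference, a fuller docker-compose sketch built around those host-mode port entries might look like the following. The service name, volume paths, and admin port mapping are the usual NPM defaults and are assumptions here; adjust them to match your own file.

version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - target: 80
        published: 80    # Outside port
        protocol: tcp
        mode: host       # publish directly on the host, per the suggestion above
      - target: 443
        published: 443   # Outside port
        protocol: tcp
        mode: host
      - '81:81'          # Admin UI (short syntax is fine here)
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt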
@chaptergy Thanks for the summary. As I understand it, by switching to host networking on my proxy manager container, I should be able to allowlist both the public IP of my network and the private subnet(s) of my network. I have done both steps and continue to see the same behavior. From /data/logs/proxy-host-8-access.log:
[02/Jun/2022:17:56:25 +0000] - - 403 - GET https ombi.alvani.me "/i/" [Client 50.35.120.49] [Length 111] [Gzip 1.35] [Sent-to 10.0.1.201] "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.5 Safari/605.1.15" "-"
Allowlisting 50.35.120.49 still results in a 403.
Am I missing something?
I have NPM deployed in my local subnet. When I create an access list with
allow: 192.168.0.0/24 deny: all
and attach it to a proxy host, I get a 403 from everywhere, including any machine on the local subnet.
I can confirm this doesn't work. The compose snippet from chaptergy results in an error for me. If I try to whitelist my local IP subnet, I get a 403 on every page that uses the auth.
Adding it to the Custom Nginx Configuration should work. I did this to whitelist DDNS addresses as well as allow LAN access. For example, you could go to Hosts > Edit > Advanced and add this to the Custom Nginx Configuration:
location = / {
    allow 192.168.1.1/24;
    allow 10.6.0.0/24;
    deny all;
}
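Note that location = / is an exact match, so these rules only guard requests for exactly "/". If the goal is to restrict the whole host, a sketch like the following may be closer to the intent; it assumes NPM includes the Advanced snippet at server level, and the subnets are just example values:

allow 192.168.1.0/24;  # LAN
allow 10.6.0.0/24;     # e.g. a VPN subnet
deny all;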
~I'm also having the same issue. I've even tried adding the Custom Nginx Config as stated by @threehappypenguins, but it still doesn't work to limit local access.~
I figured it out. It does work with local access only. What I needed to change was to allow the local docker IP range.
By using "allow 172.18.0.0/16" in the access list, I could limit access to my containers only if connected through my docker wireguard VPN.
Adding "allow 192.168.1.0/24" enables local network access. Adding "allow PUBLIC_IP" where PUBLIC_IP is your public IP address allows you to connect to a remote server without having to use the VPN.
By using "allow 172.18.0.0/16" in the access list, I could limit access to my containers only if connected through my docker wireguard VPN.
Seems to work for me!
By using "allow 172.18.0.0/16" in the access list, I could limit access to my containers only if connected through my docker wireguard VPN.
You are connecting via docker WireGuard in the same docker network, right? Because I have WireGuard built in, and it uses a different network, so I still needed the public IP address.
By using "allow 172.18.0.0/16" in the access list, I could limit access to my containers only if connected through my docker wireguard VPN.
You are connecting via docker WireGuard in the same docker network, right? Because I have WireGuard built-in, and it uses different network, so still needed the public IP address.
Correct. I'm using wg-easy for my Wireguard setup to manage my users/credentials. wg-easy is using the same docker network as the rest of my containers including NginxProxyManager.
Well, I have NginxProxyManager on a different virtual machine than WireGuard because I can't use AdGuard + NginxProxyManager on one machine. But NginxProxyManager and WireGuard are both on the 192.168.178.0 network.
Adding "allow PUBLIC_IP" where PUBLIC_IP is your public IP address allows you to connect to a remote server without having to use the VPN.
The only problem is: PUBLIC_IP can change, and then you need to update the configuration, right?
I have a similar problem to @xKliment. I run NPM in a different LXC on the same machine as WireGuard. I have this access list:
My devices get a 10.7.0.3/24 IP when connected through WireGuard, but I am not able to access the site through NPM, only from my non-VPN internal network...
Did you find a solution for this?
Never mind, I just found that I had set the wrong value in WG_ALLOWED_IPS for WireGuard.
I've found that using one NPM instance in the cloud and one NPM instance in my LAN works best, and setting my own DNS records in the router, public DNS, or hosts files helps with speed, since LAN traffic doesn't go out of my network interface and back in. If you use Cloudflare for DNS, Let's Encrypt with the DNS API challenge works even with LAN IPs.
The machine I'm running NPM on sits on both my local network and the internet; it basically acts as my home router. My workaround for this issue is to use firewalld to block everything coming in from my WAN interface except things like ports 80 and 443, and then run this docker container with network_mode: host.
The only issue I hit is that NPM internally uses port 3000 for its API, which another container of mine was using. Once I switched ports for that container, everything seems to be functioning correctly now.
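For reference, a minimal compose sketch of that host-networking setup might look like this; the service name and volume paths are the usual NPM defaults and are assumptions here:

services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    network_mode: host   # container shares the host's network stack, so nginx sees real client IPs
    # Note: with host networking the ports: section is ignored, and NPM's
    # internal API will occupy host port 3000, as mentioned above.
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt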
Workaround for OpenMediaVault docker:
---
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    network_mode: bridge
    ports:
      # These ports are in format <host-port>:<container-port>
      - '192.168.0.141:80:80' # Public HTTP Port
      - '192.168.0.141:443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
By binding the port to the host address, I get the correct address of my systems in the local network.
Conveniently, you can use the global environment (also for other containers):
Global Environment:
BASE_HOST_IP=192.168.0.141
Placeholder in the YAML:
- '${BASE_HOST_IP}:80:80' # Public HTTP Port
- '${BASE_HOST_IP}:443:443' # Public HTTPS Port
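For anyone not on OpenMediaVault: plain docker-compose reads an .env file placed next to docker-compose.yml for the same ${...} substitution, so a sketch of the equivalent setup would be:

# .env (lives next to docker-compose.yml)
BASE_HOST_IP=192.168.0.141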
Hosts > Edit > Advanced and in Custom Nginx Configuration:
location = / { allow 192.168.1.1/24; allow 10.6.0.0/24; deny all; }
This worked for me (I have a hairpin NAT). I didn't have to do any extra finagling. I removed the second allow line, and it allowed local access and prevented public access. I wouldn't recommend using the public IP, since that can change.
So I am using Cloudflare, and the origin IP is changing all the time. If I block internet access, then the IP cannot be resolved: the domain name is kept by the local nginx instance, so the calling service has to go out to the internet, to Cloudflare, and then come back to my nginx to resolve the IP. Blocked internet access means: cannot resolve.
What I am saying is: even a local call to my domain my.domain.com (domain.com is on Cloudflare, "my" exists only in the LAN) requires internet access to resolve my.domain.com.
@homonto set up a local DNS (e.g. Pi-hole is convenient) with your local server IP. In my case my router is the DHCP server, but I propagate the additional DNS IP to all systems in my network.
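For example, such an override is just a name-to-LAN-IP record; a sketch with example values from this thread (in Pi-hole it can be added under Local DNS / DNS Records):

192.168.1.6   my.domain.com   # LAN IP of the proxy host -> internal hostname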
I have Pi-hole, two of them in fact: one on each VLAN, 192.168.1.23 and 192.168.100.31, so maybe you can help me:
1. nginx is on 192.168.1.6
2. the domain speed.mydomain.com is on 192.168.1.23
3. in nginx I set:
But with these settings it is not working:
2023/11/08 07:32:06 [error] 1328750#1328750: *189640 access forbidden by rule, client: 172.70.91.5, server: speed.domain.com, request: "GET /favicon.ico HTTP/2.0", host: "speed.domain.com", referrer: "https://speed.domain.com/"
Evidently the request comes from Cloudflare rather than from the local DNS, but why?
@homonto a ping won't be logged by nginx. You see a call to favicon.ico; that's probably your browser.
You must ensure in your testing that your browser uses the DNS of your network. Search for DNS in the settings. Some browsers primarily use an external DNS for security reasons; only when the external DNS has no match do they check the internal ones, but because Cloudflare knows an external address for your domain, that never happens.
Or call the site with curl or wget.
All my clients are given my two Pi-holes as DNS IPs, and the domain-to-internal-IP assignment is on the Pi-holes:
Then both Pi-holes refer to the router (OPNsense):
And then Unbound DNS provides the real DNS service. Do you think this could be the issue?
@homonto maybe you missed my previous comment - did you check your browser settings?
The browsers have no DNS settings, and I checked in three: Chrome, Firefox, and Safari, and on my mobile phone as well. Plus, my firewall redirects everything calling port 53 to Unbound. I am 100% sure the problem is on my side; I am just NOT sure how to approach it. ;-)
Describe the bug
My setup looks like the following:
Raspberry Pi 4 running Raspbian OS 64-bit on a static IP (192.168.0.10).
docker & docker-compose & portainer are each properly installed.
The Raspberry Pi runs the following docker images with no port conflicts: Nextcloud, ddclient, jc21/nginx-proxy-manager, Pi-hole, and finally this web service.
On the router (Nighthawk R7500), I set the IP address of the Pi-hole (in this case the Raspberry Pi) as the DNS server.
The streaming website is the subdomain "movies.example.com", where the domain "example.com" and the subdomain are secured with self-signed SSL from the Nginx Proxy Manager.
All the mentioned services are dockerized and nothing is installed on "bare metal".
The Nginx Proxy Manager was installed following this tutorial.
The following (Screenshot 2) shows the Nginx Proxy Manager access list IP Address Whitelist/Blacklist view.
Screenshot 3 shows both the SSL settings view (3.3) and the details section of the chosen host assigned with Authorization for Streaming.