Open RobHofmann opened 2 years ago
To troubleshoot, I would see if it works with the kill switch disabled. If that doesn't fix it, try fiddling with the SUBNETS variable. My first guess there would be to add 192.168.0.0/19.
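For reference, a sketch of how that suggestion could look on the command line; the image name, container name, and the extra flags are assumptions on my part, only the SUBNETS value comes from the suggestion above:

```shell
# Hedged sketch -- image/container names and the capability flags are
# assumptions; only SUBNETS=192.168.0.0/19 is from the suggestion above.
docker run -d --name vpn \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  -e SUBNETS=192.168.0.0/19 \
  your/vpn-image
```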
Adding -e KILL_SWITCH=off allows me to route through the container correctly. I don't fully understand what is happening, but is there a way to keep the KILL_SWITCH enabled and make it work?
Additional information: WORKING WITHOUT KILLSWITCH:
Chain INPUT (policy ACCEPT 1785 packets, 1220K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 115 packets, 6900 bytes)
pkts bytes target prot opt in out source destination
1363 185K ACCEPT all -- eth0 tun0 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
1687 1121K ACCEPT all -- tun0 eth0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 1545 packets, 276K bytes)
pkts bytes target prot opt in out source destination
NOT WORKING WITH KILLSWITCH:
Chain INPUT (policy DROP 1 packets, 104 bytes)
pkts bytes target prot opt in out source destination
10 6118 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
2 123 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 456 ACCEPT all -- * * 192.168.0.0/21 0.0.0.0/0
0 0 ACCEPT all -- tun0 * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy DROP 8 packets, 480 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- eth0 tun0 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT all -- tun0 eth0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3 237 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * * 0.0.0.0/0 192.168.0.0/21
12 2325 ACCEPT udp -- * eth0 0.0.0.0/0 81.17.29.2 udp dpt:1194
0 0 ACCEPT udp -- * eth0 0.0.0.0/0 31.7.57.242 udp dpt:1194
0 0 ACCEPT all -- * tun0 0.0.0.0/0 0.0.0.0/0
Hmm... Those chains look fine to me. I would expect this line:
2 456 ACCEPT all -- * * 192.168.0.0/21 0.0.0.0/0
would be enough to allow the traffic in. Can you also add the output of ip r for both setups?
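One quick sanity check on that rule: whether it matches at all depends on the clients' source addresses actually falling inside 192.168.0.0/21. A small helper of my own (not part of the container) to test CIDR membership, which also shows why widening the subnet to /19 would matter if any clients sit in, say, 192.168.8.x:

```shell
#!/usr/bin/env bash
# Standalone helper (my own sketch, not from the container): returns
# success if an IPv4 address falls inside the given CIDR block.
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local IFS=.
  local -a o=($ip) n=($net)            # split both addresses on dots
  local ipn=$(( (o[0]<<24) | (o[1]<<16) | (o[2]<<8) | o[3] ))
  local netn=$(( (n[0]<<24) | (n[1]<<16) | (n[2]<<8) | n[3] ))
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( (ipn & mask) == (netn & mask) ))
}

in_cidr 192.168.1.50 192.168.0.0/21 && echo "192.168.1.50 is inside 192.168.0.0/21"
in_cidr 192.168.8.5 192.168.0.0/21 || echo "192.168.8.5 is OUTSIDE 192.168.0.0/21"
in_cidr 192.168.8.5 192.168.0.0/19 && echo "...but it is inside 192.168.0.0/19"
```

So a client at 192.168.8.5 would be dropped by the /21 whitelist but allowed by the suggested /19.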
Had a similar issue when trying this out with another container. Got a fix/workaround here: https://github.com/qdm12/gluetun/discussions/738
Hi! First of all: thanks for this container. It seems to be really amazing.
I've been fiddling around with this container and it seems to work fine whenever you use --net=container:vpn. However, in my real setup I use multiple Docker hosts which, for various reasons, have containers that use a macvlan network setup. So in short, I have something like this: ... and then after that I want to spin up my containers like this: ...
This should be routed through the VPN container. Somehow the VPN container doesn't seem to accept traffic from this IP range (assumption). If I, at this point, create another container using the --net=container:vpn flag, this seems to work fine (traffic from the last container is being routed through the VPN container). Is there anything needed to whitelist any additional incoming IPs?
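Since the exact commands got lost from the post, here is a hedged sketch of the kind of macvlan setup being described; the network name, parent interface, gateway, and addresses are assumptions, chosen only to match the 192.168.0.0/21 range visible in the iptables output above:

```shell
# Hedged sketch of the described setup -- names, parent interface, and
# addresses are assumptions, not the poster's actual commands.

# A macvlan network attached to the LAN segment:
docker network create -d macvlan \
  --subnet=192.168.0.0/21 --gateway=192.168.0.1 \
  -o parent=eth0 lan

# The VPN container gets its own LAN address:
docker run -d --name vpn --net=lan --ip=192.168.0.10 \
  --cap-add NET_ADMIN --device /dev/net/tun your/vpn-image

# Other containers sit on the same macvlan network; instead of
# --net=container:vpn, their default route must be pointed at the VPN
# container's address (192.168.0.10) for traffic to flow through it.
docker run -d --name app --net=lan --ip=192.168.0.11 your/app-image
```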