Closed sylv-io closed 2 years ago
Thanks for the report! We'll look into this :thinking:
This currently isn't supported, because what our daemon will attempt to do is to configure systemd-resolved to use 127.0.0.53 as a resolver - this is rightfully rejected.
In fact, as long as our daemon is using systemd-resolved directly (and it should be), all you need to use your custom resolvers for your example.net addresses is to punch holes in the firewall for these IPs so that they don't get blocked.
This currently isn't supported, because what our daemon will attempt to do is to configure systemd-resolved to use 127.0.0.53 as a resolver - this is rightfully rejected.
I suspected that. Makes sense :+1:.
In fact, as long as our daemon is using systemd-resolved directly (and it should be), all you need to use your custom resolvers for your example.net addresses is to punch holes in the firewall for these IPs so that they don't get blocked.
I am already able to reach these IPs by using the "Local network sharing" setting. However, the firewall blocks DNS requests made by systemd-resolved to my local DNS server (I guess that's intended). It would be great if there were an optional way to unblock DNS requests made by systemd-resolved.
I created a branch where the add_drop_dns_rule() calls are commented out and built it from source:
https://github.com/sylv-io/mullvadvpn-app/tree/hack_no-dns-drop
I'm not a Rust coder, but it works for me :smile:.
Still, it would be nice to have an optional config setting for this (maybe with a clear risk warning?).
There is a way, it just doesn't involve any config options in our software. It does, however, require knowledge of our firewall rules. Given that you want to reach a DNS server on 192.168.1.1, you'd want the following rules:
table inet customDnsServers {
chain permitDnsTraffic {
type filter hook output priority -30; policy accept;
udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
tcp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
}
}
You can save the above in a file and load it via sudo nft --file $pathToFile. The rules only need to be set once per boot. To delete the table and start anew, just run sudo nft delete table inet customDnsServers. These rules match DNS traffic to a specific IP address (the dport 53 daddr $IP part) and then set an nftables-specific conntrack mark (ct mark set 0x00000f41), which is later interpreted by our firewall rules and let through. They piggyback on our split-tunneling firewall rules to allow traffic to flow outside the tunnel. You can add more rules for more resolver IPs by copying the rules and changing the destination IP addresses:
udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
tcp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
udp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
tcp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
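Putting the pieces together, a complete table file covering both resolvers would look roughly like this (a sketch combining the table above with the extra rules; adjust the daddr values to your own resolver IPs):

```
table inet customDnsServers {
	chain permitDnsTraffic {
		type filter hook output priority -30; policy accept;
		# conntrack-mark DNS traffic to each resolver so the
		# Mullvad firewall rules let it through
		udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
		tcp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
		udp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
		tcp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
	}
}
```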
If your resolvers don't reside on the local network (i.e. you don't have specific routes for them) and you'd prefer they were routed outside the tunnel, you'd need an extra table to add the conntrack mark and a meta mark (0x6d6f6c65) on the traffic you want, so that it isn't routed through the tunnel device.
This way, you don't have to change our code, and you make sure that you only leak the DNS traffic you want to leak, provided systemd-resolved only queries the hosts you need.
If your resolvers don't reside on the local network (i.e. you don't have specific routes for them) and you'd prefer they were routed outside the tunnel, you'd need an extra table to add the conntrack mark and a meta mark (0x6d6f6c65) on the traffic you want, so that it isn't routed through the tunnel device. This way, you don't have to change our code, and you make sure that you only leak the DNS traffic you want to leak, provided systemd-resolved only queries the hosts you need.
Could you elaborate a bit on this? I'm trying to exclude incoming traffic for sshd and outgoing DNS traffic, but it is still being blocked by the firewall. If I just delete the Mullvad rules, everything is fine. Marking with 0x6d6f6c65 doesn't seem to have any effect; 0x00000f41, however, seems to make the connection time out instead of being reset immediately. This is my current config:
table inet excludeTraffic {
chain allowIncoming {
type filter hook input priority -1000; policy accept;
tcp dport 1000 ct mark set 0x00000f41 accept;
tcp dport 1337 ct mark set 0x00000f41 accept;
}
chain allowOutgoing {
type filter hook output priority -30; policy accept;
udp dport 53 ct mark set 0x00000f41 accept;
tcp dport 53 ct mark set 0x00000f41 accept;
}
}
One problem I see is priority -1000. The priority has to be within the range -200 to 0 to work properly.
You should also not need the accept action here. Just setting the mark is what you want to do.
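Applied to the table above, those two changes (a priority within the stated range, no accept verdict) would give something like the following sketch; the value -100 is just an arbitrary pick inside the -200 to 0 range:

```
table inet excludeTraffic {
	chain allowIncoming {
		# priority must be within -200 to 0
		type filter hook input priority -100; policy accept;
		tcp dport 1000 ct mark set 0x00000f41;
		tcp dport 1337 ct mark set 0x00000f41;
	}
	chain allowOutgoing {
		type filter hook output priority -30; policy accept;
		udp dport 53 ct mark set 0x00000f41;
		tcp dport 53 ct mark set 0x00000f41;
	}
}
```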
Thank you, I will try those changes this evening. I got that priority from this comment: https://github.com/mullvad/mullvadvpn-app/issues/2097#issuecomment-799485645. I guess I also don't understand the difference between the two mark values: 0xf41 allows local traffic while 0x6d6f6c65 routes outside the VPN, is that correct?
0x6d6f6c65 is a meta mark. This is the mark that the policy-based routing uses to route excluded traffic outside the tunnel. For outgoing traffic, you want to set this meta mark if you want it to be routed outside the tunnel. BUT this alone does not allow the traffic to pass the firewall.
0x00000f41 is a ct (connection tracker) mark. It's used in the firewall only and not in the routing table. This mark is what our firewall rules use to allow traffic to pass our blocking rules. Apply this to all traffic, in and out, that you want to be excluded from our firewall rules.
We will be posting a guide on more custom firewalling on Linux, but it's not ready yet.
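In other words, a single outgoing rule that should be both rerouted and permitted carries both marks. A minimal sketch for DNS to the 192.168.1.1 resolver used earlier in this thread:

```
# ct mark 0x00000f41: lets the traffic pass the Mullvad firewall rules
# meta mark 0x6d6f6c65: makes policy-based routing send it outside the tunnel
udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41 meta mark set 0x6d6f6c65;
```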
I suppose I will have to wait for the guide to run Mullvad on my home server, as
table inet excludeTraffic {
chain allowIncoming {
type filter hook input priority -30;
tcp dport 1000 ct mark set 0x00000f41
tcp dport 1337 ct mark set 0x00000f41
}
chain allowOutgoing {
type filter hook output priority -30;
udp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
tcp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
}
still blocks all outbound port 53 traffic as well as not allowing incoming connections on 1000 or 1337 (outside the LAN).
There are a couple of issues here. Firstly, you'll want to use type route for the outgoing chain, not type filter. Otherwise, packets will not be rerouted. Secondly, you're also going to exclude traffic to 10.64.0.1 with this. That's typically where DNS requests will go in the tunnel when you're connected to one of the WireGuard relays. Maybe you want this. If so, ignore that part below.
As for the incoming chain not working, I think it could be due to RP filtering. It might help to add meta mark set 0x6d6f6c65 to those rules as well. I can't recall off the top of my head whether you need to use the prerouting hook rather than input as well.
table inet excludeTraffic {
chain allowIncoming {
type filter hook input priority -30;
tcp dport 1000 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
tcp dport 1337 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
chain allowOutgoing {
type route hook output priority -30;
ip daddr != 10.64.0.1 udp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
ip6 daddr != fc00:bbbb:bbbb:bb01::1 tcp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
}
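To try this out, one would save the table to a file and load and inspect it roughly as follows (a sketch; $pathToFile stands for your file as earlier in the thread, and the exact ip rule output depends on the app's routing setup):

```
sudo nft --file $pathToFile                 # load the table (needed once per boot)
sudo nft list table inet excludeTraffic     # verify the rules are in place
ip rule show                                # inspect policy routing; excluded traffic
                                            # is selected via the 0x6d6f6c65 fwmark
sudo nft delete table inet excludeTraffic   # remove the table to start over
```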
The way I interpret this, a working solution has been proposed and then the discussion died out. I'm closing this since I think you can solve it with the custom firewall rules above. We also have a guide on a similar topic, if you want more info: https://mullvad.net/en/help/split-tunneling-with-linux-advanced/
Issue report
Operating system: Fedora 33 (Linux 5.10.14-200.fc33.x86_64 #1 SMP Sun Feb 7 19:59:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux)
App version: 2021.1
Issue description
With regard to my question and the subsequent answer in #473, I was not able to use my local DNS resolver systemd-resolved, even if the stub listener 127.0.0.53 is included as a custom DNS server. Connection logs:
This prevents the DNS resolution of hosts in my local network, and therefore local services are not available anymore (e.g. Nextcloud). However, if I use WireGuard directly, I am able to use systemd-resolved for DNS resolution without a problem.
resolvectl status example output after using wg-quick up mullvad:
Regarding the example above, systemd-resolved only uses my local DNS server to resolve/look up the domains `.my.example.net` and `.lan.my.example.net`. For all other requests it still uses the DNS server provided by Mullvad WireGuard.
Motivation
I would love to see the same behavior in the Mullvad VPN app, so that I could finally use the app myself :heart:. Thanks for the great work, and congratulations on the release of the new version. :tada: