mullvad / mullvadvpn-app

The Mullvad VPN client app for desktop and mobile
https://mullvad.net/
GNU General Public License v3.0

systemd-resolved not supported as local DNS server in version 2021.1 #2468

Closed: sylv-io closed this issue 2 years ago

sylv-io commented 3 years ago

Issue report

Operating system: Fedora 33 (Linux 5.10.14-200.fc33.x86_64 #1 SMP Sun Feb 7 19:59:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux)
App version: 2021.1

Issue description

With regard to my question and the subsequent answer in #473, I was not able to use my local DNS resolver, systemd-resolved, even when its stub listener 127.0.0.53 is set as the custom DNS server.

Connection logs:

mullvad-daemon[4121]: [mullvad_daemon::management_interface][DEBUG] connect_tunnel
mullvad-daemon[4121]: [mullvad_daemon][DEBUG] Target state Unsecured => Secured
mullvad-daemon[4121]: [mullvad_daemon::relays][DEBUG] Selecting among 3 relays with combined weight 1500
mullvad-daemon[4121]: [mullvad_daemon::relays][INFO] Selected relay de20-wireguard at 185.254.75.3
mullvad-daemon[4121]: [mullvad_daemon::relays][DEBUG] Relay matched on highest preference for retry attempt 0
mullvad-daemon[4121]: [talpid_core::firewall][INFO] Applying firewall policy: Connecting to 185.254.75.3:56610 over UDP with gateways 10.64.0.1,fc00:bbbb:bbbb:bb01::1, Allowing LAN
mullvad-daemon[4121]: [talpid_core::tunnel::wireguard][DEBUG] Using kernel WireGuard implementation
mullvad-daemon[4121]: [talpid_core::routing::imp::imp][DEBUG] Adding routes: {RequiredRoute { prefix: V6(Ipv6Network { addr: fc00:bbbb:bbbb:bb01::1, prefix: 128 }), node: RealNode(Node { ip: None, device: Some("wg-mullvad") }), table_id: 254 }, RequiredRoute { prefix: V4(Ipv4Network { addr: 10.64.0.1, prefix: 32 }), node: RealNode(Node { ip: None, device: Some("wg-mullvad") }), table_id: 254 }, RequiredRoute { prefix: V4(Ipv4Network { addr: 0.0.0.0, prefix: 0 }), node: RealNode(Node { ip: None, device: Some("wg-mullvad") }), table_id: 1836018789 }, RequiredRoute { prefix: V6(Ipv6Network { addr: ::, prefix: 0 }), node: RealNode(Node { ip: None, device: Some("wg-mullvad") }), table_id: 1836018789 }}
mullvad-daemon[4121]: [mullvad_daemon][DEBUG] New tunnel state: Connecting { endpoint: TunnelEndpoint { endpoint: Endpoint { address: 185.254.75.3:56610, protocol: Udp }, tunnel_type: Wireguard, proxy: None }, location: Some(GeoIpLocation { ipv4: None, ipv6: None, country: "Germany", city: Some("Dusseldorf"), latitude: 51.233334, longitude: 6.783333, mullvad_exit_ip: true, hostname: Some("de20-wireguard"), bridge_hostname: None }) }
mullvad-daemon[4121]: [talpid_core::firewall][INFO] Applying firewall policy: Connected to 185.254.75.3:56610 over UDP over "wg-mullvad" (ip: 10.66.241.16,fc00:bbbb:bbbb:bb01::3:f10f, v4 gw: 10.64.0.1, v6 gw: Some(fc00:bbbb:bbbb:bb01::1)), Allowing LAN
mullvad-daemon[4121]: [talpid_core::dns][INFO] Setting DNS servers to 127.0.0.53
mullvad-daemon[4121]: [talpid_core::dns::imp][DEBUG] Managing DNS via systemd-resolved
mullvad-daemon[4121]: [talpid_core::tunnel_state_machine::connected_state][ERROR] Error: Failed to set DNS
mullvad-daemon[4121]: Caused by: Error in systemd-resolved DNS monitor
mullvad-daemon[4121]: Caused by: Failed to perform RPC call on D-Bus
mullvad-daemon[4121]: Caused by: "Invalid DNS server address"
mullvad-daemon[4121]: [mullvad_daemon][DEBUG] New tunnel state: Disconnecting(Block)
mullvad-daemon[4121]: [mullvad_daemon::geoip][DEBUG] Error: Unable to fetch IPv6 GeoIP location
mullvad-daemon[4121]: Caused by: Hyper error
mullvad-daemon[4121]: Caused by: error trying to connect: tcp connect error: Connection refused (os error 111)
mullvad-daemon[4121]: Caused by: tcp connect error: Connection refused (os error 111)
mullvad-daemon[4121]: Caused by: Connection refused (os error 111)
mullvad-daemon[4121]: [talpid_core::tunnel_state_machine::connecting_state][DEBUG] Tunnel monitor exited with block reason: None
mullvad-daemon[4121]: [talpid_core::firewall][INFO] Applying firewall policy: Blocked. Allowing LAN. Allowing endpoint 193.138.218.71:444 over TCP
mullvad-daemon[4121]: [mullvad_daemon][DEBUG] New tunnel state: Error(ErrorState { cause: SetDnsError, block_failure: None })
mullvad-daemon[4121]: [mullvad_daemon][INFO] Blocking all network connections, reason: Failed to set system DNS server

This prevents DNS resolution for hosts in my local network, so local services (e.g. Nextcloud) are no longer reachable. However, if I use WireGuard directly, I am able to use systemd-resolved for DNS resolution without a problem.

`resolvectl status` example output after running `wg-quick up mullvad`:

Global
       Protocols: LLMNR=resolve mDNS=resolve -DNSOverTLS DNSSEC=allow-downgrade/supported
resolv.conf mode: stub                                                                   

Link 2 (eth0)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6                                              
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=allow-downgrade/supported
Current DNS Server: 10.13.37.1                                                             
       DNS Servers: 10.13.37.1                                                             
        DNS Domain: my.example.net lan.my.example.net              

Link 21 (mullvad)
    Current Scopes: DNS                                                                    
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=allow-downgrade/supported
Current DNS Server: 193.138.218.74                                                         
       DNS Servers: 193.138.218.74                                                         
        DNS Domain: ~. 

In the example above, systemd-resolved only uses my local DNS server to resolve the domains `my.example.net` and `lan.my.example.net`. For all other requests it still uses the DNS server provided by the Mullvad WireGuard config.

Motivation

I would love to see the same behavior in the Mullvad VPN app, so that I could finally use the app myself :heart:. Thanks for the great work and congratulations on the release of the new version. :tada:

faern commented 3 years ago

Thanks for the report! We'll look into this :thinking:

pinkisemils commented 3 years ago

This currently isn't supported, because our daemon would attempt to configure systemd-resolved to use 127.0.0.53 as a resolver, which is rightfully rejected. In fact, as long as our daemon is using systemd-resolved directly (and it should be), all you need in order to use your custom resolvers for your example.net addresses is to punch holes in the firewall for those IPs so that they don't get blocked.

sylv-io commented 3 years ago

> This currently isn't supported, because our daemon would attempt to configure systemd-resolved to use 127.0.0.53 as a resolver, which is rightfully rejected.

I suspected that. Makes sense :+1:.

> In fact, as long as our daemon is using systemd-resolved directly (and it should be), all you need in order to use your custom resolvers for your example.net addresses is to punch holes in the firewall for those IPs so that they don't get blocked.

I am already able to reach these IPs by enabling the "Local network sharing" setting. However, the firewall blocks the DNS requests that systemd-resolved sends to my local DNS server (intended, I guess). It would be great if there were an optional way to unblock DNS requests made by systemd-resolved.

sylv-io commented 3 years ago

I created a branch where the add_drop_dns_rule() calls are commented out and built it from source: https://github.com/sylv-io/mullvadvpn-app/tree/hack_no-dns-drop I'm not a Rust coder, but it works for me :smile:.

Still, it would be nice to have an optional config setting for this (maybe with a clear risk warning?).

pinkisemils commented 3 years ago

There is a way; it just doesn't involve any config options in our software. It does, however, require knowledge of our firewall rules. Given that you want to reach a DNS server on 192.168.1.1, you'd want the following rules:

table inet customDnsServers {
    chain permitDnsTraffic {
        type filter hook output priority -30; policy accept;
        udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
        tcp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
    }
}

You can save the above in a file and load it via sudo nft --file $pathToFile. The rules only need to be set once per boot. To delete the table and start over, just run sudo nft delete table inet customDnsServers. These rules match DNS traffic to a specific IP address (the dport 53 ... daddr $IP part) and then set an nftables conntrack mark (ct mark set 0x00000f41), which our firewall rules later recognize and let through. The rules piggyback on our split-tunneling firewall rules to allow the traffic to flow outside the tunnel. You can add more rules for more resolver IPs by copying them and changing the destination IP addresses:

        udp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
        tcp dport 53 ip daddr 192.168.1.1 ct mark set 0x00000f41;
        udp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
        tcp dport 53 ip daddr 192.168.1.2 ct mark set 0x00000f41;
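
For reference, the load/inspect/delete cycle described above might look like this on the command line (the file path here is just a hypothetical example):

sudo nft --file /etc/nftables.d/customDnsServers.nft   # load the rules (needed once per boot)
sudo nft list table inet customDnsServers              # inspect the loaded table
sudo nft delete table inet customDnsServers            # remove the table to start over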

If your resolvers don't reside on the local network (i.e. you don't have specific routes for them) and you'd prefer that they were routed outside the tunnel, you'd need an extra table that adds both the conntrack mark and a meta mark (0x6d6f6c65) to the traffic in question, so that it isn't routed through the tunnel device. This way, you don't have to change our code, and you make sure that you only leak the DNS traffic you want to leak, provided systemd-resolved only queries the hosts you need.
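
A minimal sketch of what such an extra table could look like, assuming a hypothetical external resolver at 198.51.100.53 (a type route output hook is needed so that setting the meta mark actually reroutes the packets):

# Sketch only: reroute and allow DNS traffic to one external resolver
table inet externalDnsServers {
    chain rerouteDnsTraffic {
        type route hook output priority -30; policy accept;
        udp dport 53 ip daddr 198.51.100.53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65;
        tcp dport 53 ip daddr 198.51.100.53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65;
    }
}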

Flat commented 3 years ago

> If your resolvers don't reside on the local network (i.e. you don't have specific routes for them) and you'd prefer that they were routed outside the tunnel, you'd need an extra table that adds both the conntrack mark and a meta mark (0x6d6f6c65) to the traffic in question, so that it isn't routed through the tunnel device. This way, you don't have to change our code, and you make sure that you only leak the DNS traffic you want to leak, provided systemd-resolved only queries the hosts you need.

Could you elaborate a bit on this? I'm trying to exclude incoming traffic for sshd and outgoing traffic for DNS; however, it is still being blocked by the firewall. If I just delete the Mullvad rules, everything is fine. Marking with 0x6d6f6c65 doesn't seem to have any effect, while 0x00000f41 seems to make the connection time out instead of being reset immediately. This is my current config:

table inet excludeTraffic {
  chain allowIncoming {
    type filter hook input priority -1000; policy accept;
    tcp dport 1000 ct mark set 0x00000f41 accept;
    tcp dport 1337 ct mark set 0x00000f41 accept;
  }
  chain allowOutgoing {
    type filter hook output priority -30; policy accept;
    udp dport 53 ct mark set 0x00000f41 accept;
    tcp dport 53 ct mark set 0x00000f41 accept;
  }
}

faern commented 3 years ago

One problem I see is priority -1000. The priority has to be within the range -200 to 0 to work properly.

You should also not need the accept action here. Just setting the mark is what you want to do.

Flat commented 3 years ago

Thank you, I will try those changes this evening. I got that priority from this comment: https://github.com/mullvad/mullvadvpn-app/issues/2097#issuecomment-799485645 I guess I also don't understand the difference between the two mark values: 0xf41 allows local traffic while 0x6d6f6c65 routes it outside the VPN, is that correct?

faern commented 3 years ago

0x6d6f6c65 is a meta mark. This is the mark that the policy-based routing uses to route excluded traffic outside the tunnel. Set this meta mark on outgoing traffic that you want routed outside the tunnel. BUT this alone does not allow the traffic to pass the firewall...

0x00000f41 is a ct (connection tracking) mark. It's used only in the firewall, not in the routing tables. This mark is what our firewall rules use to allow traffic to pass our blocking rules. Apply it to all traffic, in and out, that you want excluded from our firewall rules.

We will be posting a guide on more custom firewalling on Linux, but it's not ready yet.

Flat commented 3 years ago

I suppose I will have to wait for the guide to run Mullvad on my home server, as the following

table inet excludeTraffic {
  chain allowIncoming {
    type filter hook input priority -30;
    tcp dport 1000 ct mark set 0x00000f41
    tcp dport 1337 ct mark set 0x00000f41
  }
  chain allowOutgoing {
    type filter hook output priority -30;
    udp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
    tcp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
  }
}

still blocks all outbound port 53 traffic and does not allow incoming connections on ports 1000 or 1337 (from outside the LAN).

dlon commented 2 years ago

There are a couple of issues here. Firstly, you'll want to use type route for the outgoing chain, not type filter. Otherwise, packets will not be rerouted. Secondly, you're also going to exclude traffic to 10.64.0.1 with this. That's typically where DNS requests will go in the tunnel when you're connected to one of the WireGuard relays. Maybe you want this. If so, ignore that part below.

As for the incoming chain not working, I think it could be due to RP filtering. It might help to add meta mark set 0x6d6f6c65 to those rules as well. I can't recall off the top of my head whether you need to use the prerouting hook rather than input as well.

table inet excludeTraffic {
  chain allowIncoming {
    type filter hook input priority -30;
    tcp dport 1000 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
    tcp dport 1337 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
  }
  chain allowOutgoing {
    type route hook output priority -30;
    ip daddr != 10.64.0.1 udp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
    ip6 daddr != fc00:bbbb:bbbb:bb01::1 tcp dport 53 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
  }
}

faern commented 2 years ago

The way I interpret this, a working solution has been proposed and the discussion then died out. I'm closing this since I think you can solve it with the custom firewall rules above. We also have a guide on a similar topic, if you want more info: https://mullvad.net/en/help/split-tunneling-with-linux-advanced/