qdm12 / gluetun

VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
https://hub.docker.com/r/qmcgaw/gluetun
MIT License

Bug: 1.1.1.1 shouldn't be there for Kubernetes sidecar #1523

Closed. PrivatePuffin closed this 1 year ago

PrivatePuffin commented 1 year ago

Is this urgent?

Yes

Host OS

Kubernetes, Mixed

CPU arch

None

VPN service provider

AirVPN

What are you using to run the container

Kubernetes

What is the version of Gluetun

Latest

What's the problem 🤔

With Kubernetes it's vital to be able to fully resolve internal cluster DNS. However, with both of the following set:

  DNS_KEEP_NAMESERVER: on
  DOT: off

it still adds 1.1.1.1, even though the docs state it will "keep your current server". While it technically keeps that server, users don't expect a public DNS server to be prepended as the first (primary!) entry.

That's not expected and causes issues: a lot of applications rely on only the first(!) server in resolv.conf, which is now 1.1.1.1, completely breaking internal cluster resolution on some Kubernetes clusters.

While this can be worked around manually by overriding the DNS server, that is neither a flexible nor a predictable setup.

If this is expected behavior (which is still weird), can we at least have something as simple as DNS: off?

That way Gluetun would not touch DNS settings at all. This should also be relatively easy to add, as it's just a general toggle.
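For context, a minimal sketch of the kind of sidecar spec being described (the pod/env layout is illustrative and not taken from any real chart; the provider matches this report):

  containers:
    - name: gluetun
      image: qmcgaw/gluetun
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]   # gluetun needs NET_ADMIN for the tunnel device
      env:
        - name: VPN_SERVICE_PROVIDER
          value: "airvpn"
        - name: DNS_KEEP_NAMESERVER
          value: "on"
        - name: DOT
          value: "off"

Even with the last two settings, the reported behavior is that 1.1.1.1 still ends up as the first nameserver in /etc/resolv.conf.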

Share your logs

Not really relevant in this case, I think, as the behavior has been reported before, though not in this context.

Share your configuration

No response

telnetdoogie commented 1 year ago

This appears to be a dupe of #1443 ?

PrivatePuffin commented 1 year ago

This appears to be a dupe of #1443 ?

I'm not sure, because the old server IS in fact being kept; 1.1.1.1 just gets prepended…

PrivatePuffin commented 1 year ago

I'm assuming that is intended behavior, hence this request for a complete DNS=off toggle.

sansmoraxz commented 1 year ago

Any workaround guide for the time being?

I need my pods to communicate with an internal Redis cluster and I don't want to expose it to the internet (not to mention the added latency and cost).

mariopaolo commented 1 year ago

Any workaround guide for the time being?

you can override the default DNS by specifying a value for the DNS_ADDRESS environment variable (see https://github.com/qdm12/gluetun/wiki/DNS-options), e.g. DNS_ADDRESS: 172.17.0.10

I managed to solve it like this for the time being (thanks to @truecharts support staff for the workaround)
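For anyone unsure which address to use: DNS_ADDRESS should be the ClusterIP of your cluster's DNS Service. A sketch, assuming the conventional kube-dns Service name in kube-system:

  # Look up the cluster DNS ClusterIP first:
  #   kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
  # Then point gluetun at it:
  env:
    - name: DNS_ADDRESS
      value: "172.17.0.10"   # replace with your cluster's DNS ClusterIP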

Routhinator commented 1 year ago

So I have confirmed through testing that adding:

DNS_ADDRESS: 172.17.0.10
DNS_KEEP_NAMESERVER: on
DOT: off

... this container is STILL injecting 1.1.1.1 into resolv.conf and breaking cluster.local resolution.

$ cat /etc/resolv.conf
nameserver 1.1.1.1
nameserver 172.17.0.10
search ix-sonarr.svc.cluster.local svc.cluster.local cluster.local
options ndots:2

There needs to be an option to fully switch this off, as per the bug report. Ideally this would be handled by configuring the upstream in CoreDNS to 1.1.1.1, or by setting the DHCP DNS servers to 1.1.1.1, rather than being forcibly injected by the container (a Corefile sketch of that alternative follows at the end of this comment). This feature is nice for people who don't have those preferred options, but it should not be the default behaviour.

UPDATE:

Not sure why, but deleting the chart I was using and re-adding only the two lines below worked around the 1.1.1.1 injection. However, the need to turn this off entirely is still there.

DNS_ADDRESS: 172.17.0.10
DNS_KEEP_NAMESERVER: on
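As a rough sketch of the CoreDNS-side alternative mentioned above (assuming the standard coredns ConfigMap in kube-system; values illustrative):

  # kubectl -n kube-system edit configmap coredns
  data:
    Corefile: |
      .:53 {
          errors
          kubernetes cluster.local in-addr.arpa ip6.arpa
          forward . 1.1.1.1   # non-cluster queries go upstream here
          cache 30
      }

This keeps cluster.local lookups inside the cluster while everything else goes to 1.1.1.1, without any container rewriting /etc/resolv.conf.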
qdm12 commented 1 year ago

Can you guys try image qmcgaw/gluetun:pr-1742 (see https://github.com/qdm12/gluetun/issues/137#issuecomment-1630908995) to see if this is resolved? This will get merged soon (finishing the optional DNSSEC implementation before merging).
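For anyone testing, that just means swapping the image tag in your pod or compose spec for the PR build, e.g.:

  image: qmcgaw/gluetun:pr-1742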

ksimm1 commented 1 year ago

Can you guys try image qmcgaw/gluetun:pr-1742 (see #137 (comment)) to see if this is resolved? This will get merged soon (finishing the optional DNSSEC implementation before merging).

Using qmcgaw/gluetun:pr-1742 does not resolve the issue. 1.1.1.1 is still there

qdm12 commented 1 year ago

The code (both old and new) rewrites the /etc/resolv.conf file and sets DNS_ADDRESS as the first line in the file. Note:

This feature is nice for people who don't have those preferred options, but it should not be the default behaviour.

If we leave the default DNS as it is originally in the container, this will effectively leak traffic out, since traffic to the local Docker network is allowed by default. So it's definitely not a good choice to NOT modify the DNS as default behavior.

Now say an option is added to leave /etc/resolv.conf untouched:

  1. Where will your DNS traffic go?
  2. Would you be ok for it to leak out of the VPN?
  3. Does your local docker/k8s DNS server only resolve local hostnames and block requests for outside zones?
  4. Would adding 1.1.1.1 (or another server) after the local nameserver sort of resolve this? It would introduce a delay for each DNS request, though, since the local DNS server is tried first.

PrivatePuffin commented 1 year ago

So it's definitely not a good choice to NOT modify the DNS as default behavior.

I never said to change the default; however, Kubernetes users NEED to be able to keep the Kubernetes DNS as default, as Kubernetes networking HEAVILY(!!!) relies on DNS for internal communication.

This means that adding your container as a sidecar breaks all internal Kubernetes communication by default, making it unusable without workarounds (workarounds which require the user to know their cluster's internal DNS IP, which is also not a given).

Where will your DNS traffic go?

Internal kube DNS; where users point Kubernetes to forward internet DNS queries is their own choice.

Would you be ok for it to leak out of the VPN?

More okay than a sidecar breaking half of our 800 Helm charts by breaking internal Kubernetes networking, yes. Even so, there are more options to prevent leakage, such as running a good DNS server locally.

Does your local docker/k8s DNS server only resolve local hostnames and block requests for outside zones?

K8S requires an upstream DNS server as well. How a local DNS server is configured is the user's choice.

Would adding 1.1.1.1 (or another server) after the local nameserver sort of resolve this? It would introduce a delay for each DNS request, though, since the local DNS server is tried first.

As explained above, a LOT of applications ignore the secondary DNS server. But even then it would go unused, because Kubernetes forwards internet queries upstream anyhow.


You seem to misunderstand the problem:

  1. The "Keep DNS" was documented to keep your DNS server, which is doesn't. It adds to the DNS server list, which is undocumented behavior. This should be documented, regardless. Not here, but in the docs.

  2. We require the ability to stop your container from messing with Kubernetes DNS. Our other option is to fork this project and do the work ourselves, because we now have about 28,000 users potentially affected by this bug.

PrivatePuffin commented 1 year ago

Worthwhile note: when you're worried about leaking... why are you purposefully(!) leaking our DNS to Cloudflare in plaintext?!

qdm12 commented 1 year ago

The "Keep DNS" was documented to keep your DNS server, which is doesn't.

Indeed, the documentation was wrong ("Keep the nameservers in /etc/resolv.conf untouched, but disable DNS blocking features"), so I changed two things:

  • the documentation is now:

    Keep /etc/resolv.conf untouched. ⚠️ this will likely leak DNS traffic outside the VPN through your default container DNS. This implies DOT=off and ignores DNS_ADDRESS

  • the behavior of DNS_KEEP_NAMESERVER was changed/fixed in e556871, so if it is set to on, nothing is done DNS-wise, and the following warning is logged:

    keeping the default container nameservers, this will likely leak DNS traffic outside the VPN and go through your container network DNS outside the VPN tunnel!

    I don't think there is any use case for DNS_KEEP_NAMESERVER=on otherwise (feel free to correct me!).

When you're worried about leaking... why are you purposefully(!) leaking our DNS to Cloudflare in plaintext?!

(Cloudflare is just the DNS provider; it's changeable. I use it for this example.)

  1. DNS traffic in plaintext to Cloudflare: anonymity relies on Cloudflare and ISP
  2. DNS traffic over TLS to Cloudflare: anonymity relies on Cloudflare
  3. DNS traffic in plaintext to Cloudflare through the VPN: anonymity relies on the VPN provider or (Cloudflare and VPN server ISP)
  4. DNS traffic over TLS to Cloudflare through the VPN: anonymity relies on the VPN provider or Cloudflare

In my view, 3 is still better than 2, and far better than 1; 4 is obviously the best choice privacy-wise. And by default it is 4 that gets configured, not 3, so the "purposefully(!)" is not really correct.


Anyway, the DNS_KEEP_NAMESERVER=on solution is far from ideal, and I would suggest all of you subscribe to #281. I will probably come up with a better solution (waiting for #1742 to be merged soon; I'm currently working on DNSSEC validation, which is complicated and takes time).
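For Kubernetes sidecars, with the e556871 behavior described above, leaving DNS alone collapses to a single setting (a minimal sketch; the env layout is illustrative):

  env:
    - name: DNS_KEEP_NAMESERVER
      value: "on"   # post-e556871: gluetun leaves /etc/resolv.conf untouched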

PrivatePuffin commented 1 year ago

The "Keep DNS" was documented to keep your DNS server, which is doesn't.

Indeed, the documentation was wrong Keep the nameservers in /etc/resolv.conf untouched, but disabled DNS blocking features, I changed two things:

  • the documentation is now:

    Keep /etc/resolv.conf untouched. ⚠️ this will likely leak DNS traffic outside the VPN through your default container DNS. This imples DOT=off and ignores DNS_ADDRESS

  • the behavior of DNS_KEEP_NAMESERVER was changed/fixed in e556871 so if set to on, nothing is done DNS-wise, and the following warning will be logged:

    keeping the default container nameservers, this will likely leak DNS traffic outside the VPN and go through your container network DNS outside the VPN tunnel!

    I don't think there is any use case to have DNS_KEEP_NAMESERVER=on otherwise (feel free to correct me!).

When you're worried about leaking... why are you purposefully(!) leaking our DNS to cloudflare in planetext?!

(Cloudflare is just the DNS provider, it's changeable, I use it for this example)

  1. DNS traffic in plaintext to Cloudflare: anonymity relies on Cloudflare and ISP
  2. DNS traffic over TLS to Cloudflare: anonymity relies on Cloudflare
  3. DNS traffic in plaintext to Cloudflare through the VPN: anonymity relies on the VPN provider or (Cloudflare and VPN server ISP)
  4. DNS traffic over TLS to Cloudflare through the VPN: anonymity relies on the VPN provider or Cloudflare

In my view, 3 is still better than 2, and by far better than 1. 4 being obviously the best choice privacy wise. And by default, it is 4 that gets configured, not 3, so you purposefully(!) is not really correct.

Anyway, the DNS_KEEP_NAMESERVER=on solution is far from ideal, and I would suggest to all of you to subscribe to #281 I will probably come up with a better solution (waiting for #1742 to be merged soon, working on dnssec validation currently, it's complicated/takes time)

Thanks, that should work fine :)

PrivatePuffin commented 1 year ago

Closed due to being fixed in e556871

qdm12 commented 1 year ago

Awesome 👍 If you ever think of a solution to tunnel DNS traffic whilst not breaking Kubernetes DNS (side note: it also currently breaks resolution of container names in simple bridged Docker networks), please create another issue 😉 Also, my apologies for the long delay resolving this.

David-Woodward commented 1 year ago

@qdm12 in #1443 you stated the following and then asked that we continue the conversation here.

@David-Woodward this is an interesting feature that should be implemented, most likely after #1742 gets merged. We could even add an option such as DNS_OPENVPN_PUSHED=on to use that.

Sorry for the delayed response, but that sounds good to me.

To tie this into the previous discussion regarding the 4 options for handling DNS queries: I prefer to have DNS handled by my VPN provider, as I'm paying them to keep my traffic private and that is their sole business. While I have no reason to suspect Cloudflare would do anything to compromise my privacy, I have no contract with them, and I doubt they could be held accountable for any privacy issues, since their DNS service doesn't require us to accept terms of use, an end-user agreement, etc.