newsnowlabs / docker-ingress-routing-daemon

Docker swarm daemon that modifies ingress mesh routing to expose true client IPs to service containers
MIT License
189 stars 37 forks

Is compatible with DNS Server? #11

Closed Rihan9 closed 3 years ago

Rihan9 commented 3 years ago

Hi, first of all, thank you so much for your work. I am using your script for my reverse proxy inside a docker swarm with success, but I can't get it to work when I try to use it for a DNS server as well.

I run your script with the following line inside a systemctl service:

docker-ingress-routing-daemon --install --ingress-gateway-ips 10.0.0.2 --services sh_adguardhome,sh_nginx-proxy-manager --udp-ports 53,853,80,443,67,68 --tcp-ports 53,853,80,443,67,68
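For reference, a minimal systemd unit wrapping that command might look like the following sketch (the unit name, script path, and ordering are assumptions, not from the original post):

```ini
# /etc/systemd/system/docker-ingress-routing-daemon.service (hypothetical)
[Unit]
Description=Docker ingress routing daemon
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/docker-ingress-routing-daemon --install \
  --ingress-gateway-ips 10.0.0.2 \
  --services sh_adguardhome,sh_nginx-proxy-manager \
  --udp-ports 53,853,80,443,67,68 --tcp-ports 53,853,80,443,67,68
Restart=on-failure

[Install]
WantedBy=multi-user.target
```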

sh_nginx-proxy-manager -> works well (ports 80 and 443)
sh_adguardhome -> does not work (ports 53, 853, 67 and 68)

Currently, I'm using host mode and placement constraints to run the service on the same server as the load balancer to get it working. It also works in ingress mode, but without the correct client IP: all the DNS requests come from the same IP address, making monitoring impossible (I don't want to violate user privacy, I need to know if any network devices are doing weird things). The current service compose follows:

adguardhome:
  image: adguard/adguardhome:latest
  networks:
    - raspberry_network
  ports:
    # - 53:53/udp
    - target: 53
      published: 53
      protocol: udp
      mode: host
    # - 53:53/tcp
    - target: 53
      published: 53
      protocol: tcp
      mode: host
    # - 853:853/udp
    - target: 853
      published: 853
      protocol: udp
      mode: host
    # - 853:853/tcp
    - target: 853
      published: 853
      protocol: tcp
      mode: host
    # - 67:67/udp
    - target: 67
      published: 67
      protocol: udp
      mode: host
    # - 67:67/tcp
    - target: 67
      published: 67
      protocol: tcp
      mode: host
    # - 68:68/udp
    - target: 68
      published: 68
      protocol: udp
      mode: host
    # - 68:68/tcp
    - target: 68
      published: 68
      protocol: tcp
      mode: host
  volumes:
    - /media/gfs/adguardhome_conf:/opt/adguardhome/conf
    - /media/gfs/adguardhome_work:/opt/adguardhome/work

The most common response to a DNS request is DNS_PROBE_STARTED followed by DNS_PROBE_FINISHED_BAD_CONFIG.

With Wireshark I made a communication log. I can see that the server has responded, but its source address is not that of the load balancer or the host: the address appears to be the one inside the ingress network. Is it possible that this is why it fails? [Wireshark screenshot]

struanb commented 3 years ago

Hi @Rihan9. Thanks for trying DIND. I'm afraid I'm not very familiar with docker compose syntax. Do I understand correctly that you're running the sh_adguardhome service in network=host mode? If so, I don't believe that is a case that can be supported by DIND, which is designed to modify how ingress networking operates.

Can you retest the service using either the default, or a custom, swarm overlay network? Then, with DIND, you should indeed be able to obtain the correct IP for all incoming DNS requests.

Rihan9 commented 3 years ago

Hi @struanb, sorry, I probably confused you in my first comment. As a workaround, I'm temporarily using ports published in host mode instead of the default ingress mode, but I would like to use your script + ingress mode.

The service is also connected to the "raspberry_network", which is an overlay network with little customization (attachable on, encryption on, subnet and gateway configured). Another service connected to the same network works in ingress mode + your script without problems.

struanb commented 3 years ago

Ok thanks for clarifying. I wonder if it is an issue with UDP, as it seems your TCP service is working as expected.

Is the service that you're experiencing the issue with one I could run on a test server? If so, could you provide an example 'docker service create' command I could use to reproduce?

Rihan9 commented 3 years ago

You can install it with:

docker service create --name my_dns -v /my/own/workdir:/opt/adguardhome/work -v /my/own/confdir:/opt/adguardhome/conf -p 80:80/tcp -p 3000:3000/tcp -p 53:53/udp -p 53:53/tcp -p 853:853/udp -p 853:853/tcp -p 67:67/udp -p 67:67/tcp -p 68:68/udp -p 68:68/tcp adguard/adguardhome:latest

You need to replace '/my/own/workdir' and '/my/own/confdir' with your folder to store some data.

The control panel should be accessible on http://127.0.0.1:3000/ and it should guide you to a basic setup.

If you need it, a guide to server configuration can be found here: https://github.com/AdguardTeam/AdGuardHome/wiki/Getting-Started#first-time

struanb commented 3 years ago

Apologies @Rihan9, I haven't been able to work on this, and I'm not familiar with this application, so there's quite a barrier to me reproducing this issue.

What would make it faster for me - if you still have this issue - would be if you could provide everything I need to reproduce it, including application config.

Or if this is a UDP issue perhaps you can reproduce it with a simple application like dnsmasq (which I am familiar with).

Looking forward to hearing from you, as I would like to be able to provide you with a resolution.

psitem commented 3 years ago

I've run into this same issue with DNS on Docker swarm. The response packets come back from the container's IP address instead of the ingress node's IP address, so the client ignores them as a potential cache-poisoning attempt.

Apparently this is a long-standing swarm issue.

https://github.com/moby/moby/issues/11998

psitem commented 3 years ago

Took another stab at this today.

With the docker-ingress-routing-daemon uninstalled, DNS queries to an ingress IP get SNAT'd and the replies come back from the original ingress IP. The container sees the packets as coming from the ingress IP.

With the daemon installed, the incoming SNAT doesn't happen so the container sees the packets as coming from the original source IP. Replies come from the container's IP.

Something needs to happen so that iptables knows to SNAT the response packets. I've made a couple of attempts at manually modifying the NAT table but haven't gotten anywhere; I am terrible at iptables.
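For anyone else digging into this, the masquerade rule in question lives in swarm's ingress load-balancer namespace and can be inspected there. A hedged sketch (the namespace path is the usual one on a swarm node, but may differ on your system, and this requires root):

```shell
# List the NAT POSTROUTING rules inside the ingress_sbox namespace,
# where docker swarm performs the SNAT that hides the client IP:
nsenter --net=/var/run/docker/netns/ingress_sbox \
  iptables -t nat -L POSTROUTING -n -v
```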

struanb commented 3 years ago

Thanks for trying DIND. I don't understand why, in your setup, you are making DNS queries to an ingress IP. Are you running a DNS server as a service?

In any case, I think a solution could be to run DIND with the command line options that ensure it only removes SNAT for the traffic/service/ports you need and not for DNS traffic or other traffic/services/ports you don’t need.

To do this, please see the instructions in the README. Let me know how you get on, and don't hesitate to follow up on this thread if you need further assistance.


struanb commented 3 years ago

@Rihan9 @tbyehl I've been testing dnsmasq running within a docker service, roughly as described in https://github.com/moby/moby/issues/11998#issuecomment-122345196, and I think I've found a problem.

In DIND, this rule only handles outgoing packets belonging to incoming TCP connections:

# 3. Map any connection mark on outgoing traffic to a firewall mark on the individual packets.
    nsenter -n -t $NID iptables -t mangle -A OUTPUT -p tcp -j CONNMARK --restore-mark

I'm hopeful we can extend this to handle UDP packets too, and am looking into a fix.

struanb commented 3 years ago

Testing indicates that adding the following line, immediately after the one above, seems sufficient to allow DNS to work:

nsenter -n -t $NID iptables -t mangle -A OUTPUT -p udp -j CONNMARK --restore-mark

Please test if you can and advise if this addresses your issue.

N.B. Before committing this patch, we just need to satisfy ourselves as to whether these firewall rules, which run in the container namespace, should be further modified to reflect the --tcp-ports and --udp-ports values, when these options are used; or whether it is safe to continue to allow the iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark rules to operate on all TCP and UDP packets in any case.

I suspect it is safe, because if masquerading is not disabled for a given TCP or UDP port, then:

  1. Not only is the SNAT rule not skipped, but the TOS byte on packets being routed by the load balancer to the service containers is not set (as it would otherwise be - in the ingress_sbox namespace, mangle table PREROUTING chain - to the least-significant byte of the node's ingress IP).
  2. So then: when, within the service container namespace mangle table PREROUTING chain, the incoming packet is checked for the load balancer TOS byte, none is found and no connection mark is set.
  3. And so then, on the outgoing response packets, when the connection mark is restored, none will be restored, and the custom routing tables installed by DIND in the container's namespace will not be used to route traffic back to the load balancer.
  4. And so finally, the outgoing response packets will be routed back via the traditional route.
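Putting the steps above together, the per-container rules being discussed look roughly like the following sketch (the TOS value, table number, and container name are illustrative — DIND derives the real values from each node's ingress IP):

```shell
# Hypothetical container; DIND resolves this itself for each service container
NID=$(docker inspect -f '{{.State.Pid}}' my-container)

# 1. In the container namespace, turn the load balancer's TOS byte (here 0x2,
#    for a node whose ingress IP ends in .2) into a connection mark:
nsenter -n -t $NID iptables -t mangle -A PREROUTING -m tos --tos 0x2/0xff \
  -j CONNMARK --set-xmark 0x2/0xffffffff

# 2. Restore that mark onto outgoing response packets, for TCP and (with the
#    patch proposed above) UDP:
nsenter -n -t $NID iptables -t mangle -A OUTPUT -p tcp -j CONNMARK --restore-mark
nsenter -n -t $NID iptables -t mangle -A OUTPUT -p udp -j CONNMARK --restore-mark

# 3. Route marked replies back via the load balancer that forwarded them:
nsenter -n -t $NID ip rule add from all fwmark 0x2 lookup 2
nsenter -n -t $NID ip route add default via 10.0.0.2 table 2  # node's ingress IP
```

Unmarked packets fall through to the container's normal default route, which is why traffic on ports not covered by --tcp-ports/--udp-ports is unaffected.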

struanb commented 3 years ago

P.S. Further, launching DIND with --udp-ports 10053 and running two DNS services, one on port 10053 and one on port 10054, and running tcpdump -n -i eth0 udp within the containers of each service, shows that the service containers for port 10053 see the client's real IP, while the service containers for port 10054 see only the load balancer ingress IP - as expected!
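The verification described above amounts to something like this (image name, gateway IP, and service names are placeholders, not the exact commands used):

```shell
# Route only UDP port 10053 through DIND's modified path
docker-ingress-routing-daemon --install --ingress-gateway-ips 10.0.0.2 \
  --udp-ports 10053

# Two identical DNS services on different published ports
docker service create --name dns-a -p 10053:53/udp some/dns-image
docker service create --name dns-b -p 10054:53/udp some/dns-image

# Inside a container of each service, watch where queries appear to come from;
# dns-a containers should show the real client IP, dns-b only the ingress IP:
tcpdump -n -i eth0 udp
```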

I will aim to commit this patch and issue a fresh release later today.

struanb commented 3 years ago

This patch is committed, and v3.3.0 is released.

Please upgrade to https://github.com/newsnowlabs/docker-ingress-routing-daemon/releases/tag/v3.3.0 to use DIND with DNS or other UDP-based services.
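If you installed the daemon as a standalone script, upgrading is roughly as follows (the install path and raw-file URL are assumptions — follow the README for the canonical procedure):

```shell
# Fetch the v3.3.0 script and make it executable (path hypothetical)
curl -fsSL -o /usr/local/bin/docker-ingress-routing-daemon \
  https://raw.githubusercontent.com/newsnowlabs/docker-ingress-routing-daemon/v3.3.0/docker-ingress-routing-daemon
chmod +x /usr/local/bin/docker-ingress-routing-daemon
```

Then restart the daemon (e.g. via whatever systemd unit or startup script launches it) so the new rules take effect.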

Thanks @Rihan9 @tbyehl for your patience while I found a way to get this tested.

I will now close this issue.

Rihan9 commented 3 years ago

Hi @struanb, thank you for the awesome work! In the end I was unable to give you a hand during the debugging phase; my knowledge of iptables is too poor.