Open · oldium opened this issue 2 years ago
Hi @oldium, nice work finding and debugging the issue. Have you been able to get your PR tested?
@klaases I made a few changes to the pull request and re-tested it thoroughly on Windows with Acrylic DNS Proxy (in forwarding proxy mode, no caching). I set the DNS_NODATA_DELAY_MS="20" environment variable on both ingress DNS minikube pods to delay the NoData responses, in order to allow querying two DNS servers in two independent minikube profiles; see the logs from the Acrylic proxy below. Request 00002 (type A, IPv4) was answered almost immediately by one server, while the second server delayed its response by 20 ms. Request 00003 (type AAAA, IPv6) had no answers, so both servers delayed their responses by 20 ms.
2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002 received from client 127.0.0.1:56863 [OC=0;RD=1;QDC=1;Q[1]=frontend.home.arpa;T[1]=A;Z=0002010000010000000000000866726F6E74656E6404686F6D6504617270610000010001].
2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002>50074 forwarded to server 1.
2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002>50074 forwarded to server 2.
2022-09-11 20:22:12.049 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 2 in 10.0 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=1;NSC=0;ARC=0;Q[1]=frontend.home.arpa;T[1]=A;A[1]=frontend.home.arpa>171.19.19.172;Z=C39A818000010001000000000866726F6E74656E6404686F6D65046172706100000100010866726F6E74656E6404686F6D65046172706100000100010000012C0004AC1313AB].
2022-09-11 20:22:12.049 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074>00002 received from server 2 sent to client 127.0.0.1:56863.
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003 received from client 127.0.0.1:56866 [OC=0;RD=1;QDC=1;Q[1]=frontend.home.arpa;T[1]=AAAA;Z=0003010000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003>39906 forwarded to server 1.
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003>39906 forwarded to server 2.
2022-09-11 20:22:12.058 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 1 in 26.4 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=C39A818000010000000000000866726F6E74656E6404686F6D6504617270610000010001].
2022-09-11 20:22:12.058 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 1 discarded.
2022-09-11 20:22:12.073 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 2 in 25.4 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=9BE2818000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906>00003 received from server 2 sent to client 127.0.0.1:56866.
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 1 in 25.8 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=9BE2818000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 1 discarded.
I also tested this without setting DNS_NODATA_DELAY_MS (the default configuration), and it works as expected too: responses are sent immediately. This should be the default use case for the majority of users.
Maybe it would be good to document this somewhere: I added a new environment variable, DNS_NODATA_DELAY_MS, which delays NoData responses (responses without any IP address). This allows a DNS proxy (Acrylic on Windows, or dnsmasq on Linux) to query two servers, where the fastest response is accepted and forwarded to the requesting client while all later responses are discarded.
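For illustration only, here is a minimal sketch of the delayed-NoData idea; the function and parameter names are hypothetical, not the actual minikube-ingress-dns code:

```typescript
// Delay only the empty (NoData) answers, so a racing DNS proxy can accept
// a real answer from the other server first.
const NODATA_DELAY_MS = Number(process.env.DNS_NODATA_DELAY_MS ?? "0");

// Hypothetical helper: `send` transmits the DNS reply to the client.
function respond(answers: string[], send: (a: string[]) => void): void {
  if (answers.length > 0 || NODATA_DELAY_MS <= 0) {
    send(answers); // real answer, or delay disabled: reply immediately
    return;
  }
  // NoData (NOERROR with zero answers): hold the reply back briefly
  setTimeout(() => send(answers), NODATA_DELAY_MS);
}
```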
Also, it would be good to make DNS_NODATA_DELAY_MS configurable somehow, since it is not possible to add or modify environment variables of running pods.
Updated the minikube-ingress-dns pull request to use a ConfigMap and added some description. I hope this can be used somehow in minikube as well. If not, please suggest a better solution. Thanks.
Added a file watcher, so that a new value from the ConfigMap is applied automatically (this takes a few seconds; the ConfigMap change is not visible to the pod immediately). I think this is a generic solution and can be applied elsewhere too.
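As a rough sketch of that watcher pattern (the mount path and key name below are assumptions for illustration, not the actual addon's):

```typescript
import { watch, readFileSync } from "fs";

// Assumed volume mount point and ConfigMap key -- illustrative names only.
const CONFIG_DIR = "/etc/ingress-dns";
const KEY_FILE = CONFIG_DIR + "/dnsNodataDelayMs";

function readDelay(): number {
  try {
    return Number(readFileSync(KEY_FILE, "utf8").trim()) || 0;
  } catch {
    return 0; // key missing: fall back to no delay
  }
}

let delayMs = readDelay();

// Kubernetes updates a mounted ConfigMap by atomically swapping a symlinked
// directory, so watch the directory rather than the file itself.
watch(CONFIG_DIR, () => {
  delayMs = readDelay();
  console.log(`DNS_NODATA_DELAY_MS is now ${delayMs} ms`);
});
```

The few-seconds lag mentioned above comes from the kubelet only periodically syncing mounted ConfigMaps into the pod.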
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The minikube-ingress-dns fix is still ready...
Since my mention in the solution PR did not show up, I want to record the current state here:
The PR has been in its current code state since April last year.
The owner's original statement (before some improvements) was that it looked okay but needed some testing.
I've since tested it to the best of my ability, with suggestions from @oldium.
That was concluded at the beginning of February, and I documented my testing steps so people can simply follow them if they want to test as well.
Other than that original communication, there has been no response from the owner.
ingress-dns has also not been updated in three years. When I looked at the dependencies, the first thing I found was the Node 12 base container, which has been EOL since 2022. The PR from @oldium does include dependency updates. I realize that ingress-dns isn't really THAT security relevant, since it's not meant to be used in production, but it still feels kind of wrong.
Perhaps another maintainer has time to look at this? Or could at least freeze this issue, so it doesn't keep getting marked stale with all the connected comment bloat.
What Happened?
Tested on Windows 11. I followed the steps from https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/ to enable ingress-dns and forward test-domain queries to the minikube DNS.
Then I tried to `ping` the configured host, but the ping failed. `nslookup` succeeded, though. I checked this with Wireshark, and the problem is that my Windows machine runs primarily on IPv6, so it sent the `AAAA` query first but received an `A`-type response. That is invalid, and it is the reason the DNS resolution failed (see the sketch below).
Kindly please, @sharifelgamal, comment on and merge Pull Request https://github.com/sharifelgamal/minikube-ingress-dns/pull/4 and update the image version to fix this issue.
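To make the invalid-response point concrete, here is a small hypothetical illustration (not the addon's code) of the expected semantics: a resolver must answer the QTYPE that was asked, and a name that exists without a record of that type should get NOERROR with an empty answer section (NoData), never a record of a different type:

```typescript
type RRType = "A" | "AAAA";

interface DnsAnswer {
  rcode: "NOERROR" | "NXDOMAIN";
  records: string[];
}

// Answer with records of the requested type only; an existing name with no
// record of that type yields NOERROR and an empty answer section (NoData).
function answerQuery(
  qtype: RRType,
  zone: Partial<Record<RRType, string[]>>
): DnsAnswer {
  return { rcode: "NOERROR", records: zone[qtype] ?? [] };
}

// A Windows client asking AAAA for an IPv4-only name must get an empty
// NOERROR answer, not the A record -- returning the A record is exactly
// what broke the ping here:
console.log(answerQuery("AAAA", { A: ["171.19.19.172"] }));
// => { rcode: 'NOERROR', records: [] }
```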
Attach the log file
log.txt
Operating System
Windows
Driver
Hyper-V