Closed — nikos912000 closed this issue 3 years ago
I think I found what is wrong.
FakeDNS forwards any query that does not match the specified rules to the IP address set through the `dns` argument, which defaults to 8.8.8.8.
This is fine for external queries, or when the cluster's DNS (kube-dns/CoreDNS) actually uses that address, but it breaks resolution of cluster-internal records.
Replacing this line with:

```python
if query.domain.decode().endswith('.cluster.local.'):
    addr = ('kube-dns.kube-system.svc.cluster.local', 53)
else:
    addr = ('%s' % (args.dns), 53)
```

fixes the issue.
A proper solution would be for the FakeDNS script to receive a list of *additional* DNS IPs/ports and patterns, something like:

```python
[(".cluster.local.", "kube-dns.kube-system.svc.cluster.local", 53)]
```
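A minimal sketch of how such a rule list could drive upstream selection. The names here (`pick_upstream`, `EXTRA_RESOLVERS`, `DEFAULT_DNS`) are illustrative, not the actual FakeDNS code; `DEFAULT_DNS` stands in for the existing `dns` argument:

```python
# Sketch: choose an upstream resolver per query using suffix rules.
# Rules mirror the proposed (pattern, host, port) tuple format.

DEFAULT_DNS = ("8.8.8.8", 53)

EXTRA_RESOLVERS = [
    (".cluster.local.", "kube-dns.kube-system.svc.cluster.local", 53),
]

def pick_upstream(domain):
    """Return the (host, port) to forward a non-matching query to."""
    for pattern, host, port in EXTRA_RESOLVERS:
        if domain.endswith(pattern):
            return (host, port)
    # No pattern matched: fall back to the default upstream resolver.
    return DEFAULT_DNS
```

With this, `pick_upstream("demo.chaos-demo.svc.cluster.local.")` goes to kube-dns while `pick_upstream("google.com.")` still falls back to the default, so only unmatched external queries leave the cluster.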
These would be set in the `values.yaml` and passed to the controller and injector.
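For illustration, such a `values.yaml` entry might look like the fragment below; the key names are hypothetical, not an existing chart option:

```yaml
# Hypothetical values.yaml fragment: additional DNS resolvers to be
# passed to the controller and injector. Key names are illustrative.
dns:
  additionalResolvers:
    - pattern: ".cluster.local."
      host: "kube-dns.kube-system.svc.cluster.local"
      port: 53
```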
What do you think @Devatoria @ptnapoleon?
Thanks for identifying the fix! I do agree that would be the proper solution.
Awesome, thanks @ptnapoleon. I'll be on holidays but if no one else picks this up I'll take care of it once I'm back :)
**Describe the bug**
While testing the DNS disruptions at node level I noticed a couple of critical issues. In summary, these impact the DNS records of all of the cluster's Kubernetes Services (`*.svc.cluster.local`).

**To Reproduce**
Steps to reproduce the behavior:

1. `make minikube-start`
2. `make minikube-build`
3. `make install`
4. Do a `nslookup` of `google.com` before applying the Disruption.
5. Do a `nslookup` again.

**Expected behavior**
The disruption should only impact the provided hostname (`demo.chaos-demo.svc.cluster.local`).

**Environment:**
- local: minikube

**Additional context**
In minikube there is only one node, which means all outgoing calls to the cluster's Kubernetes Services are affected. In a multi-node setup this affects only the nodes targeted by the label selector.