Closed metacoma closed 8 months ago
It seems I have found the solution to the connectivity issue for minikube deployed using terraform resources.
After several hours of searching on Google, I came across this Slack thread with similar symptoms: https://slack-archive.rancher.com/t/10289479/hi-since-a-few-hours-ago-my-dns-in-k3s-stopped-working-nobod#538f20db-d53f-4443-8bc7-6f414988ebfe
So, after executing the following commands (skipping the first one, since exposing my API server to the internet is not an option):

ufw allow 6443/tcp                  # 1. This would expose my API server to the internet. Not going to do this.
ufw allow from 10.42.0.0/16 to any  # 2. Allow all pods to communicate with my host.
ufw allow from 10.43.0.0/16 to any  # 3. Allow all services to communicate with my host.

CoreDNS works fine.
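One quick way to confirm the fix from inside the cluster (a sketch, assuming kubectl is pointed at the affected cluster; the pod name and image tag are arbitrary choices):

```shell
# Spin up a throwaway pod and resolve an in-cluster name through CoreDNS.
# If the firewall rules took effect, the lookup should succeed instead of
# timing out the way it did before.
kubectl run dns-test --rm -i --restart=Never \
  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
```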
The unanswered question is: why does the Kubernetes cluster deployed directly with minikube have no issue with the CoreDNS pod, while the minikube cluster deployed using terraform runs into this problem?
Both k8s clusters I deployed are on the same machine:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
Anyway, feel free to close this issue.
It appears that the root cause of this issue is that I ran terraform inside a docker container.
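If that is indeed the cause, one way to sidestep it (a hypothetical sketch, not what my original setup used; the image tag and mount paths are assumptions) would be to run terraform from a container that shares the host's network namespace, so it sees the same interfaces and firewall state as a run directly on the host:

```shell
# Run terraform apply using the host network namespace rather than the
# container's own bridge network (paths and image tag are illustrative).
docker run --rm -it \
  --network host \
  -v "$PWD":/workspace \
  -w /workspace \
  hashicorp/terraform:1.7 apply
```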
What's interesting is that when deploying the cluster using minikube without terraform, everything works as expected.