jeusdi opened this issue 2 years ago (status: Open)
Hi @jeusdi, thanks for opening this issue!
Do you have the log output of your k3d cluster create command?
The cluster creation should fail if k3d fails to inject that hosts entry. Did you restart the cluster or the docker service?
@iwilltry42 I have the same problem; in my case the CoreDNS ConfigMap also does not have host.k3d.internal injected. It only contains:
172.20.0.2 k3d-c2d-dev-k8s-server-0
My cluster is running (re-created yesterday) and I restarted the laptop. What can I do to troubleshoot this? Should I recreate the cluster (creation is always successful), and how can I get the creation logs?
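For troubleshooting, a sketch of two checks (the cluster name below is this thread's example; k3d injects its records into the coredns ConfigMap, typically under the NodeHosts data key, and --trace is k3d's debug-logging flag):

# Dump the CoreDNS ConfigMap and look for a host.k3d.internal entry:
kubectl -n kube-system get configmap coredns -o yaml

# Re-create the cluster with maximum log verbosity to capture the creation logs:
k3d cluster delete c2d-dev-k8s
k3d cluster create c2d-dev-k8s --trace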
I recreated the cluster, and now the coredns host entry is there. It seems that the host entry "disappears" upon reboot of the host machine; can it be re-injected upon reboot as well?
Creating the k3d cluster k3d-c2d-dev-k8s ...
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-c2d-dev-k8s'
INFO[0000] Created image volume k3d-c2d-dev-k8s-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-c2d-dev-k8s-tools'
INFO[0001] Creating node 'k3d-c2d-dev-k8s-server-0'
INFO[0001] Creating LoadBalancer 'k3d-c2d-dev-k8s-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'c2d-dev-k8s'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-c2d-dev-k8s-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting Node 'k3d-c2d-dev-k8s-serverlb'
INFO[0011] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0014] Cluster 'c2d-dev-k8s' created successfully!
INFO[0014] You can now use it like this:
kubectl cluster-info
This is definitely an issue: upon restart of the machine, the host.k3d.internal entry is gone.
The workaround is to pin the subnet at cluster creation, e.g. k3d cluster create ... --subnet '172.18.0.0/16',
and then use 172.18.0.1 directly instead of host.k3d.internal.
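A sketch of that workaround (the cluster name is a placeholder; pinning the subnet makes the Docker network gateway address deterministic across restarts):

# Pin the Docker network subnet so the gateway is always 172.18.0.1:
k3d cluster create mycluster --subnet '172.18.0.0/16'

# Workloads can then resolve the host via the gateway IP (172.18.0.1)
# instead of relying on the host.k3d.internal entry.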
...
AFAICT, reproducing this is as easy as k3d cluster stop $cluster_name && k3d cluster start $cluster_name. What's (arguably) worse is that this bug also affects any defined hostAliases!
In other words, after stopping/starting the k3d cluster, none of the host aliases defined in the original k3d config get injected. They are only injected when the cluster is first started during k3d cluster create. See the repro sketch below.
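A repro-and-check sketch ($cluster_name as above; the assumption here is that the injected records live in the NodeHosts key of the coredns ConfigMap):

# Stop and start the cluster:
k3d cluster stop $cluster_name && k3d cluster start $cluster_name

# Check whether host.k3d.internal and any hostAliases survived:
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'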
Edit: Just found https://github.com/k3d-io/k3d/issues/1112 and https://github.com/k3d-io/k3d/issues/1221
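For context, the host aliases in question are declared in the k3d config file roughly like this (a sketch; the apiVersion, cluster name, IP, and hostname are placeholders/assumptions):

apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: mycluster          # placeholder cluster name
hostAliases:               # entries k3d should inject into CoreDNS
  - ip: 192.168.1.100      # placeholder IP
    hostnames:
      - api.my-host.local  # placeholder hostname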
Host entries in the CoreDNS config are now managed via the coredns-custom ConfigMap as per https://github.com/k3d-io/k3d/pull/1453, so they survive restarts of the cluster and of the host system.
This is released in https://github.com/k3d-io/k3d/releases/tag/v5.7.0
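On a version with that change, the managed entries can be inspected like this (a sketch; it assumes the k3s-bundled CoreDNS imports the coredns-custom ConfigMap as the PR describes):

kubectl -n kube-system get configmap coredns-custom -o yaml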
In case anyone comes across this thread thinking the problem has been resolved: the merge above was backed out in 5.7.1, so this issue remains. It is also being tracked in #1221.
What did you do
I've set up a cluster using this config file:
command:
k3d cluster create --config k3d-config.yaml
I'm getting this event when trying to deploy my application:
Formatted warning failed event:
Last one says:
After that, I took a look at the coredns ConfigMap:
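The ConfigMap contents weren't captured here; for reference, they can be dumped with:

# Print the coredns ConfigMap data, including the injected host records:
kubectl -n kube-system describe configmap coredns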
As you can see, host.k3d.internal doesn't exist.
What did you expect to happen
host.k3d.internal can be reached.
Which OS & Architecture
Which version of k3d
k3d version v5.2.2
k3s version v1.21.7-k3s1 (default)
Which version of docker
docker version:
docker info: