You need to apply the CoreDNS patch again. Set all the environment variables (the export section at the beginning of the script) and patch the CoreDNS ConfigMap again after k3d start -n kyma:
export KUBECONFIG="$(k3d get-kubeconfig -n='kyma')"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /k3d-registry)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"
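The coredns-patch.tpl itself isn't shown in this thread. A minimal sketch of what it could contain, assuming the goal is to resolve registry.localhost to the registry container's IP via the NodeHosts entry that k3s ships in the coredns ConfigMap (REGISTRY_IP is the placeholder that sed substitutes above) -- the real template may differ:
# hypothetical coredns-patch.tpl, not from the original thread
data:
  NodeHosts: |
    REGISTRY_IP registry.localhost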
Related issue: https://github.com/rancher/k3d/issues/229
Note that as of k3d v3.0 the workaround above failed for me. I used this instead:
export KUBECONFIG="$(k3d kubeconfig merge kyma --switch-context)"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /registry.localhost)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"
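Not part of the original steps, but one way to sanity-check that the patch landed is to dump the ConfigMap and look for the substituted registry IP:
kubectl -n kube-system get cm coredns -o yaml | grep "$REGISTRY_IP"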
As the cluster consumes resources, I've tried to stop it when not needed using k3d stop -n kyma, and then start it again with k3d start -n kyma. It seems that after the whole cluster stabilises following the start (no pods in the CrashLoopBackOff state), trying to log in to the console gives you: