kyma-incubator / local-kyma

Local installation on k3d cluster

After stopping and starting the cluster again, Kyma does not start #3

Closed valentinvieriu closed 4 years ago

valentinvieriu commented 4 years ago

As the cluster consumes resources, I've tried stopping it when not needed with k3d stop -n kyma and then starting it again with k3d start -n kyma. After the start, once the whole cluster stabilises (no pods left in the CrashLoopBackOff state), trying to log in to the console gives the following error: (screenshot)
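
For reference, the stop/start cycle that triggers this looks roughly as follows, assuming the cluster is named kyma; with k3d v3+ the equivalent subcommands are k3d cluster stop and k3d cluster start:

k3d stop -n kyma        # stop the cluster to free resources (k3d v1/v2 syntax)
k3d start -n kyma       # start it again; console login fails until CoreDNS is re-patched
# k3d v3+ equivalents, assuming the same cluster name:
# k3d cluster stop kyma
# k3d cluster start kyma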

pbochynski commented 4 years ago

You need to apply the CoreDNS patch again. Set all the environment variables (the export section at the beginning of the script) and patch the CoreDNS ConfigMap again after k3d start -n kyma:

export KUBECONFIG="$(k3d get-kubeconfig -n='kyma')"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /k3d-registry)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"
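
If the change does not seem to take effect, a quick sanity check is to look for the substituted IP in the ConfigMap and, if needed, restart CoreDNS. This is a rough sketch that assumes the default k3s deployment name coredns:

kubectl -n kube-system get cm coredns -o yaml | grep "$REGISTRY_IP"   # the registry IP should appear in the patched config
kubectl -n kube-system rollout restart deployment coredns             # nudge CoreDNS if it has not reloaded the config on its own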

Related issue: https://github.com/rancher/k3d/issues/229

felipekunzler commented 4 years ago

Note that since k3d v3.0 the workaround above did not work for me. I used this instead:

export KUBECONFIG="$(k3d kubeconfig merge kyma --switch-context)"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /registry.localhost)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"