kelseyhightower / kubernetes-the-hard-way

Bootstrap Kubernetes the hard way. No scripts.
Apache License 2.0

coredns-1.8.yaml not found #737

Open rlratcliffe opened 1 year ago

rlratcliffe commented 1 year ago

Working on the first step of Deploying the DNS Cluster Add-on, I tried both

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml

and both fail with: error: unable to read URL "https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml", server reported 404 Not Found, status code=404

exdial commented 1 year ago

and received error: unable to read URL "https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml", server reported 404 Not Found, status code=404

Use the Helm chart instead.

$ helm repo add coredns https://coredns.github.io/helm
$ helm --namespace=kube-system install coredns coredns/coredns

hkz-aarvesen commented 1 year ago

I was able to install CoreDNS with Helm, but I think coredns-1.8.yaml installs more than just CoreDNS. After running the helm chart on controller-0 (per the advice above) and running the guide's check command, no pods are reported in the kube-system namespace:

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.

When I try to go to https://storage.googleapis.com/kubernetes-the-hard-way in the browser, I get the following error:

<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist.</Message>
</Error>

I think the bucket has been accidentally destroyed. Or maybe the permissions are now set incorrectly.

Edit: Hack to get it working

I took the 1.7.0 YAML file from the repo (https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/deployments/coredns-1.7.0.yaml), updated the CoreDNS image version from 1.7.0 to 1.8.0 (it appears in just one place in the YAML), and it worked.

Note this is all on controller-0, not locally.

# if you followed the advice to update this via helm, uninstall it
$ helm uninstall coredns

# get the 1.7.0 file
$ wget https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml

# fix it
$ cp coredns-1.7.0.yaml coredns-1.8.0.yaml
$ vi coredns-1.8.0.yaml

# apply it
$ kubectl apply -f coredns-1.8.0.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-6955db5cc-fz9lp   1/1     Running   0          20s
coredns-6955db5cc-k6476   1/1     Running   0          20s
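The manual `vi` edit in the steps above can also be scripted; a minimal sketch, assuming the manifest pins the image as coredns/coredns:1.7.0 (check with grep first if unsure, and note `sed -i` below is the GNU form):

```shell
# Fetch the 1.7.0 manifest and bump the image tag without opening an editor.
# Assumes the image line reads "image: coredns/coredns:1.7.0" (verify first).
wget -q https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml
cp coredns-1.7.0.yaml coredns-1.8.0.yaml
sed -i 's|coredns/coredns:1.7.0|coredns/coredns:1.8.0|' coredns-1.8.0.yaml
# Confirm the edit took effect before applying.
grep 'image:' coredns-1.8.0.yaml
```

After that, `kubectl apply -f coredns-1.8.0.yaml` proceeds exactly as in the transcript above.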
rlratcliffe commented 1 year ago

@hkz-aarvesen I didn't know coredns-1.7.0.yaml was in the repo. Copying it locally, modifying the image version, and applying it worked. Appreciate your help!

exdial commented 1 year ago

@hkz-aarvesen

I was able to install coredns using helm, but I think the coredns-1.8.yml is installing more than just coredns.

coredns-1.8.yaml installs 6 types of resources: ServiceAccount, ClusterRole, ClusterRoleBinding, ConfigMap, Deployment and Service.

curl -s https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml | grep ^kind | wc -l
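The same count can be reproduced offline against any multi-document manifest; a small illustration with a made-up demo.yaml (not the real manifest), showing what the pipeline above is counting:

```shell
# Each YAML document contributes one top-level "kind:" line, so counting
# those lines gives the number of resources the manifest creates.
cat > demo.yaml <<'EOF'
kind: ServiceAccount
---
kind: Deployment
---
kind: Service
EOF
grep -c '^kind' demo.yaml   # prints 3
```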

With the default installation of the Helm chart you get all of these resources except the ServiceAccount, ClusterRole, and ClusterRoleBinding. The ServiceAccount and roles are only installed if you set the serviceAccount.create option (see the chart's configuration options).

$ kubectl -n kube-system get all -l k8s-app=coredns

NAME                                   READY   STATUS    RESTARTS   AGE
pod/coredns-coredns-55b8869fc9-qlj2t   1/1     Running   1          11d

NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/coredns-coredns   ClusterIP   10.32.0.57   <none>        53/UDP,53/TCP   11d

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns-coredns   1/1     1            1           11d

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-coredns-55b8869fc9   1         1         1       11d

After running the helm chart on controller-0 (from this helpful advice) and running the check command, there are no pods running in the system namespace:

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.

There are no pods because you are trying to select pods with the k8s-app=kube-dns label, but the default label for the coredns helm chart is k8s-app=coredns. The original manifest was renamed from kube-dns to coredns. Looks like @kelseyhightower forgot to rename the labels.

$ kubectl -n kube-system get deploy coredns-coredns -o jsonpath='{.metadata.labels}' | jq
{
  "app.kubernetes.io/instance": "coredns",
  "app.kubernetes.io/managed-by": "Helm",
  "app.kubernetes.io/name": "coredns",
  "app.kubernetes.io/version": "1.10.1",
  "helm.sh/chart": "coredns-1.23.0",
  "k8s-app": "coredns",
  "kubernetes.io/cluster-service": "true",
  "kubernetes.io/name": "CoreDNS"
}

You can always use the chart's customLabels option to set the necessary labels, e.g. k8s-app=kube-dns.
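The two chart options mentioned in this thread can be combined at install time; a sketch, assuming the option names serviceAccount.create and customLabels as documented in the chart's values (verify against your chart version):

```shell
# Install CoreDNS with the ServiceAccount/roles enabled and the label
# the guide's check command selects on (sketch, not verified for every chart version).
helm --namespace=kube-system install coredns coredns/coredns \
  --set serviceAccount.create=true \
  --set customLabels.k8s-app=kube-dns
```

With these set, the guide's original check, kubectl get pods -l k8s-app=kube-dns -n kube-system, should find the chart's pods.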

I would say that using the official CoreDNS helm chart is the most correct way to get DNS on the cluster.

lianzeng commented 1 year ago

Download coredns-1.7.0.yaml locally, fix the image version to 1.8.0, then run locally: kubectl apply -f coredns-1.8.0.yaml

chungheon commented 1 year ago

Hi, I followed the suggestion, but now CoreDNS is running with 0/1 containers ready (screenshot attached).

When I call nslookup it fails; this is the log from the CoreDNS pod ("kubectl logs coredns-76cfcdf788-cfv2n -n kube-system", screenshot attached). Not sure if it's related, but I think this is also why the smoke test for exposing a service through a NodePort is failing.

koenry commented 1 year ago

Hey! You can fork the 1.7.0 file to your own repository, change the image version to 1.8.0, and use it as raw content; you won't need to SSH in, and you can follow the original guide as intended. I have it here: https://github.com/koenry/k8s-hard-way-core-dns-1.8

You can also just use the 1.7.0 version without any issues and still finish the guide with a fully functional cluster:

kubectl apply -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml

krosibahili commented 1 year ago

Good!

jg3 commented 7 months ago

See also: there is a ./manifests/ directory with a coredns-1.10.1.yaml here: https://github.com/kelseyhightower/kubernetes-the-hard-way/tree/af7ffdb8e610d31a417a3ce1e876f107e777e34b/manifests

jojoatt commented 4 months ago

Hey! You can fork the 1.7.0 file to your own repository and change the image version to 1.8.0 and use it as raw content you wont need to ssh and you will be able to perform as the original guide was intended. I have it here: https://github.com/koenry/k8s-hard-way-core-dns-1.8

Also you can just use the 1.7 version without any issues and you will be able to finish the guide with fully functional cluster

kubectl apply -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml

Hello,

I am following the Kubernetes the Hard Way guide on the master branch. When I apply your manifest, it does not change anything regarding DNS resolution. For example, when I run nslookup google.com in any container in any pod in my Kubernetes cluster, it does not work. Do you have any ideas on what could be the issue?
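For failures like this, a hedged sketch of the usual DNS checks; the dnsutils image is the one the Kubernetes DNS-debugging docs use, and 10.32.0.10 is this guide's default cluster DNS service IP (both are assumptions to verify against your setup):

```shell
# Run a throwaway pod with DNS tools in it.
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never -- sleep 3600
# The pod's resolver should point at the kube-dns service IP (10.32.0.10 in this guide).
kubectl exec dnsutils -- cat /etc/resolv.conf
# In-cluster name: tests CoreDNS itself.
kubectl exec dnsutils -- nslookup kubernetes.default
# External name: tests CoreDNS's upstream forwarding.
kubectl exec dnsutils -- nslookup google.com
# CoreDNS logs often name the cause directly (e.g. loop detection).
kubectl -n kube-system logs -l k8s-app=kube-dns
```

If the in-cluster lookup works but google.com does not, the problem is usually the forward/upstream configuration in the Corefile rather than CoreDNS itself.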

Thank you in advance