kelseyhightower / kubernetes-the-hard-way

Bootstrap Kubernetes the hard way. No scripts.
Apache License 2.0

CoreDNS deployment not creating pods/kube-scheduler errors #585

Open · paulbehrisch opened this issue 4 years ago

paulbehrisch commented 4 years ago

I followed the tutorial up to `kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml`. This also creates a Deployment, but it never created any pods for me. I went back to step 1 to check whether I had missed a step, but I still wasn't able to make it work.

Since this is a test cluster, I tried `kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous`, which gives anonymous users admin access, just to confirm whether it's an RBAC-related issue.
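(A less drastic way to test the RBAC theory, assuming the admin credentials from the tutorial still work and can impersonate other users, would be something like:

```
# Hedged check: ask the API server what system:anonymous may do,
# without actually granting it cluster-admin.
kubectl auth can-i list pods --as=system:anonymous
kubectl auth can-i list nodes --as=system:anonymous
```
)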


The kube-scheduler on the controllers keeps logging RBAC denials for `system:anonymous`:

```
           └─5460 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.057942    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.171212    5460 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.256641    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.376396    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.452657    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:anonymous" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.495816    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:anonymous" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.564782    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:anonymous" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.615216    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:anonymous" cannot list resource "replicasets" in API group "apps" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.865305    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:anonymous" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Jul 01 13:33:23 controller-1 kube-scheduler[5460]: E0701 13:33:23.968706    5460 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:anonymous" cannot list resource "persistentvolumes" in API group "" at the cluster scope
```
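The scheduler authenticating as `system:anonymous` usually means its client certificate never reaches, or isn't accepted by, the API server. A couple of hedged checks, with paths and embedded certs assumed from the tutorial's layout:

```
# Assumed paths from the tutorial; adjust if yours differ.
# 1. Confirm the scheduler config actually points at a kubeconfig.
sudo cat /etc/kubernetes/config/kube-scheduler.yaml

# 2. Confirm that kubeconfig embeds a client cert issued to system:kube-scheduler.
sudo kubectl config view --kubeconfig /var/lib/kubernetes/kube-scheduler.kubeconfig --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d \
  | openssl x509 -noout -subject
```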
chaunceyt commented 4 years ago

Solution:

```
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```
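If the missing CNI really was the problem, the new pods should show up shortly after applying the manifest; something along these lines (labels assumed from the Calico manifest and the tutorial's CoreDNS manifest) would show it:

```
# calico-node is the DaemonSet the Calico manifest creates.
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
```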
bougyman commented 4 years ago

I had the exact same problem, but applying calico.yaml changed nothing. Now I have 0/1 for calico and 0/2 for coredns:

```
$ kubectl --kubeconfig admin.kubeconfig get deployments -n kube-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers   0/1     0            0           102s
coredns                   0/2     0            0           29m
```

And describing the deployments doesn't really say why:

```
Name:               calico-kube-controllers
Namespace:          kube-system
CreationTimestamp:  Sun, 26 Jul 2020 10:50:46 -0500
Labels:             k8s-app=calico-kube-controllers
Annotations:        Selector:  k8s-app=calico-kube-controllers
Replicas:           1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           k8s-app=calico-kube-controllers
  Service Account:  calico-kube-controllers
  Containers:
   calico-kube-controllers:
    Image:      calico/kube-controllers:v3.15.1
    Port:       <none>
    Host Port:  <none>
    Readiness:  exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ENABLED_CONTROLLERS:  node
      DATASTORE_TYPE:       kubernetes
    Mounts:                 <none>
  Volumes:                  <none>
  Priority Class Name:      system-cluster-critical
OldReplicaSets:             <none>
NewReplicaSet:              <none>
Events:                     <none>

Name:                   coredns
Namespace:              kube-system
CreationTimestamp:      Sun, 26 Jul 2020 10:23:09 -0500
Labels:                 k8s-app=kube-dns
                        kubernetes.io/name=CoreDNS
Annotations:            Selector:  k8s-app=kube-dns
Replicas:               2 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Service Account:  coredns
  Containers:
   coredns:
    Image:       coredns/coredns:1.7.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
  Volumes:
   config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               coredns
    Optional:           false
  Priority Class Name:  system-cluster-critical
OldReplicaSets:         <none>
NewReplicaSet:          <none>
Events:                 <none>
```
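Both Deployments report `0 updated` with `NewReplicaSet: <none>` and no events at all, which suggests the Deployment controller (part of kube-controller-manager) never created a ReplicaSet for them. A hedged way to confirm:

```
# If this comes back empty for calico/coredns, the deployment controller never acted on them.
kubectl --kubeconfig admin.kubeconfig -n kube-system get replicasets
```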

Also can't run busybox:

```
% kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
Error from server (Forbidden): pods "busybox" is forbidden: error looking up service account default/default: serviceaccount "default" not found
```

I'm thinking the missing `default` service account might be the culprit, but I don't see it mentioned in any of the event logs for the deployments.
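For what it's worth, the `default` ServiceAccount in each namespace is created by the serviceaccount controller inside kube-controller-manager, so if it's missing everywhere, that controller probably isn't running. A quick hedged check:

```
# If no namespace has a "default" ServiceAccount, the serviceaccount controller isn't doing its job.
kubectl --kubeconfig admin.kubeconfig get serviceaccounts --all-namespaces
```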

`kubectl cluster-info` looks fine, though:

```
Kubernetes master is running at https://35.222.214.59:6443
CoreDNS is running at https://35.222.214.59:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```
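Although `cluster-info` only prints the Service URL; it doesn't say whether anything is actually backing it. Checking the endpoints would, for example:

```
# With no coredns pods running, this should show no addresses behind the kube-dns Service.
kubectl --kubeconfig admin.kubeconfig -n kube-system get endpoints kube-dns
```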

What could be wrong / missing?

bougyman commented 4 years ago

Perhaps this has something to do with it. I noticed I'm getting leader-election events like crazy, and this is in the kube-controller-manager logs:

```
Jul 26 16:01:44 controller-0 kube-controller-manager[6023]: W0726 16:01:44.710524    6023 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Jul 26 16:01:44 controller-0 kube-controller-manager[6023]: W0726 16:01:44.711687    6023 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Jul 26 16:01:44 controller-0 kube-controller-manager[6023]: W0726 16:01:44.711751    6023 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
```
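Those `W...` lines are only warnings about optional delegated-auth flags and are probably harmless here. To see exactly which flags the service was started with (assuming the tutorial's systemd unit), something like:

```
# Print the unit file, including the ExecStart flags, as systemd sees it.
sudo systemctl cat kube-controller-manager
```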

The kube-controller-manager service just keeps restarting:

```
Jul 26 17:01:01 controller-0 kube-controller-manager[2985]: E0726 17:01:01.696231    2985 controllermanager.go:521] Error starting "csrsigning"
Jul 26 17:01:01 controller-0 kube-controller-manager[2985]: F0726 17:01:01.696254    2985 controllermanager.go:235] error starting controllers: failed to start certificate c>
Jul 26 17:01:01 controller-0 systemd[1]: kube-controller-manager.service: Main process exited, code=exited, status=255/EXCEPTION
```
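That fatal `csrsigning` error looks like the real problem: the certificate-signing controller can't start, and in this tutorial it is fed by the `--cluster-signing-cert-file` / `--cluster-signing-key-file` flags. A few hedged checks, with the tutorial's paths assumed:

```
# Assumed paths from the tutorial's kube-controller-manager unit.
ls -l /var/lib/kubernetes/ca.pem /var/lib/kubernetes/ca-key.pem

# The cert and key must be a matching pair; these two digests should be identical.
sudo openssl x509 -noout -modulus -in /var/lib/kubernetes/ca.pem | openssl md5
sudo openssl rsa  -noout -modulus -in /var/lib/kubernetes/ca-key.pem | openssl md5
```

The leader-election churn below is most likely just a symptom: each controller instance crashes, so another one wins the election a few seconds later.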
```
% kubectl get events -n kube-system
LAST SEEN   TYPE     REASON           OBJECT                              MESSAGE
3m10s       Normal   LeaderElection   endpoints/kube-controller-manager   controller-2_50988d97-b53a-4f7b-8da8-89f739691fc1 became leader
3m10s       Normal   LeaderElection   lease/kube-controller-manager       controller-2_50988d97-b53a-4f7b-8da8-89f739691fc1 became leader
2m50s       Normal   LeaderElection   endpoints/kube-controller-manager   controller-1_06e7355b-40fa-4dd1-92c1-e6aab8a7b246 became leader
2m50s       Normal   LeaderElection   lease/kube-controller-manager       controller-1_06e7355b-40fa-4dd1-92c1-e6aab8a7b246 became leader
2m30s       Normal   LeaderElection   endpoints/kube-controller-manager   controller-0_e4a5f4a5-4492-48cb-8959-f3c9dc2a6e96 became leader
2m30s       Normal   LeaderElection   lease/kube-controller-manager       controller-0_e4a5f4a5-4492-48cb-8959-f3c9dc2a6e96 became leader
2m7s        Normal   LeaderElection   endpoints/kube-controller-manager   controller-1_d8325837-bffd-49b4-9456-a92897f855ed became leader
2m7s        Normal   LeaderElection   lease/kube-controller-manager       controller-1_d8325837-bffd-49b4-9456-a92897f855ed became leader
108s        Normal   LeaderElection   endpoints/kube-controller-manager   controller-0_17607701-70d9-49f4-a443-6118f49494d9 became leader
108s        Normal   LeaderElection   lease/kube-controller-manager       controller-0_17607701-70d9-49f4-a443-6118f49494d9 became leader
89s         Normal   LeaderElection   endpoints/kube-controller-manager   controller-2_9cdfac5c-e552-420b-9197-21f5eb6139dc became leader
89s         Normal   LeaderElection   lease/kube-controller-manager       controller-2_9cdfac5c-e552-420b-9197-21f5eb6139dc became leader
70s         Normal   LeaderElection   endpoints/kube-controller-manager   controller-0_7ad86a33-e9ea-458e-8356-6f29f0c2a506 became leader
70s         Normal   LeaderElection   lease/kube-controller-manager       controller-0_7ad86a33-e9ea-458e-8356-6f29f0c2a506 became leader
50s         Normal   LeaderElection   endpoints/kube-controller-manager   controller-1_f5325e69-4d23-421e-82da-5bc5aafadf93 became leader
50s         Normal   LeaderElection   lease/kube-controller-manager       controller-1_f5325e69-4d23-421e-82da-5bc5aafadf93 became leader
33s         Normal   LeaderElection   endpoints/kube-controller-manager   controller-2_4a9afb03-6b2e-4d6e-b96d-c2dd31b59e88 became leader
33s         Normal   LeaderElection   lease/kube-controller-manager       controller-2_4a9afb03-6b2e-4d6e-b96d-c2dd31b59e88 became leader
11s         Normal   LeaderElection   endpoints/kube-controller-manager   controller-1_e50dbc1b-d635-495f-87fd-ffecb7767ca9 became leader
11s         Normal   LeaderElection   lease/kube-controller-manager       controller-1_e50dbc1b-d635-495f-87fd-ffecb7767ca9 became leader
```
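If that turns out to be the cause, then after fixing the flags/files and restarting, the service should stay up, the leader-election churn should stop, and the missing pieces should appear on their own; roughly (assuming the tutorial layout):

```
# On each controller node:
sudo systemctl restart kube-controller-manager
sudo systemctl status kube-controller-manager --no-pager

# Back on the client: the serviceaccount and deployment controllers should now have
# created the default ServiceAccount and the coredns/calico ReplicaSets and pods.
kubectl --kubeconfig admin.kubeconfig get serviceaccount default -n default
kubectl --kubeconfig admin.kubeconfig -n kube-system get pods
```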