kubesphere / kubekey

Install Kubernetes/K3s only, both Kubernetes/K3s and KubeSphere, and related cloud-native add-ons, it supports all-in-one, multi-node, and HA 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

ERRO[09:20:51 MSK] PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init #572

Closed · MonaxGT closed this issue 3 years ago

MonaxGT commented 3 years ago

Hi KubeSphere team!

I followed the manual to deploy KubeSphere. I installed all of the dependencies. I ran ./kk create config --with-kubesphere v3.1.0 with Kubernetes version 1.20.4 (I also tried v1.19.8).

Then I ran sudo ./kk create cluster -f config-sample.yaml and received this error:

ERRO[09:20:51 MSK] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0706 09:20:51.194782   13677 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
    [WARNING FileExisting-ebtables]: ebtables not found in system path
    [WARNING FileExisting-ethtool]: ethtool not found in system path
    [WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileExisting-ip]: ip not found in system path
    [ERROR FileExisting-iptables]: iptables not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=10.10.10.101
WARN[09:20:51 MSK] Task failed ...
WARN[09:20:51 MSK] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error

But if I just copy sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl" and run it from the terminal, I don't see any problem:

[k8s-user@k8s-master kubersphere]$ sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0706 08:46:00.323861  127486 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0706 08:46:00.458826  127486 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master k8s-master.local k8s-node1 k8s-node1.local k8s-node2 k8s-node2.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.10.10.101 127.0.0.1 10.10.10.102 10.10.10.103]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.502290 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 70ezkf.c8t9f42yqmm714bh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

I tried to delete and create the cluster again several times but received the same error again and again...

After copying the config file to $HOME/.kube, joining the worker nodes with kubeadm join, and running kubectl get nodes, I see only nodes in NotReady status... and nothing changes.

OS: Red Hat Enterprise Linux Server 7.9 (Maipo)

Can you help me with this error?

RolandMa1986 commented 3 years ago

That should be the root cause. Before you set up Kubernetes, you need to add NO_PROXY=*.cluster.local to your host environment to prevent internal services from going through the proxy. You can find the issue at https://github.com/kubernetes/kubeadm/issues/666

or https://stackoverflow.com/questions/45580788/how-to-install-kubernetes-cluster-behind-proxy-with-kubeadm
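
For reference, a rough sketch of what that can look like on each node. The proxy URL below is a placeholder, and the CIDRs are the KubeKey defaults implied by the 10.233.0.10 cluster DNS address, so adjust both to your environment:

    # /etc/environment
    http_proxy=http://proxy.example.com:3128
    https_proxy=http://proxy.example.com:3128
    no_proxy=localhost,127.0.0.1,10.233.0.0/18,10.233.64.0/18,.cluster.local,.svc,lb.kubesphere.local

    # Docker needs the same exceptions, e.g. in /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"
    Environment="NO_PROXY=localhost,127.0.0.1,10.233.0.0/18,10.233.64.0/18,.cluster.local,.svc,lb.kubesphere.local"

Then reload and restart Docker so the drop-in takes effect: sudo systemctl daemon-reload && sudo systemctl restart docker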

MonaxGT commented 3 years ago

That should be the root cause.

About the proxy? Or about deleting kube-apiserver-master?

When I tried to install KubeSphere (create cluster), I added all of the private networks to the proxy ignore list in /etc/environment and in the Docker settings.

MonaxGT commented 3 years ago

That should be the root cause. Before you set up Kubernetes, you need to add NO_PROXY=*.cluster.local to your host environment to prevent internal services from going through the proxy. You can find the issue at kubernetes/kubeadm#666

or https://stackoverflow.com/questions/45580788/how-to-install-kubernetes-cluster-behind-proxy-with-kubeadm

Ok, I will try!

RolandMa1986 commented 3 years ago

You can try to add the no_proxy environment variable directly in the kube-apiserver.yaml config: `vi /etc/kubernetes/manifests/kube-apiserver.yaml`

    env:
    - name: NO_PROXY
      value: 10.96.0.0/16,10.244.0.0/16,<nodes-ip-range> # add all necessary IPs and DNS names
MonaxGT commented 3 years ago

You can try to add the no_proxy environment variable directly in the kube-apiserver.yaml config: `vi /etc/kubernetes/manifests/kube-apiserver.yaml`

    env:
    - name: NO_PROXY
      value: 10.96.0.0/16,10.244.0.0/16,<nodes-ip-range> # add all necessary IPs and DNS names

I checked my file and see:

    - name: no_proxy
      value: localhost,127.0.0.1,10.0.0.0/8,lb.kubesphere.local,localaddress,.localdomain.com

Should I use exactly the networks from your post?

MonaxGT commented 3 years ago

I've tried to recreate the cluster after setting NO_PROXY=*.cluster.local. Now, when I try to log in, I get:

request to http://ks-apiserver.kubesphere-system.svc/oauth/token failed, reason: getaddrinfo ENOTFOUND ks-apiserver.kubesphere-system.svc

MonaxGT commented 3 years ago

I noticed a really interesting thing... Every time I run kk create cluster, the DNS IP address is 169.254.25.10... I changed this to 10.233.0.10, but after running create cluster I again see only 169.254.25.10 in the config file... BUT when I run create cluster, I see in the log that it tried to use 169.254.25.10 while 10.233.0.10 was recommended.

If I am not mistaken, kk changes the correct DNS address to the wrong one and tries to deploy with it...

zryfish commented 3 years ago

I noticed a really interesting thing... Every time I run kk create cluster, the DNS IP address is 169.254.25.10... I changed this to 10.233.0.10, but after running create cluster I again see only 169.254.25.10 in the config file... BUT when I run create cluster, I see in the log that it tried to use 169.254.25.10 while 10.233.0.10 was recommended.

If I am not mistaken, kk changes the correct DNS address to the wrong one and tries to deploy with it...

169.254.25.10 is the address used by nodelocaldns; 10.233.0.10 is the address used by the cluster DNS. There is nothing wrong with these addresses. You should check your host DNS settings and make sure there are no unavailable upstream DNS servers in your /etc/resolv.conf.
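
A quick way to run that check on the host (a sketch; it assumes nslookup from bind-utils is available on RHEL):

    # query every nameserver listed in /etc/resolv.conf directly;
    # any server that times out is an "unavailable upstream"
    for ns in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf); do
        echo "== $ns =="
        nslookup kubesphere.io "$ns"
    done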

RolandMa1986 commented 3 years ago

The previous check shows your CoreDNS works. Maybe you can check nodelocaldns's status and whether the connection between NodeLocalDNS and CoreDNS was proxied: kubectl -n kube-system logs nodelocaldns-<tab>
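
If tab completion is not set up, the same logs can be fetched with a label selector (assuming the pods carry the k8s-app: nodelocaldns label from the DaemonSet that kk generates):

    kubectl -n kube-system logs -l k8s-app=nodelocaldns --tail=50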

24sama commented 3 years ago

I see no nodelocaldns pod in your kube-system namespace. Maybe you should apply them manually:

kubectl apply -f /etc/kubernetes/nodelocaldns.yaml
kubectl apply -f /etc/kubernetes/nodelocaldnsConfigmap.yaml
MonaxGT commented 3 years ago

I noticed a really interesting thing... Every time I run kk create cluster, the DNS IP address is 169.254.25.10... I changed this to 10.233.0.10, but after running create cluster I again see only 169.254.25.10 in the config file... BUT when I run create cluster, I see in the log that it tried to use 169.254.25.10 while 10.233.0.10 was recommended. If I am not mistaken, kk changes the correct DNS address to the wrong one and tries to deploy with it...

169.254.25.10 is the address used by nodelocaldns; 10.233.0.10 is the address used by the cluster DNS. There is nothing wrong with these addresses. You should check your host DNS settings and make sure there are no unavailable upstream DNS servers in your /etc/resolv.conf.

OK, but it wanted to use exactly this address for clusterDNS:

ERRO[09:58:06 MSK] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0715 09:58:05.904247   57813 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

I checked: there were no configuration files in /etc/kubernetes before I ran kk create cluster.

MonaxGT commented 3 years ago

And in /etc/kubernetes/kubeadm-config.yaml I see:

dns:
  type: CoreDNS
  imageRepository: coredns
  imageTag: 1.6.9

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 169.254.25.10
clusterDomain: cluster.local
evictionHard:

And I checked /etc/resolv.conf on my host. All of the DNS servers work properly.

MonaxGT commented 3 years ago

I see no nodelocaldns pod in your kube-system namespace. Maybe you should apply them manually:

kubectl apply -f /etc/kubernetes/nodelocaldns.yaml
kubectl apply -f /etc/kubernetes/nodelocaldnsConfigmap.yaml

I couldn't find these yaml files in /etc/kubernetes when the cluster was up.

24sama commented 3 years ago

I couldn't find these yaml files in /etc/kubernetes when the cluster was up.

These files are generated by kk, and kk will apply them automatically.

It is possible that these files were not generated after you got an error at this step. Then you created the cluster manually, and some components are missing.

ERRO[09:58:06 MSK] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0715 09:58:05.904247   57813 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

You can refer to the following settings:

nodelocaldnsConfigmap.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists

data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . { coredns svc address } {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . { coredns svc address } {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . { coredns svc address } {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    }
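
Note that { coredns svc address } above is a placeholder, not literal configuration: it must be replaced with the ClusterIP of your coredns Service. One way to look it up (a sketch using jsonpath):

    kubectl -n kube-system get svc coredns -o jsonpath='{.spec.clusterIP}'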

nodelocaldns.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nodelocaldns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: nodelocaldns
  template:
    metadata:
      labels:
        k8s-app: nodelocaldns
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9253'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: nodelocaldns
      hostNetwork: true
      dnsPolicy: Default  # Don't use cluster DNS.
      tolerations:
      - effect: NoSchedule
        operator: "Exists"
      - effect: NoExecute
        operator: "Exists"
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: node-cache
        image: kubesphere/k8s-dns-node-cache:1.15.12
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-localip", "169.254.25.10", "-conf", "/etc/coredns/Corefile", "-upstreamsvc", "coredns" ]
        securityContext:
          privileged: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9253
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            host: 169.254.25.10
            path: /health
            port: 9254
            scheme: HTTP
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 10
        readinessProbe:
          httpGet:
            host: 169.254.25.10
            path: /health
            port: 9254
            scheme: HTTP
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 10
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
        - name: config-volume
          configMap:
            name: nodelocaldns
            items:
            - key: Corefile
              path: Corefile
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 20%
    type: RollingUpdate
MonaxGT commented 3 years ago

Hi! I recreated the cluster again and tried to put the configs nodelocaldns.yaml and nodelocaldnsConfigmap.yaml in /etc/kubernetes/, and it didn't help. Then I applied these configs after installing KubeSphere and got 3 pods which restart again and again:

[kuber@master kubernetes]$ kubectl get -n kube-system pods
NAME                                           READY   STATUS             RESTARTS   AGE
calico-kube-controllers-8f59968d4-v7xgq        1/1     Running            0          17m
calico-node-qw2gd                              1/1     Running            0          17m
calico-node-wndnp                              1/1     Running            0          17m
calico-node-zwwpr                              1/1     Running            0          17m
coredns-86cfc99d74-gnd27                       1/1     Running            0          18m
coredns-86cfc99d74-vzgpl                       1/1     Running            0          18m
kube-apiserver-master                          1/1     Running            0          18m
kube-controller-manager-master                 1/1     Running            0          18m
kube-proxy-6jl2v                               1/1     Running            0          18m
kube-proxy-bkzd7                               1/1     Running            0          18m
kube-proxy-qkq8s                               1/1     Running            0          17m
kube-scheduler-master                          1/1     Running            0          18m
nodelocaldns-5kcs8                             0/1     CrashLoopBackOff   4          3m2s
nodelocaldns-lvhnz                             0/1     CrashLoopBackOff   4          3m2s
nodelocaldns-pvtwf                             0/1     CrashLoopBackOff   4          3m2s
openebs-localpv-provisioner-7cfc686bc5-g6lqv   1/1     Running            0          17m
snapshot-controller-0                          1/1     Running            0          9m35s
[kuber@master kubernetes]$ kubectl get -n kubesphere-system pods
NAME                                    READY   STATUS    RESTARTS   AGE
ks-apiserver-949bb66c8-zmh5r            1/1     Running   0          10m
ks-console-5576fccbb8-b6gqm             1/1     Running   0          18m
ks-controller-manager-5c48b7c97-xpst4   1/1     Running   0          10m
ks-installer-5d65c99d54-xbhth           1/1     Running   0          20m

[kuber@master kubernetes]$ kubectl -n kubesphere-system exec -it ks-apiserver-949bb66c8-zmh5r -- sh
/ # nslookup ks-controller-manager.kubesphere-system.svc.cluster.local 10.233.0.10
Server:     10.233.0.10
Address:    10.233.0.10:53

Name:   ks-controller-manager.kubesphere-system.svc.cluster.local
Address: 10.233.0.128
MonaxGT commented 3 years ago

All my running pods:

[kuber@master kubernetes]$ kubectl get pod --all-namespaces
NAMESPACE                      NAME                                               READY   STATUS             RESTARTS   AGE
kube-system                    calico-kube-controllers-8f59968d4-v7xgq            1/1     Running            0          36m
kube-system                    calico-node-qw2gd                                  1/1     Running            0          36m
kube-system                    calico-node-wndnp                                  1/1     Running            0          36m
kube-system                    calico-node-zwwpr                                  1/1     Running            0          36m
kube-system                    coredns-86cfc99d74-gnd27                           1/1     Running            0          38m
kube-system                    coredns-86cfc99d74-vzgpl                           1/1     Running            0          38m
kube-system                    kube-apiserver-master                              1/1     Running            0          38m
kube-system                    kube-controller-manager-master                     1/1     Running            0          38m
kube-system                    kube-proxy-6jl2v                                   1/1     Running            0          37m
kube-system                    kube-proxy-bkzd7                                   1/1     Running            0          38m
kube-system                    kube-proxy-qkq8s                                   1/1     Running            0          37m
kube-system                    kube-scheduler-master                              1/1     Running            0          38m
kube-system                    nodelocaldns-5kcs8                                 0/1     CrashLoopBackOff   9          22m
kube-system                    nodelocaldns-lvhnz                                 0/1     CrashLoopBackOff   9          22m
kube-system                    nodelocaldns-pvtwf                                 0/1     CrashLoopBackOff   9          22m
kube-system                    openebs-localpv-provisioner-7cfc686bc5-g6lqv       1/1     Running            0          36m
kube-system                    snapshot-controller-0                              1/1     Running            0          29m
kubesphere-controls-system     default-http-backend-76d9fb4bb7-9zp25              1/1     Running            0          28m
kubesphere-controls-system     kubectl-admin-7b69cb97d5-82zsp                     1/1     Running            0          27m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running            0          27m
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running            0          27m
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running            0          27m
kubesphere-monitoring-system   kube-state-metrics-687c7c4d86-xfdf4                3/3     Running            0          27m
kubesphere-monitoring-system   node-exporter-9z2xf                                2/2     Running            0          27m
kubesphere-monitoring-system   node-exporter-t4tnv                                2/2     Running            0          27m
kubesphere-monitoring-system   node-exporter-w44bw                                2/2     Running            0          27m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-bjzjs   1/1     Running            0          27m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-npn2s   1/1     Running            0          27m
kubesphere-monitoring-system   notification-manager-operator-78595d8666-2nv7h     2/2     Running            0          27m
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running            1          27m
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running            1          27m
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-brhjd                2/2     Running            0          27m
kubesphere-system              ks-apiserver-949bb66c8-zmh5r                       1/1     Running            0          19m
kubesphere-system              ks-console-5576fccbb8-b6gqm                        1/1     Running            0          28m
kubesphere-system              ks-controller-manager-5c48b7c97-xpst4              1/1     Running            0          19m
kubesphere-system              ks-installer-5d65c99d54-xbhth                      1/1     Running            0          30m
24sama commented 3 years ago

Can you paste the pod logs here?

Use:

kubectl -n kube-system logs -p nodelocaldns-5kcs8
kubectl -n kube-system describe po nodelocaldns-5kcs8
MonaxGT commented 3 years ago
[kuber@master kubernetes]$ kubectl -n kube-system logs -p pod/nodelocaldns-5kcs8
2021/07/17 07:45:33 [INFO] Using Corefile /etc/coredns/Corefile
2021/07/17 07:45:33 [ERROR] Failed to read node-cache coreFile /etc/coredns/Corefile.base - open /etc/coredns/Corefile.base: no such file or directory
2021/07/17 07:45:33 [ERROR] Failed to sync kube-dns config directory /etc/kube-dns, err: lstat /etc/kube-dns: no such file or directory
plugin/forward: /etc/coredns/Corefile:10 - Error during parsing: Wrong argument count or unexpected line ending after '.'

[kuber@master kubernetes]$ kubectl -n kube-system describe pod/nodelocaldns-5kcs8
Name:                 nodelocaldns-5kcs8
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 master/10.10.10.101
Start Time:           Sat, 17 Jul 2021 01:06:03 +0300
Labels:               controller-revision-hash=5c9d7c594f
                      k8s-app=nodelocaldns
                      pod-template-generation=1
Annotations:          prometheus.io/port: 9253
                      prometheus.io/scrape: true
Status:               Running
IP:                   10.10.10.101
IPs:
  IP:           10.10.10.101
Controlled By:  DaemonSet/nodelocaldns
Containers:
  node-cache:
    Container ID:  docker://b98e891096cb26e4f3a980d656563333a6d159ea826d97d5d1b898c6bb5d0531
    Image:         kubesphere/k8s-dns-node-cache:1.15.12
    Image ID:      docker-pullable://kubesphere/k8s-dns-node-cache@sha256:3b55377cd3b8098a79dc3f276cc542a681e3f2b71554addac9a603cc65e4829e
    Ports:         53/UDP, 53/TCP, 9253/TCP
    Host Ports:    53/UDP, 53/TCP, 9253/TCP
    Args:
      -localip
      169.254.25.10
      -conf
      /etc/coredns/Corefile
      -upstreamsvc
      coredns
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 17 Jul 2021 10:50:38 +0300
      Finished:     Sat, 17 Jul 2021 10:50:38 +0300
    Ready:          False
    Restart Count:  119
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://169.254.25.10:9254/health delay=0s timeout=5s period=10s #success=1 #failure=10
    Readiness:    http-get http://169.254.25.10:9254/health delay=0s timeout=5s period=10s #success=1 #failure=10
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nodelocaldns-token-pwtdk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nodelocaldns
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  nodelocaldns-token-pwtdk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nodelocaldns-token-pwtdk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule op=Exists
                 :NoExecute op=Exists
                 CriticalAddonsOnly op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Warning  BackOff  4m28s (x2744 over 9h)  kubelet  Back-off restarting failed container

Yes, of course.

24sama commented 3 years ago

According to this line:

plugin/forward: /etc/coredns/Corefile:10 - Error during parsing: Wrong argument count or unexpected line ending after '.'

Maybe there is something wrong in the nodelocaldnsConfigmap.yaml.

MonaxGT commented 3 years ago

According to this line:

plugin/forward: /etc/coredns/Corefile:10 - Error during parsing: Wrong argument count or unexpected line ending after '.'

Maybe there is something wrong in the nodelocaldnsConfigmap.yaml.

Yes, I understand. I compared what you sent earlier in this topic with what I applied. There is no difference between them.

24sama commented 3 years ago

Sorry, maybe my reply was not clear. Did you configure the variable { coredns svc address } correctly? The variable should be your coredns Service IP address.

You can check it with this command:

[root@node1 ~]# kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   10.233.0.3   <none>        53/UDP,53/TCP,9153/TCP   3h2m

Then you should modify the nodelocaldnsConfigmap.yaml:

forward . 10.233.0.3 {
            force_tcp
        }
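
After editing, re-apply the ConfigMap and restart the nodelocaldns pods so they pick up the new Corefile (a sketch, using the file path from earlier in this thread and the k8s-app=nodelocaldns label from the DaemonSet):

    kubectl apply -f /etc/kubernetes/nodelocaldnsConfigmap.yaml
    kubectl -n kube-system delete pod -l k8s-app=nodelocaldns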
MonaxGT commented 3 years ago

Hi all! Thanks @24sama, it works. I really thought that { coredns svc address } was some kind of template.

After deploying, everything works excellently. I took a pause to test the system and deploy an OpenFaaS project, and it seems to work.

I noticed that etcd is not shown in Components. But that may be correct, because the cluster works. Maybe showing etcd in Components is some deprecated feature?

Thanks a lot for all of your help. I couldn't figure out why exactly the ERRO[09:20:51 MSK] PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init error occurred, but I could deploy KubeSphere.

FeynmanZhou commented 3 years ago

@MonaxGT You are welcome.

Feel free to join our community on the Slack channel!

24sama commented 3 years ago

@MonaxGT Enjoy KubeSphere!