kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons; supports all-in-one, multi-node, and HA installations 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Installing KubeSphere 3.4.1 gets stuck at the final init step ([InitKubernetesModule] Init cluster using kubeadm); kubelet status reports "Error getting node" err="node \"master\" not found" #2248

Closed: jinwendaiya closed this issue 4 months ago

jinwendaiya commented 5 months ago

What version of KubeKey has the issue?

v3.1.1"

What is your OS environment?

openEuler 22.10 LTS

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 10.30.10.13, internalAddress: 10.30.10.13, user: root, password: "Ntt@2024!"}
  - {name: node1, address: 10.30.10.7, internalAddress: 10.30.10.7, user: root, password: "Ntt@2024!"}
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - node1
  controlPlaneEndpoint:
    domain:
    address: "10.30.10.13"
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/16
    kubeServiceCIDR: 10.233.0.0/16
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""

  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: false
    es:
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false

  auditing:
    enabled: false

  devops:
    enabled: true
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600

A clear and concise description of what happened.

Installation error output:

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred: timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtime's CLI. Here is one example of how you may list all running Kubernetes containers by using crictl:
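The crictl example itself was cut off in the paste; for a containerd runtime it is typically something like the following (a sketch, adjust the socket path if yours differs):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause
# once the failing container is identified, inspect its logs:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID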

systemctl status kubelet:

[root@master ~]# systemctl status kubelet -l
○ kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: inactive (dead) since Sat 2024-05-18 13:10:52 CST; 20h ago
       Docs: http://kubernetes.io/docs/
    Process: 25693 ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 25693 (code=exited, status=0/SUCCESS)
        CPU: 2.152s

May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.207042 25693 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.258281 25693 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.290267 25693 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network>
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.307719 25693 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.408646 25693 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: I0518 13:10:52.453330 25693 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 18 13:10:52 master systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 18 13:10:52 master systemd[1]: kubelet.service: Deactivated successfully.
May 18 13:10:52 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 18 13:10:52 master systemd[1]: kubelet.service: Consumed 2.152s CPU time.

Relevant log output

journalctl -u kubelet | less

May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028247   24629 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028297   24629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout" pod="kube-system/kube-apiserver-master"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028322   24629 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout" pod="kube-system/kube-apiserver-master"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028371   24629 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-master_kube-system(85c702565972003fef2047c1d4381b47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-master_kube-system(85c702565972003fef2047c1d4381b47)\\\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.8\\\": failed to pull image \\\"registry.k8s.io/pause:3.8\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.8\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.8\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\\\": dial tcp 173.194.174.82:443: i/o timeout\"" pod="kube-system/kube-apiserver-master" podUID=85c702565972003fef2047c1d4381b47
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.051437   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.152102   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.252506   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.353246   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.453684   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.554420   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.654520   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.754968   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.855634   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.955940   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.056573   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.157297   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.257810   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.358416   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.459080   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.471881   24629 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master?timeout=10s": dial tcp 10.30.10.13:6443: connect: connection refused
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.559284   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: I0518 12:59:10.578336   24629 kubelet_node_status.go:70] "Attempting to register node" node="master"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.578790   24629 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 10.30.10.13:6443: connect: connection refused" node="master"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.660110   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.760766   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.861511   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.962549   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.062884   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.163298   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.263576   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.364322   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.464921   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.565703   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.666495   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.767049   24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
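The journal excerpt above shows the actual blocker: containerd cannot pull the sandbox image registry.k8s.io/pause:3.8 (the HTTPS request to the upstream registry times out), so no static-pod sandbox can start, kube-apiserver never comes up, and the kubelet keeps logging "node \"master\" not found". On a host without direct access to registry.k8s.io, one workaround is to pull the pause image from a reachable mirror and retag it; the sketch below assumes the registry.cn-beijing.aliyuncs.com/kubesphereio mirror namespace (the one KubeKey commonly uses) is reachable from your network:

# pull the pause image from a mirror into containerd's k8s.io namespace
ctr -n k8s.io images pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8
# retag it under the name kubeadm expects
ctr -n k8s.io images tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8 registry.k8s.io/pause:3.8

Alternatively, point the containerd CRI plugin at the mirror by setting sandbox_image = "registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8" under [plugins."io.containerd.grpc.v1.cri"] in /etc/containerd/config.toml, then restart containerd.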

Additional information

Kubernetes v1.25.3

RuntimeName: containerd

RuntimeVersion: v1.7.1
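To double-check which sandbox image this containerd is actually configured to use, crictl can dump the CRI runtime config (a sketch; in containerd's JSON output the field is named sandboxImage):

crictl info | grep -i sandboxImage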

JasonKamden commented 4 months ago

Has this been solved? I'm running into the same problem.

nejinn commented 4 months ago

I ran into this too. Your haproxy has a problem; get the SLB (load balancer) working properly first, then reinstall.
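Worth noting: controlPlaneEndpoint.domain is empty in the config above, so KubeKey falls back to its default lb.kubesphere.local, which the kubelet log shows resolving to 10.30.10.13 with the connection refused because kube-apiserver never started. A quick sanity check from the master node, as a sketch:

# does the control-plane endpoint resolve?
getent hosts lb.kubesphere.local
# does the apiserver answer? (this will keep failing until kube-apiserver is actually running)
curl -k https://lb.kubesphere.local:6443/healthz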

jinwendaiya commented 4 months ago

> Has this been solved? I'm running into the same problem.

Configure the /etc/hosts resolution for master, then clean up the cluster and redo the installation.
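Concretely, that means an entry for master (and for the default lb.kubesphere.local endpoint seen in the logs) on every node, followed by a clean reinstall. A sketch, assuming the cluster spec above is saved as config-sample.yaml (a hypothetical filename), with names and IPs adjusted to your environment:

# /etc/hosts on all nodes
10.30.10.13  master lb.kubesphere.local
10.30.10.7   node1

# tear the failed cluster down, then re-create it
./kk delete cluster -f config-sample.yaml
./kk create cluster -f config-sample.yaml --with-kubesphere v3.4.1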


StrongLei commented 2 months ago

I ran into this too. It seems to be caused by kubeadm's poor compatibility with certain Kubernetes versions. I solved it by switching to a different Kubernetes version: ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.2 -y (see https://kubesphere.io/zh/docs/v3.3/quick-start/all-in-one-on-linux/).

StrongLei commented 1 month ago

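To list the Kubernetes versions a given KubeKey release supports (useful when picking a value for --with-kubernetes):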

./kk version --show-supported-k8s