rancher / rke

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes distribution that runs entirely within containers.
Apache License 2.0

Failed to start Certificates deployer container on host #2417

Closed: WMP closed this issue 3 years ago

WMP commented 3 years ago

RKE version: 0.2.11

Docker version: 18.09.9, build 039a7df9ba

Operating system and kernel: Ubuntu 18.04, kernel 4.15.0-45-generic

Type/provider of hosts: Bare-metal
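
For completeness, the environment details above can be gathered with the commands the RKE issue template suggests (output will naturally differ per node; shown here only as a sketch):

docker version        # Docker client/server version (here: 18.09.9, build 039a7df9ba)
docker info           # storage driver, cgroup driver, running containers, etc.
cat /etc/os-release   # OS name and release (here: Ubuntu 18.04)
uname -r              # running kernel (here: 4.15.0-45-generic)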

cluster.yml file:

nodes:
  - address: XXX.XXX.0.38
    internal_address: XXX.YYY.0.1
    user: my_login
    role: [worker]
    labels:
        kubernetes.io/os: linux
  - address: XXX.XXX.0.25
    internal_address: XXX.YYY.0.2
    user: my_login
    role: [worker]
    labels:
        kubernetes.io/os: linux
  - address: XXX.XXX.0.7
    internal_address: XXX.YYY.0.3
    user: my_login
    role: [controlplane,worker,etcd]
  - hostname_override: kub02n07
    address: XXX.XXX.0.222
    internal_address: XXX.YYY.0.7
    user: my_login
    role: [controlplane,worker,etcd]
  - hostname_override: kub02n08
    address: XXX.XXX.0.223
    internal_address: XXX.YYY.0.8
    user: my_login
    role: [controlplane,worker,etcd]
  - hostname_override: kub02n09
    address: XXX.XXX.0.221
    internal_address: XXX.YYY.0.9
    user: my_login
    role: [worker]
    labels:
        kubernetes.io/os: linux
  - hostname_override: kub02n10
    address: XXX.XXX.0.41
    internal_address: XXX.YYY.0.10
    user: my_login
    role: [worker]
    labels:
        kubernetes.io/os: linux
  - hostname_override: kub02n11
    address: XXX.XXX.0.42
    internal_address: XXX.YYY.0.11
    user: my_login
    role: [worker]
    labels:
        kubernetes.io/os: linux

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    extra_args:
      feature-gates: CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true
      runtime-config: storage.k8s.io/v1alpha1=true
  kubelet:
    extra_args:
      node-status-update-frequency: "5s"
      runtime-request-timeout: "1h"
      feature-gates: "VolumeSnapshotDataSource=true"
  kube-controller:
    extra_args:
      node-monitor-period: "2s"
      node-monitor-grace-period: "16s"
      pod-eviction-timeout: "30s"
      feature-gates: "VolumeSnapshotDataSource=true"
  scheduler:
    extra_args:
      feature-gates: "VolumeSnapshotDataSource=true"
  kubeproxy:
    extra_args:
      feature-gates: "VolumeSnapshotDataSource=true"

kubernetes_version: "v1.13.12-rancher1-1"
network:
    plugin: calico
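
For context on step 2 of the reproduction below: the worker node removed from cluster.yml is absent from the desiredState in the state file but still present in currentState (address XXX.XXX.0.31, internal address XXX.YYY.0.6). Reconstructed in cluster.yml syntax, the removed entry would have looked roughly like this (a sketch inferred from the state dump, not a copy of the original file):

  - address: XXX.XXX.0.31
    internal_address: XXX.YYY.0.6
    user: my_login
    role: [worker]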

Steps to Reproduce:

  1. Boot up your cluster back in 2018
  2. Remove one node from cluster.yml
  3. Run rke-0.2.11 --debug config --config rancher-cluster.yml

    Results:
    INFO[0000] Initiating Kubernetes cluster                
    DEBU[0000] No DNS provider configured, setting default based on cluster version [1.13.12-rancher1-1] 
    DEBU[0000] Cluster version [1.13.12-rancher1-1] is less than version [1.14.0], using DNS provider [kube-dns] 
    DEBU[0000] DNS provider set to [kube-dns]               
    DEBU[0000] Host: XXX.XXX.0.38 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.25 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.7 has role: controlplane      
    DEBU[0000] Host: XXX.XXX.0.7 has role: worker            
    DEBU[0000] Host: XXX.XXX.0.7 has role: etcd              
    DEBU[0000] Host: XXX.XXX.0.222 has role: controlplane    
    DEBU[0000] Host: XXX.XXX.0.222 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.222 has role: etcd            
    DEBU[0000] Host: XXX.XXX.0.223 has role: controlplane    
    DEBU[0000] Host: XXX.XXX.0.223 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.223 has role: etcd            
    DEBU[0000] Host: XXX.XXX.0.221 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.41 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.42 has role: worker           
    DEBU[0000] [state] previous state found, this is not a legacy cluster 
    INFO[0000] [certificates] Generating admin certificates and kubeconfig 
    DEBU[0000] Writing state file: {
    "desiredState": {
    "rkeConfig": {
      "nodes": [
        {
          "address": "XXX.XXX.0.38",
          "port": "22",
          "internalAddress": "XXX.YYY.0.1",
          "role": [
            "worker"
          ],
          "hostnameOverride": "XXX.XXX.0.38",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.25",
          "port": "22",
          "internalAddress": "XXX.YYY.0.2",
          "role": [
            "worker"
          ],
          "hostnameOverride": "XXX.XXX.0.25",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.7",
          "port": "22",
          "internalAddress": "XXX.YYY.0.3",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "XXX.XXX.0.7",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.222",
          "port": "22",
          "internalAddress": "XXX.YYY.0.7",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "kub02n07",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.223",
          "port": "22",
          "internalAddress": "XXX.YYY.0.8",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "kub02n08",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.221",
          "port": "22",
          "internalAddress": "XXX.YYY.0.9",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n09",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.41",
          "port": "22",
          "internalAddress": "XXX.YYY.0.10",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n10",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.42",
          "port": "22",
          "internalAddress": "XXX.YYY.0.11",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n11",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        }
      ],
      "services": {
        "etcd": {
          "image": "rancher/coreos-etcd:v3.2.24-rancher1",
          "extraArgs": {
            "election-timeout": "5000",
            "heartbeat-interval": "500"
          },
          "snapshot": true,
          "retention": "24h",
          "creation": "6h"
        },
        "kubeApi": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true",
            "runtime-config": "storage.k8s.io/v1alpha1=true"
          },
          "serviceClusterIpRange": "10.43.0.0/16",
          "serviceNodePortRange": "30000-32767"
        },
        "kubeController": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true",
            "node-monitor-grace-period": "16s",
            "node-monitor-period": "2s",
            "pod-eviction-timeout": "30s"
          },
          "clusterCidr": "10.42.0.0/16",
          "serviceClusterIpRange": "10.43.0.0/16"
        },
        "scheduler": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true"
          }
        },
        "kubelet": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true",
            "node-status-update-frequency": "5s",
            "runtime-request-timeout": "1h"
          },
          "clusterDomain": "cluster.local",
          "infraContainerImage": "rancher/pause:3.1",
          "clusterDnsServer": "10.43.0.10"
        },
        "kubeproxy": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true"
          }
        }
      },
      "network": {
        "plugin": "calico",
        "options": {
          "calico_cloud_provider": "none"
        }
      },
      "authentication": {
        "strategy": "x509"
      },
      "systemImages": {
        "etcd": "rancher/coreos-etcd:v3.2.24-rancher1",
        "alpine": "rancher/rke-tools:v0.1.50",
        "nginxProxy": "rancher/rke-tools:v0.1.50",
        "certDownloader": "rancher/rke-tools:v0.1.50",
        "kubernetesServicesSidecar": "rancher/rke-tools:v0.1.50",
        "kubedns": "rancher/k8s-dns-kube-dns:1.15.0",
        "dnsmasq": "rancher/k8s-dns-dnsmasq-nanny:1.15.0",
        "kubednsSidecar": "rancher/k8s-dns-sidecar:1.15.0",
        "kubednsAutoscaler": "rancher/cluster-proportional-autoscaler:1.0.0",
        "coredns": "rancher/coredns-coredns:1.2.6",
        "corednsAutoscaler": "rancher/cluster-proportional-autoscaler:1.0.0",
        "kubernetes": "rancher/hyperkube:v1.13.12-rancher1",
        "flannel": "rancher/coreos-flannel:v0.10.0-rancher1",
        "flannelCni": "rancher/flannel-cni:v0.3.0-rancher1",
        "calicoNode": "rancher/calico-node:v3.4.0",
        "calicoCni": "rancher/calico-cni:v3.4.0",
        "calicoCtl": "rancher/calico-ctl:v2.0.0",
        "canalNode": "rancher/calico-node:v3.4.0",
        "canalCni": "rancher/calico-cni:v3.4.0",
        "canalFlannel": "rancher/coreos-flannel:v0.10.0",
        "weaveNode": "weaveworks/weave-kube:2.5.0",
        "weaveCni": "weaveworks/weave-npc:2.5.0",
        "podInfraContainer": "rancher/pause:3.1",
        "ingress": "rancher/nginx-ingress-controller:nginx-0.25.1-rancher1",
        "ingressBackend": "rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1",
        "metricsServer": "rancher/metrics-server:v0.3.1"
      },
      "sshKeyPath": "~/.ssh/id_rsa",
      "sshAgentAuth": false,
      "authorization": {
        "mode": "rbac"
      },
      "ignoreDockerVersion": false,
      "kubernetesVersion": "v1.13.12-rancher1-1",
      "ingress": {
        "provider": "nginx"
      },
      "clusterName": "local",
      "cloudProvider": {},
      "prefixPath": "/",
      "addonJobTimeout": 30,
      "bastionHost": {},
      "monitoring": {
        "provider": "metrics-server"
      },
      "restore": {},
      "dns": {
        "provider": "kube-dns"
      }
    },
    "certificatesBundle": {
      "kube-admin": {
    CUT HERE
        "name": "kube-admin",
        "commonName": "kube-admin",
        "ouName": "system:masters",
        "envName": "KUBE_ADMIN",
        "path": "/etc/kubernetes/ssl/kube-admin.pem",
        "keyEnvName": "KUBE_ADMIN_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-admin-key.pem",
        "configEnvName": "KUBECFG_KUBE_ADMIN",
        "configPath": "./kube_config_rancher-cluster.yml"
      },
      "kube-apiserver": {
    CUT HERE
        "config": "",
        "name": "kube-apiserver",
        "commonName": "system:kube-apiserver",
        "ouName": "",
        "envName": "KUBE_APISERVER",
        "path": "/etc/kubernetes/ssl/kube-apiserver.pem",
        "keyEnvName": "KUBE_APISERVER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-apiserver-proxy-client": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-apiserver-proxy-client-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-apiserver-proxy-client-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-apiserver-proxy-client.pem\n    client-key: /etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem",
        "name": "kube-apiserver-proxy-client",
        "commonName": "system:kube-apiserver-proxy-client",
        "ouName": "",
        "envName": "KUBE_APISERVER_PROXY_CLIENT",
        "path": "/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem",
        "keyEnvName": "KUBE_APISERVER_PROXY_CLIENT_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem",
        "configEnvName": "KUBECFG_KUBE_APISERVER_PROXY_CLIENT",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-apiserver-proxy-client.yaml"
      },
      "kube-apiserver-requestheader-ca": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-apiserver-requestheader-ca-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-apiserver-requestheader-ca-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem\n    client-key: /etc/kubernetes/ssl/kube-apiserver-requestheader-ca-key.pem",
        "name": "",
        "commonName": "",
        "ouName": "",
        "envName": "KUBE_APISERVER_REQUESTHEADER_CA",
        "path": "/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem",
        "keyEnvName": "KUBE_APISERVER_REQUESTHEADER_CA_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-requestheader-ca-key.pem",
        "configEnvName": "KUBECFG_KUBE_APISERVER_REQUESTHEADER_CA",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-apiserver-requestheader-ca.yaml"
      },
      "kube-ca": {
    CUT HERE
        "config": "",
        "name": "",
        "commonName": "",
        "ouName": "",
        "envName": "KUBE_CA",
        "path": "/etc/kubernetes/ssl/kube-ca.pem",
        "keyEnvName": "KUBE_CA_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-ca-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-controller-manager": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-controller-manager-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-controller-manager-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-controller-manager.pem\n    client-key: /etc/kubernetes/ssl/kube-controller-manager-key.pem",
        "name": "kube-controller-manager",
        "commonName": "system:kube-controller-manager",
        "ouName": "",
        "envName": "KUBE_CONTROLLER_MANAGER",
        "path": "/etc/kubernetes/ssl/kube-controller-manager.pem",
        "keyEnvName": "KUBE_CONTROLLER_MANAGER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-controller-manager-key.pem",
        "configEnvName": "KUBECFG_KUBE_CONTROLLER_MANAGER",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml"
      },
      "kube-etcd-172-19-0-3": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-3",
        "commonName": "system:kube-etcd-172-19-0-3",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_3",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-3.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_3_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-3-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-4": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-4",
        "commonName": "system:kube-etcd-172-19-0-4",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_4",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-4.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_4_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-4-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-5": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-5",
        "commonName": "system:kube-etcd-172-19-0-5",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_5",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-5.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_5_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-5-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-7": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-7",
        "commonName": "system:kube-etcd-172-19-0-7",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_7",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-7.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_7_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-7-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-8": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-8",
        "commonName": "system:kube-etcd-172-19-0-8",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_8",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-8.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_8_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-8-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-node": {
    CUT HERE
        "name": "kube-node",
        "commonName": "system:node",
        "ouName": "system:nodes",
        "envName": "KUBE_NODE",
        "path": "/etc/kubernetes/ssl/kube-node.pem",
        "keyEnvName": "KUBE_NODE_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-node-key.pem",
        "configEnvName": "KUBECFG_KUBE_NODE",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-node.yaml"
      },
      "kube-proxy": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-proxy-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-proxy-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-proxy.pem\n    client-key: /etc/kubernetes/ssl/kube-proxy-key.pem",
        "name": "kube-proxy",
        "commonName": "system:kube-proxy",
        "ouName": "",
        "envName": "KUBE_PROXY",
        "path": "/etc/kubernetes/ssl/kube-proxy.pem",
        "keyEnvName": "KUBE_PROXY_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-proxy-key.pem",
        "configEnvName": "KUBECFG_KUBE_PROXY",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml"
      },
      "kube-scheduler": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-scheduler-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-scheduler-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-scheduler.pem\n    client-key: /etc/kubernetes/ssl/kube-scheduler-key.pem",
        "name": "kube-scheduler",
        "commonName": "system:kube-scheduler",
        "ouName": "",
        "envName": "KUBE_SCHEDULER",
        "path": "/etc/kubernetes/ssl/kube-scheduler.pem",
        "keyEnvName": "KUBE_SCHEDULER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-scheduler-key.pem",
        "configEnvName": "KUBECFG_KUBE_SCHEDULER",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml"
      },
      "kube-service-account-token": {
    CUT HERE
        "config": "",
        "name": "kube-service-account-token",
        "commonName": "kube-service-account-token",
        "ouName": "",
        "envName": "KUBE_SERVICE_ACCOUNT_TOKEN",
        "path": "/etc/kubernetes/ssl/kube-service-account-token.pem",
        "keyEnvName": "KUBE_SERVICE_ACCOUNT_TOKEN_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-service-account-token-key.pem",
        "configEnvName": "",
        "configPath": ""
      }
    }
    },
    "currentState": {
    "rkeConfig": {
      "nodes": [
        {
          "address": "XXX.XXX.0.38",
          "port": "22",
          "internalAddress": "XXX.YYY.0.1",
          "role": [
            "worker"
          ],
          "hostnameOverride": "XXX.XXX.0.38",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.25",
          "port": "22",
          "internalAddress": "XXX.YYY.0.2",
          "role": [
            "worker"
          ],
          "hostnameOverride": "XXX.XXX.0.25",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.7",
          "port": "22",
          "internalAddress": "XXX.YYY.0.3",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "XXX.XXX.0.7",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.31",
          "port": "22",
          "internalAddress": "XXX.YYY.0.6",
          "role": [
            "worker"
          ],
          "hostnameOverride": "XXX.XXX.0.31",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.222",
          "port": "22",
          "internalAddress": "XXX.YYY.0.7",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "kub02n07",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.223",
          "port": "22",
          "internalAddress": "XXX.YYY.0.8",
          "role": [
            "controlplane",
            "worker",
            "etcd"
          ],
          "hostnameOverride": "kub02n08",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa"
        },
        {
          "address": "XXX.XXX.0.221",
          "port": "22",
          "internalAddress": "XXX.YYY.0.9",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n09",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.41",
          "port": "22",
          "internalAddress": "XXX.YYY.0.10",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n10",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        },
        {
          "address": "XXX.XXX.0.42",
          "port": "22",
          "internalAddress": "XXX.YYY.0.11",
          "role": [
            "worker"
          ],
          "hostnameOverride": "kub02n11",
          "user": "my_login",
          "sshKeyPath": "~/.ssh/id_rsa",
          "labels": {
            "kubernetes.io/os": "linux"
          }
        }
      ],
      "services": {
        "etcd": {
          "image": "rancher/coreos-etcd:v3.2.24-rancher1",
          "extraArgs": {
            "election-timeout": "5000",
            "heartbeat-interval": "500"
          },
          "snapshot": true,
          "retention": "24h",
          "creation": "6h"
        },
        "kubeApi": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true",
            "runtime-config": "storage.k8s.io/v1alpha1=true"
          },
          "serviceClusterIpRange": "10.43.0.0/16",
          "serviceNodePortRange": "30000-32767"
        },
        "kubeController": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true",
            "node-monitor-grace-period": "16s",
            "node-monitor-period": "2s",
            "pod-eviction-timeout": "30s"
          },
          "clusterCidr": "10.42.0.0/16",
          "serviceClusterIpRange": "10.43.0.0/16"
        },
        "scheduler": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true"
          }
        },
        "kubelet": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true",
            "node-status-update-frequency": "5s",
            "runtime-request-timeout": "1h"
          },
          "clusterDomain": "cluster.local",
          "infraContainerImage": "rancher/pause:3.1",
          "clusterDnsServer": "10.43.0.10"
        },
        "kubeproxy": {
          "image": "rancher/hyperkube:v1.13.12-rancher1",
          "extraArgs": {
            "feature-gates": "VolumeSnapshotDataSource=true"
          }
        }
      },
      "network": {
        "plugin": "calico",
        "options": {
          "calico_cloud_provider": "none"
        }
      },
      "authentication": {
        "strategy": "x509"
      },
      "systemImages": {
        "etcd": "rancher/coreos-etcd:v3.2.24-rancher1",
        "alpine": "rancher/rke-tools:v0.1.50",
        "nginxProxy": "rancher/rke-tools:v0.1.50",
        "certDownloader": "rancher/rke-tools:v0.1.50",
        "kubernetesServicesSidecar": "rancher/rke-tools:v0.1.50",
        "kubedns": "rancher/k8s-dns-kube-dns:1.15.0",
        "dnsmasq": "rancher/k8s-dns-dnsmasq-nanny:1.15.0",
        "kubednsSidecar": "rancher/k8s-dns-sidecar:1.15.0",
        "kubednsAutoscaler": "rancher/cluster-proportional-autoscaler:1.0.0",
        "coredns": "rancher/coredns-coredns:1.2.6",
        "corednsAutoscaler": "rancher/cluster-proportional-autoscaler:1.0.0",
        "kubernetes": "rancher/hyperkube:v1.13.12-rancher1",
        "flannel": "rancher/coreos-flannel:v0.10.0-rancher1",
        "flannelCni": "rancher/flannel-cni:v0.3.0-rancher1",
        "calicoNode": "rancher/calico-node:v3.4.0",
        "calicoCni": "rancher/calico-cni:v3.4.0",
        "calicoCtl": "rancher/calico-ctl:v2.0.0",
        "canalNode": "rancher/calico-node:v3.4.0",
        "canalCni": "rancher/calico-cni:v3.4.0",
        "canalFlannel": "rancher/coreos-flannel:v0.10.0",
        "weaveNode": "weaveworks/weave-kube:2.5.0",
        "weaveCni": "weaveworks/weave-npc:2.5.0",
        "podInfraContainer": "rancher/pause:3.1",
        "ingress": "rancher/nginx-ingress-controller:nginx-0.25.1-rancher1",
        "ingressBackend": "rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1",
        "metricsServer": "rancher/metrics-server:v0.3.1"
      },
      "sshKeyPath": "~/.ssh/id_rsa",
      "sshAgentAuth": false,
      "authorization": {
        "mode": "rbac"
      },
      "ignoreDockerVersion": false,
      "kubernetesVersion": "v1.13.12-rancher1-1",
      "ingress": {
        "provider": "nginx"
      },
      "clusterName": "local",
      "cloudProvider": {},
      "prefixPath": "/",
      "addonJobTimeout": 30,
      "bastionHost": {},
      "monitoring": {
        "provider": "metrics-server"
      },
      "restore": {},
      "dns": {
        "provider": "kube-dns"
      }
    },
    "certificatesBundle": {
      "kube-admin": {
    CUT HERE
        "name": "kube-admin",
        "commonName": "kube-admin",
        "ouName": "system:masters",
        "envName": "KUBE_ADMIN",
        "path": "/etc/kubernetes/ssl/kube-admin.pem",
        "keyEnvName": "KUBE_ADMIN_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-admin-key.pem",
        "configEnvName": "KUBECFG_KUBE_ADMIN",
        "configPath": "./kube_config_rancher-cluster.yml"
      },
      "kube-apiserver": {
    CUT HERE
        "config": "",
        "name": "kube-apiserver",
        "commonName": "system:kube-apiserver",
        "ouName": "",
        "envName": "KUBE_APISERVER",
        "path": "/etc/kubernetes/ssl/kube-apiserver.pem",
        "keyEnvName": "KUBE_APISERVER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-apiserver-proxy-client": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-apiserver-proxy-client-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-apiserver-proxy-client-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-apiserver-proxy-client.pem\n    client-key: /etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem",
        "name": "kube-apiserver-proxy-client",
        "commonName": "system:kube-apiserver-proxy-client",
        "ouName": "",
        "envName": "KUBE_APISERVER_PROXY_CLIENT",
        "path": "/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem",
        "keyEnvName": "KUBE_APISERVER_PROXY_CLIENT_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem",
        "configEnvName": "KUBECFG_KUBE_APISERVER_PROXY_CLIENT",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-apiserver-proxy-client.yaml"
      },
      "kube-apiserver-requestheader-ca": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-apiserver-requestheader-ca-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-apiserver-requestheader-ca-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem\n    client-key: /etc/kubernetes/ssl/kube-apiserver-requestheader-ca-key.pem",
        "name": "",
        "commonName": "",
        "ouName": "",
        "envName": "KUBE_APISERVER_REQUESTHEADER_CA",
        "path": "/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem",
        "keyEnvName": "KUBE_APISERVER_REQUESTHEADER_CA_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-apiserver-requestheader-ca-key.pem",
        "configEnvName": "KUBECFG_KUBE_APISERVER_REQUESTHEADER_CA",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-apiserver-requestheader-ca.yaml"
      },
      "kube-ca": {
    CUT HERE
        "config": "",
        "name": "",
        "commonName": "",
        "ouName": "",
        "envName": "KUBE_CA",
        "path": "/etc/kubernetes/ssl/kube-ca.pem",
        "keyEnvName": "KUBE_CA_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-ca-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-controller-manager": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-controller-manager-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-controller-manager-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-controller-manager.pem\n    client-key: /etc/kubernetes/ssl/kube-controller-manager-key.pem",
        "name": "kube-controller-manager",
        "commonName": "system:kube-controller-manager",
        "ouName": "",
        "envName": "KUBE_CONTROLLER_MANAGER",
        "path": "/etc/kubernetes/ssl/kube-controller-manager.pem",
        "keyEnvName": "KUBE_CONTROLLER_MANAGER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-controller-manager-key.pem",
        "configEnvName": "KUBECFG_KUBE_CONTROLLER_MANAGER",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml"
      },
      "kube-etcd-172-19-0-3": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-3",
        "commonName": "system:kube-etcd-172-19-0-3",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_3",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-3.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_3_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-3-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-4": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-4",
        "commonName": "system:kube-etcd-172-19-0-4",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_4",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-4.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_4_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-4-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-5": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-5",
        "commonName": "system:kube-etcd-172-19-0-5",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_5",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-5.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_5_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-5-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-7": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-7",
        "commonName": "system:kube-etcd-172-19-0-7",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_7",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-7.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_7_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-7-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-etcd-172-19-0-8": {
    CUT HERE
        "config": "",
        "name": "kube-etcd-172-19-0-8",
        "commonName": "system:kube-etcd-172-19-0-8",
        "ouName": "",
        "envName": "KUBE_ETCD_172_19_0_8",
        "path": "/etc/kubernetes/ssl/kube-etcd-172-19-0-8.pem",
        "keyEnvName": "KUBE_ETCD_172_19_0_8_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-etcd-172-19-0-8-key.pem",
        "configEnvName": "",
        "configPath": ""
      },
      "kube-node": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-node-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-node-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-node.pem\n    client-key: /etc/kubernetes/ssl/kube-node-key.pem",
        "name": "kube-node",
        "commonName": "system:node",
        "ouName": "system:nodes",
        "envName": "KUBE_NODE",
        "path": "/etc/kubernetes/ssl/kube-node.pem",
        "keyEnvName": "KUBE_NODE_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-node-key.pem",
        "configEnvName": "KUBECFG_KUBE_NODE",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-node.yaml"
      },
      "kube-proxy": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-proxy-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-proxy-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-proxy.pem\n    client-key: /etc/kubernetes/ssl/kube-proxy-key.pem",
        "name": "kube-proxy",
        "commonName": "system:kube-proxy",
        "ouName": "",
        "envName": "KUBE_PROXY",
        "path": "/etc/kubernetes/ssl/kube-proxy.pem",
        "keyEnvName": "KUBE_PROXY_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-proxy-key.pem",
        "configEnvName": "KUBECFG_KUBE_PROXY",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml"
      },
      "kube-scheduler": {
    CUT HERE
        "config": "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem\n    server: \"https://127.0.0.1:6443\"\n  name: \"local\"\ncontexts:\n- context:\n    cluster: \"local\"\n    user: \"kube-scheduler-local\"\n  name: \"local\"\ncurrent-context: \"local\"\nusers:\n- name: \"kube-scheduler-local\"\n  user:\n    client-certificate: /etc/kubernetes/ssl/kube-scheduler.pem\n    client-key: /etc/kubernetes/ssl/kube-scheduler-key.pem",
        "name": "kube-scheduler",
        "commonName": "system:kube-scheduler",
        "ouName": "",
        "envName": "KUBE_SCHEDULER",
        "path": "/etc/kubernetes/ssl/kube-scheduler.pem",
        "keyEnvName": "KUBE_SCHEDULER_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-scheduler-key.pem",
        "configEnvName": "KUBECFG_KUBE_SCHEDULER",
        "configPath": "/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml"
      },
      "kube-service-account-token": {
    CUT HERE
        "config": "",
        "name": "kube-service-account-token",
        "commonName": "kube-service-account-token",
        "ouName": "",
        "envName": "KUBE_SERVICE_ACCOUNT_TOKEN",
        "path": "/etc/kubernetes/ssl/kube-service-account-token.pem",
        "keyEnvName": "KUBE_SERVICE_ACCOUNT_TOKEN_KEY",
        "keyPath": "/etc/kubernetes/ssl/kube-service-account-token-key.pem",
        "configEnvName": "",
        "configPath": ""
      }
    }
    }
    } 
    INFO[0000] Successfully Deployed state file at [./rancher-cluster.rkestate] 
    DEBU[0000] Host: XXX.XXX.0.38 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.25 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.7 has role: controlplane      
    DEBU[0000] Host: XXX.XXX.0.7 has role: worker            
    DEBU[0000] Host: XXX.XXX.0.7 has role: etcd              
    DEBU[0000] Host: XXX.XXX.0.222 has role: controlplane    
    DEBU[0000] Host: XXX.XXX.0.222 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.222 has role: etcd            
    DEBU[0000] Host: XXX.XXX.0.223 has role: controlplane    
    DEBU[0000] Host: XXX.XXX.0.223 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.223 has role: etcd            
    DEBU[0000] Host: XXX.XXX.0.221 has role: worker          
    DEBU[0000] Host: XXX.XXX.0.41 has role: worker           
    DEBU[0000] Host: XXX.XXX.0.42 has role: worker           
    INFO[0000] Building Kubernetes cluster                  
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.7]  
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.223] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.221] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.7] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.42] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.38] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.223] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.222] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.25] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.42] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.25] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.222] 
    INFO[0000] [dialer] Setup tunnel for host [XXX.XXX.0.41] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.41] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.221] 
    DEBU[0000] Connecting to Docker API for host [XXX.XXX.0.38] 
    DEBU[0000] Docker Info found: types.Info{ID:"DFJO:55YS:6XWL:OOVX:MPG4:UMFI:HNLG:NVEI:C3XC:FQHX:3TVW:JA6R", Containers:228, ContainersRunning:182, ContainersPaused:0, ContainersStopped:46, Images:151, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:true, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:1039, OomKillDisable:true, NGoroutines:810, SystemTime:"2021-01-14T16:11:31.793249536+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-101-generic", OperatingSystem:"Ubuntu 18.04.4 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc0002a82a0), NCPU:40, MemTotal:135152173056, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n11", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"7ad184331fa3e55e52b890ea95e65ba581ae3429", Expected:"7ad184331fa3e55e52b890ea95e65ba581ae3429"}, RuncCommit:types.Commit{ID:"dc9208a3303feef5b3839f4323d9beb36df0a9dd", Expected:"dc9208a3303feef5b3839f4323d9beb36df0a9dd"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0000] Docker Info found: types.Info{ID:"B3UE:W3UZ:AOML:J3HM:OA3S:TAKG:B2AH:3RDA:ACQA:BMSW:5L3S:WYKI", Containers:151, ContainersRunning:130, ContainersPaused:0, ContainersStopped:21, Images:62, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:false, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:950, OomKillDisable:true, NGoroutines:707, SystemTime:"2021-01-14T16:11:31.80264944+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-88-generic", OperatingSystem:"Ubuntu 18.04.4 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc000336230), NCPU:40, MemTotal:135152189440, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n09", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"b34a5c8af56e510852c35414db4c1f4fa6172339", Expected:"b34a5c8af56e510852c35414db4c1f4fa6172339"}, RuncCommit:types.Commit{ID:"3e425f80a8c931f88e6d94a8c831b9d5aa481657", Expected:"3e425f80a8c931f88e6d94a8c831b9d5aa481657"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0000] Docker Info found: types.Info{ID:"ZDEO:LQYQ:GDA2:A26T:AEBI:PVVM:AFLP:YOUN:JHYI:IRWG:C4RB:XNMK", Containers:200, ContainersRunning:177, ContainersPaused:0, ContainersStopped:23, Images:64, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:true, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:1018, OomKillDisable:true, NGoroutines:819, SystemTime:"2021-01-14T16:11:31.797212206+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-101-generic", OperatingSystem:"Ubuntu 18.04.4 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc00018a0e0), NCPU:40, MemTotal:135152181248, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n10", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"7ad184331fa3e55e52b890ea95e65ba581ae3429", Expected:"7ad184331fa3e55e52b890ea95e65ba581ae3429"}, RuncCommit:types.Commit{ID:"dc9208a3303feef5b3839f4323d9beb36df0a9dd", Expected:"dc9208a3303feef5b3839f4323d9beb36df0a9dd"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0000] Docker Info found: types.Info{ID:"CQRC:BMRY:WDL5:YWRV:GOBG:OG7S:GWIN:DZUL:TH7Q:PFJU:5NMZ:7OFD", Containers:120, ContainersRunning:111, ContainersPaused:0, ContainersStopped:9, Images:94, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:true, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:630, OomKillDisable:true, NGoroutines:487, SystemTime:"2021-01-14T16:11:31.788912484+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-88-generic", OperatingSystem:"Ubuntu 18.04.4 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc00018a070), NCPU:40, MemTotal:135152185344, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n07", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"b34a5c8af56e510852c35414db4c1f4fa6172339", Expected:"b34a5c8af56e510852c35414db4c1f4fa6172339"}, RuncCommit:types.Commit{ID:"3e425f80a8c931f88e6d94a8c831b9d5aa481657", Expected:"3e425f80a8c931f88e6d94a8c831b9d5aa481657"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0000] Docker Info found: types.Info{ID:"A5MO:TXWG:CXWF:4NGT:DIL4:KSTL:NIQ6:6Q2Y:EPRI:QVSP:NZHF:HSYC", Containers:138, ContainersRunning:122, ContainersPaused:0, ContainersStopped:16, Images:54, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:true, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:685, OomKillDisable:true, NGoroutines:529, SystemTime:"2021-01-14T16:11:31.822365625+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-88-generic", OperatingSystem:"Ubuntu 18.04.4 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc000324150), NCPU:40, MemTotal:135152185344, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n08", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"b34a5c8af56e510852c35414db4c1f4fa6172339", Expected:"b34a5c8af56e510852c35414db4c1f4fa6172339"}, RuncCommit:types.Commit{ID:"3e425f80a8c931f88e6d94a8c831b9d5aa481657", Expected:"3e425f80a8c931f88e6d94a8c831b9d5aa481657"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0001] Docker Info found: types.Info{ID:"R3WT:TEQP:3FBW:GZ7V:WBYC:UHV5:VP5O:JEXN:K55A:PYJ5:I35H:OTLE", Containers:225, ContainersRunning:195, ContainersPaused:0, ContainersStopped:30, Images:112, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:false, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:1138, OomKillDisable:true, NGoroutines:858, SystemTime:"2021-01-14T16:11:31.97155096+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-45-generic", OperatingSystem:"Ubuntu 18.04.1 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc000324230), NCPU:32, MemTotal:236620808192, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n03", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"894b81a4b802e4eb2a91d1ce216b8817763c29fb", Expected:"894b81a4b802e4eb2a91d1ce216b8817763c29fb"}, RuncCommit:types.Commit{ID:"425e105d5a03fabd737a126ad93d62a9eeede87f", Expected:"425e105d5a03fabd737a126ad93d62a9eeede87f"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0001] Docker Info found: types.Info{ID:"27D7:MLZA:VDKU:TTEP:3YQQ:TSJT:3JJL:G3LB:M7GP:YV6J:KC7B:T2G4", Containers:235, ContainersRunning:212, ContainersPaused:0, ContainersStopped:23, Images:130, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:true, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:1214, OomKillDisable:true, NGoroutines:907, SystemTime:"2021-01-14T16:11:32.851449981+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-51-generic", OperatingSystem:"Ubuntu 18.04.2 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc00021a0e0), NCPU:40, MemTotal:270348169216, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n01", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"bb71b10fd8f58240ca47fbb579b9d1028eea7c84", Expected:"bb71b10fd8f58240ca47fbb579b9d1028eea7c84"}, RuncCommit:types.Commit{ID:"2b18fe1d885ee5083ef9f0838fee39b62d653e30", Expected:"2b18fe1d885ee5083ef9f0838fee39b62d653e30"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0002] Docker Info found: types.Info{ID:"4XMD:YVST:63RT:UE4U:QPVN:ZWLO:UL3Z:CK5B:7VMM:KUCV:EJBB:GZYK", Containers:263, ContainersRunning:198, ContainersPaused:0, ContainersStopped:65, Images:98, Driver:"overlay2", DriverStatus:[][2]string{[2]string{"Backing Filesystem", "xfs"}, [2]string{"Supports d_type", "true"}, [2]string{"Native Overlay Diff", "true"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string{"local"}, Network:[]string{"bridge", "host", "macvlan", "null", "overlay"}, Authorization:[]string(nil), Log:[]string{"awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "local", "logentries", "splunk", "syslog"}}, MemoryLimit:true, SwapLimit:false, KernelMemory:true, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:true, CPUSet:true, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:1178, OomKillDisable:true, NGoroutines:1021, SystemTime:"2021-01-14T16:11:32.940139551+01:00", LoggingDriver:"json-file", CgroupDriver:"cgroupfs", NEventsListener:0, KernelVersion:"4.15.0-51-generic", OperatingSystem:"Ubuntu 18.04.2 LTS", OSType:"linux", Architecture:"x86_64", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc00021a1c0), NCPU:40, MemTotal:270348169216, GenericResources:[]swarm.GenericResource(nil), DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"kub02n02", Labels:[]string{}, ExperimentalBuild:false, ServerVersion:"18.09.9", ClusterStore:"", ClusterAdvertise:"", Runtimes:map[string]types.Runtime{"runc":types.Runtime{Path:"runc", Args:[]string(nil)}}, DefaultRuntime:"runc", Swarm:swarm.Info{NodeID:"", NodeAddr:"", LocalNodeState:"inactive", ControlAvailable:false, Error:"", RemoteManagers:[]swarm.Peer(nil), Nodes:0, Managers:0, Cluster:(*swarm.ClusterInfo)(nil)}, LiveRestoreEnabled:false, Isolation:"", InitBinary:"docker-init", ContainerdCommit:types.Commit{ID:"bb71b10fd8f58240ca47fbb579b9d1028eea7c84", Expected:"bb71b10fd8f58240ca47fbb579b9d1028eea7c84"}, RuncCommit:types.Commit{ID:"2b18fe1d885ee5083ef9f0838fee39b62d653e30", Expected:"2b18fe1d885ee5083ef9f0838fee39b62d653e30"}, InitCommit:types.Commit{ID:"fec3683", Expected:"fec3683"}, SecurityOptions:[]string{"apparmor", "seccomp"}} 
    DEBU[0002] Host: XXX.XXX.0.38 has role: worker           
    DEBU[0002] Host: XXX.XXX.0.25 has role: worker           
    DEBU[0002] Host: XXX.XXX.0.7 has role: controlplane      
    DEBU[0002] Host: XXX.XXX.0.7 has role: worker            
    DEBU[0002] Host: XXX.XXX.0.7 has role: etcd              
    DEBU[0002] Host: XXX.XXX.0.31 has role: worker           
    DEBU[0002] Host: XXX.XXX.0.222 has role: controlplane    
    DEBU[0002] Host: XXX.XXX.0.222 has role: worker          
    DEBU[0002] Host: XXX.XXX.0.222 has role: etcd            
    DEBU[0002] Host: XXX.XXX.0.223 has role: controlplane    
    DEBU[0002] Host: XXX.XXX.0.223 has role: worker          
    DEBU[0002] Host: XXX.XXX.0.223 has role: etcd            
    DEBU[0002] Host: XXX.XXX.0.221 has role: worker          
    DEBU[0002] Host: XXX.XXX.0.41 has role: worker           
    DEBU[0002] Host: XXX.XXX.0.42 has role: worker           
    INFO[0002] [network] No hosts added existing cluster, skipping port check 
    INFO[0002] [certificates] kube-apiserver certificate changed, force deploying certs 
    INFO[0002] [certificates] Deploying kubernetes certificates to Cluster nodes 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.25] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.42] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.221] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.41] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.38] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.222] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.7] 
    DEBU[0002] Checking if container [cert-deployer] is running on host [XXX.XXX.0.223] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.221] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.221] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.222] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.222] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.221] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.221] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.222] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.222] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.223] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.223] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.223] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.223] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.38] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.38] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.41] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.41] 
    DEBU[0002] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.42] 
    DEBU[0002] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.42] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.42] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.42] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.41] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.41] 
    DEBU[0002] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.38] 
    DEBU[0002] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.38] 
    DEBU[0003] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.25] 
    DEBU[0003] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.25] 
    DEBU[0003] [certificates] Checking image [rancher/rke-tools:v0.1.50] on host [XXX.XXX.0.7] 
    DEBU[0003] Checking if image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.7] 
    DEBU[0003] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.25] 
    DEBU[0003] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.25] 
    DEBU[0003] Image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.7] 
    DEBU[0003] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.7] 
    DEBU[0003] [certificates] Successfully started Certificate deployer container: 10f83d877ad2b9aa3c9c51f4cd50994cd2315ab89b3f332b0b1df39580027286 
    DEBU[0003] Checking if container [cert-deployer] is running on host [XXX.XXX.0.221] 
    DEBU[0004] [certificates] Successfully started Certificate deployer container: a0c1e7cbebb35570f45ac1f899bba66bdcf18ed705a0ec2f62db7acd311e8c04 
    DEBU[0004] Checking if container [cert-deployer] is running on host [XXX.XXX.0.41] 
    DEBU[0004] [certificates] Successfully started Certificate deployer container: 8442ef6faab281c810c13b5afbeaf356f813fc2ecdbf2b2a8952ffa33d795efc 
    DEBU[0004] Checking if container [cert-deployer] is running on host [XXX.XXX.0.222] 
    DEBU[0004] [certificates] Successfully started Certificate deployer container: 0789a793941e474cc46fa3e4853697586d6c025aa28009dfd88917d5173a8c46 
    DEBU[0004] Checking if container [cert-deployer] is running on host [XXX.XXX.0.42] 
    DEBU[0004] [certificates] Successfully started Certificate deployer container: 4695ca19b6d7adc0d3c6d4f47af94be3e5d5cbb131c7d314519b8f0beba98c25 
    DEBU[0004] Checking if container [cert-deployer] is running on host [XXX.XXX.0.223] 
    DEBU[0004] [certificates] Successfully started Certificate deployer container: 2d69c2b2a0a5e7c7f119255914763e3aa812de905ed55d7e254250585dd86e31 
    DEBU[0004] Checking if container [cert-deployer] is running on host [XXX.XXX.0.38] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.221] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.222] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.41] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.223] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.42] 
    DEBU[0009] Checking if container [cert-deployer] is running on host [XXX.XXX.0.38] 
    FATA[0053] [Failed to start Certificates deployer container on host [XXX.XXX.0.25]: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]

I always get this error on XXX.XXX.0.25 or on XXX.XXX.0.7.

My daemon.json:

{
  "log-level": "info",
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "1g",
    "max-file": "3" 
  }
}

I can run docker -H ssh://XXX.XXX.0.25 ps without any problems, and it takes about 2 seconds. I cannot see any interesting errors in the Docker logs or in dmesg.

superseb commented 3 years ago

This is probably hitting a timeout on the daemon; I have to check where that is set. In newer versions (0.3.x and up) we added Docker retries when we hit errors, which might help in this situation.

WMP commented 3 years ago

So if I understand correctly, you want to check where this timeout is set, and I can increase it in the source, yes? I cannot use rke 0.3 because I don't want to upgrade this k8s cluster while Calico is currently 0/1 ready.

WMP commented 3 years ago

Isn't it this line: https://github.com/rancher/rke/blob/v0.2.11/hosts/dialer.go#L16 ?

WMP commented 3 years ago

We changed this timeout to 600 and rke up executed successfully (a sketch of the change follows the log below). Is it possible to make this parameter configurable?

DEBU[0057] [certificates] Successfully started Certificate deployer container: 5cf12890e6a19e1354bb80adabfaa076a6566dbe84359efe0370a9065ca82335  
DEBU[0057] Checking if container [cert-deployer] is running on host [XXX.XXX.0.25]  
DEBU[0058] [certificates] Successfully started Certificate deployer container: 3a286802e66e3f69b4f1b0b35057a6027d31c63a34800191465610672569a26f  
DEBU[0058] Checking if container [cert-deployer] is running on host [XXX.XXX.0.7]
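
A minimal sketch of the kind of one-line patch described above, assuming the constant at hosts/dialer.go#L16 is the Docker dialer timeout expressed in seconds (the exact constant name and surrounding code may differ in the real file):

```go
// hosts/dialer.go (sketch; the value 600 is the local patch described above,
// not upstream RKE code)
package hosts

const (
	// DockerDialerTimeout is the timeout, in seconds, for dialing the Docker
	// daemon on a node. Raised from the stock 50 to 600 so heavily loaded
	// nodes have time to answer.
	DockerDialerTimeout = 600
)
```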
superseb commented 3 years ago

Can you confirm the timeout is hit when you query the Docker daemon while rke up is running? 50 seconds is already quite a lot, but I want to confirm that it is not enough; making it configurable adds yet another option to consider. It might help to raise the default timeout, but I need some more info for that. I can also try to reproduce it myself, but that will take a bit more time.
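
One way to answer that question is to time a plain Info call against the daemon on the affected node while it is under load. A minimal sketch using the Docker Go SDK, with the same 50-second cap discussed in this thread (the use of the SDK here is illustrative, not part of rke itself):

```go
// time_docker_info.go: measure how long the local Docker daemon takes to
// answer an Info request, with a 50-second deadline matching the timeout
// discussed above.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Second)
	defer cancel()

	start := time.Now()
	_, err = cli.Info(ctx)
	fmt.Printf("docker info answered after %s (err: %v)\n", time.Since(start), err)
}
```

If this regularly takes close to (or over) 50 seconds on the affected node, the fixed dialer timeout is the likely culprit.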

WMP commented 3 years ago

How can I reproduce, via docker -H, what is being done in the step "Successfully started Certificate deployer container"?

If DEBU[0057] is seconds, this means that rke got the result from Docker after more than 50 seconds. In the log with the timeout, the previous entry for this host is from second 0003:

DEBU[0003] [certificates] No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.25] 

and the timeout occurs at

FATA[0053] [Failed to start Certificates deployer container on host [XXX.XXX.0.25]: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]

I don't have the whole log from the successful run, but I suspect that the entry "No pull necessary, image [rancher/rke-tools:v0.1.50] exists on host [XXX.XXX.0.25]" had the same timestamp, 0003, so in the successful deployment the step "DEBU[0057] Checking if container [cert-deployer] is running on host [XXX.XXX.0.25]" took 57 - 3 = 54 seconds.

You should also know that yesterday this node was under a huge load (screenshot attached).

superseb commented 3 years ago

Right, so I'm wondering if it's worth making it configurable: with the current version the operation is retried, so once the node recovers it would still work. And 50 seconds is already quite a long timeout.

WMP commented 3 years ago

I think it is worth making it possible to set this timeout from the CLI. Imagine that I need to add a new node because my old nodes are under very heavy load, and I cannot do that because the static timeout is too short. When rke retries, it closes the current connection to the Docker daemon and tries again with the same timeout. If you really want to rely on retries, I think the timeout should be increased on every retry.
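
A minimal sketch of the "grow the timeout on every retry" idea (a hypothetical helper written for illustration, not RKE's actual retry code; durations are scaled down so the demo finishes quickly):

```go
// retry.go: each attempt gets a longer deadline than the previous one, so a
// temporarily overloaded Docker daemon eventually gets enough time to answer.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// withGrowingTimeout runs op up to attempts times, doubling the deadline on
// every retry. Hypothetical helper used only to illustrate the suggestion.
func withGrowingTimeout(attempts int, base time.Duration, op func(ctx context.Context) error) error {
	timeout := base
	var err error
	for i := 1; i <= attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		err = op(ctx)
		cancel()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed after %s: %v\n", i, timeout, err)
		timeout *= 2 // increase the timeout on every retry
	}
	return err
}

func main() {
	// Simulate a slow daemon that needs ~3s to respond: the first attempt
	// (2s deadline) times out, the second attempt (4s deadline) succeeds.
	slowOp := func(ctx context.Context) error {
		select {
		case <-time.After(3 * time.Second):
			return nil
		case <-ctx.Done():
			return errors.New("docker daemon did not answer in time")
		}
	}
	fmt.Println("result:", withGrowingTimeout(3, 2*time.Second, slowOp))
}
```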

superseb commented 3 years ago

I think that if we make this configurable, it will not solve everything, as it will then hang or break on another component that has a fixed timeout or retry. So if we are going to fix it, we need to test on a host under heavy load and verify that it can survive all steps of the process.

stale[bot] commented 3 years ago

This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.