sguyennet / terraform-vsphere-kubespray

Deploy a Kubernetes HA cluster on VMware vSphere
https://blog.inkubate.io/install-and-manage-automatically-a-kubernetes-cluster-on-vmware-vsphere-with-terraform-and-kubespray/
Apache License 2.0

ha_loadbalance_vip failure #8

Closed: RELATO closed this issue 5 years ago

RELATO commented 5 years ago

Hello,

First things first, GREAT JOB! Congrats!

I am facing an error in the final step (cannot connect to 192.168.1.113:6443). This comes from vm_haproxy_vip = "192.168.1.113" in terraform.tfvars. I cannot ping that IP, so I think there is some kind of mistake in my config:

vm_master_ips = {
  "0" = "192.168.1.110"
  "1" = "192.168.1.111"
  "2" = "192.168.1.112"
}

vm_worker_ips = {
  "0" = "192.168.1.120"
  "1" = "192.168.1.121"
  "2" = "192.168.1.122"
}

vm_haproxy_vip = "192.168.1.113"

vm_haproxy_ips = {
  "0" = "192.168.1.222"
  "1" = "192.168.1.221"
}

Thank you in advance

sguyennet commented 5 years ago

Hi,

Which Linux distribution are you using?

The VIP should be assigned to your first HAProxy server. Try to SSH to 192.168.1.222 and run "ip a" to list your IP addresses. Check that your interface is called ens192 and that both 192.168.1.222 and 192.168.1.113 are assigned to it.

If the VIP is assigned to the ens192 interface, check whether port 6443 is open with the command "sudo netstat -panot | grep :6443".

If the port is open, verify that the firewall on the HAProxy machine is disabled or allows port 6443.
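
Put together, the checks on the first HAProxy machine would look roughly like this (a minimal sketch, assuming the interface name ens192 and the addresses from this thread):

$ ip addr show ens192                    # 192.168.1.113 should appear as a second address on ens192
$ sudo netstat -panot | grep :6443       # haproxy should be LISTENING on 6443
$ sudo iptables -L -n | grep 6443        # no rule should block 6443 (or the firewall is disabled)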

Regards,

Simon.

RELATO commented 5 years ago

Hi,

There was no second IP on that machine. I am using Ubuntu 16.04, as below:

relato@k8s-kubespray-haproxy-0:~$ ifconfig -a
ens192    Link encap:Ethernet  HWaddr 00:50:56:bd:80:cb
          inet addr:192.168.1.222  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:febd:80cb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10406 errors:0 dropped:18 overruns:0 frame:0
          TX packets:6338 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13572139 (13.5 MB)  TX bytes:473250 (473.2 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11840 (11.8 KB)  TX bytes:11840 (11.8 KB)

relato@k8s-kubespray-haproxy-0:~$ uname -a
Linux k8s-kubespray-haproxy-0 4.15.0-15-generic #16~16.04.1-Ubuntu SMP Thu Apr 5 12:19:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

I added the following lines to /etc/network/interfaces:

auto ens192:1
iface ens192:1 inet static
    address 192.168.1.113

Then I restarted networking (sudo service networking restart) and will try to complete the process.

RELATO commented 5 years ago

Hi,

Unfortunately fixing the IP was not enough to solve the problem.

NO MORE HOSTS LEFT *****
        to retry, use: --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

PLAY RECAP *****
k8s-kubespray-master-0     : ok=313  changed=86   unreachable=0    failed=1
k8s-kubespray-master-1     : ok=286  changed=82   unreachable=0    failed=0
k8s-kubespray-master-2     : ok=287  changed=82   unreachable=0    failed=0
k8s-kubespray-worker-0     : ok=226  changed=62   unreachable=0    failed=0
k8s-kubespray-worker-1     : ok=226  changed=62   unreachable=0    failed=0
k8s-kubespray-worker-2     : ok=226  changed=62   unreachable=0    failed=0
localhost                  : ok=1    changed=0    unreachable=0    failed=0

Tuesday 26 March 2019 17:06:17 -0300 (0:03:36.726)       0:33:10.730 *****

download : file_download | Download item ------------------------------ 372.97s
kubernetes/master : kubeadm | Initialize first master ----------------- 216.73s
bootstrap-os : Bootstrap | Install python 2.x and pip ----------------- 197.13s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 99.30s
container-engine/docker : ensure docker packages are installed --------- 96.59s
download : file_download | Download item ------------------------------- 85.99s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 74.64s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 64.87s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 44.54s
container-engine/docker : ensure docker-ce repository is enabled ------- 33.43s
kubernetes/node : install | Copy kubelet binary from download dir ------ 31.55s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 30.17s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 27.22s
download : file_download | Download item ------------------------------- 26.79s
kubernetes/preinstall : Install packages requirements ------------------ 19.84s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.56s
kubernetes/master : install | Set kubectl binary permissions ----------- 16.34s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.68s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 14.84s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 14.24s

Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.

sguyennet commented 5 years ago

Hi,

You should not set the IP 192.168.1.113 in the /etc/network/interfaces configuration file. It is a virtual IP that is configured by keepalived. If everything goes well, you should not have to modify anything on the machine by hand.

You should destroy your deployment and create a fresh new one. Then check on the two HAProxy machines that you have "net.ipv4.ip_nonlocal_bind=1" at the end of the /etc/sysctl.conf configuration file. This setting allows HAProxy to bind to a virtual IP that is not currently assigned to the network card.

Check that keepalived is running with "ps aux | grep keepalived", check that the virtual IP is correct in /etc/keepalived/keepalived.conf, and check the keepalived logs with "journalctl -u keepalived" for any issue.
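
Roughly, the sequence would be (a sketch using the commands above and the VIP from this thread):

# from the directory holding the Terraform configuration (terraform-vsphere-kubespray):
$ terraform destroy
$ terraform apply

# then, on each HAProxy machine:
$ grep ip_nonlocal_bind /etc/sysctl.conf                        # expect: net.ipv4.ip_nonlocal_bind=1
$ ps aux | grep [k]eepalived                                    # keepalived should be running
$ grep -A 3 virtual_ipaddress /etc/keepalived/keepalived.conf   # should contain 192.168.1.113
$ journalctl -u keepalived                                      # look for VRRP state changes or errors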

Regards,

Simon.

RELATO commented 5 years ago

Hi,

Things are evolving! I manually installed keepalived on both HAProxy machines (IPs .222 and .221) with:

sudo apt-get install linux-headers-$(uname -r)
sudo apt-get install keepalived
sudo vim /etc/keepalived/keepalived.conf

global_defs {
    lvs_id haproxy_DH
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_01 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 101

    virtual_ipaddress {
        192.168.1.113
    }

    track_script {
        check_haproxy
    }
}

For the second HAProxy I used the same config but with "priority 100", i.e. lower than 101.

Rebooted and tested again (note: there is no firewall):

relato@k8s-kubespray-haproxy-0:~$ ip addr show ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:bd:e4:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.222/24 brd 192.168.1.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet 192.168.1.113/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:febd:e459/64 scope link
       valid_lft forever preferred_lft forever

relato@k8s-kubespray-haproxy-0:~$ sudo netstat -panot | grep :6443
tcp        0      0 192.168.1.113:6443      0.0.0.0:*               LISTEN      1532/haproxy     off (0.00/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:59998     TIME_WAIT   -                timewait (13.70/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:60060     TIME_WAIT   -                timewait (32.45/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:59986     TIME_WAIT   -                timewait (11.70/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:60038     TIME_WAIT   -                timewait (19.70/0/0)

relato@k8s-kubespray-haproxy-0:~$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Trying again:


home@Macintosh  ~/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray  ➦ 4167807f cd ansible/kubespray && ansible-playbook -i ../../config/hosts.ini -b -u relato -e 'ansible_ssh_pass=PassWord ansible_become_pass=PassWord kube_version=v1.12.5' -T 300 -v cluster.yml --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

fatal: [k8s-kubespray-master-0]: FAILED! => {"attempts": 10, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "alpha", "phase", "addon", "kube-proxy", "--config=/etc/kubernetes/kubeadm-config.v1alpha3.yaml"], "delta": "0:00:00.032985", "end": "2019-03-27 10:39:30.911076", "msg": "non-zero return code", "rc": 1, "start": "2019-03-27 10:39:30.878091", "stderr": "error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF", "stderr_lines": ["error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF"], "stdout": "", "stdout_lines": []}

Even though port 6443 is LISTENING, it seems there is a problem there, given all the TIME_WAIT connections.

sguyennet commented 5 years ago

Hi, keepalived should be installed and configured automatically by Ansible (terraform-vsphere-kubespray/ansible/haproxy/haproxy.yml). If keepalived was not installed automatically on your two HAProxy machines, something went wrong. I guess HAProxy is not installed on your machines either? If that is the case, it means the Ansible playbook failed to execute.

Are you using an SSH key or an SSH password to log in to your virtual machines? Do you need a password when you run a sudo command on the machines?
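
A quick way to check whether that playbook got as far as installing the packages (a small sketch to run on both HAProxy machines):

$ dpkg -l | grep -E 'haproxy|keepalived'   # both packages should be listed if the playbook ran
$ systemctl status haproxy keepalived      # both services should be active (running)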

RELATO commented 5 years ago

Hi,

I am using an SSH password to log in to those virtual machines. I don't need a password when I use sudo commands.

In terraform.tfvars:

vm_user = "relato"
vm_password = "PassWord"
vm_privilege_password = "PassWord"

Should I use vm_privilege_password = "" instead ?

Thank you

sguyennet commented 5 years ago

Ok, that is what is causing the installation of keepalived and HAProxy to fail. Ansible expects to get a prompt for the password when a command is executed with sudo. In your case this prompt is never presented to Ansible. You should remove "[YOUR_USERNAME] ALL=(ALL) NOPASSWD: ALL" from the /etc/sudoers file in your Ubuntu template and then redeploy everything.
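
For reference, the change in the template would look roughly like this (edit with visudo; substitute your actual username):

# Line to remove from /etc/sudoers (passwordless sudo for the deployment user):
[YOUR_USERNAME] ALL=(ALL) NOPASSWD: ALL

# Ubuntu's stock sudoers normally already grants password-protected sudo via the sudo group,
# so removing the NOPASSWD line is enough:
%sudo   ALL=(ALL:ALL) ALL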

RELATO commented 5 years ago

Ok

"RELATO ALL=(ALL) NOPASSWD: ALL" line was removed from the /etc/sudoers file in the Ubuntu template and then everything was redeployed using such template.

Now root password is asking when applying sudo commands. keepalive was installed successfully but timewait still there:

relato@k8s-kubespray-haproxy-0:~$ sudo netstat -panot | grep :6443
[sudo] password for relato:
tcp        0      0 192.168.1.113:6443      0.0.0.0:*               LISTEN      4485/haproxy     off (0.00/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44024     TIME_WAIT   -                timewait (36.16/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44086     TIME_WAIT   -                timewait (54.90/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43976     TIME_WAIT   -                timewait (19.65/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43924     TIME_WAIT   -                timewait (3.14/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43972     TIME_WAIT   -                timewait (19.65/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44094     TIME_WAIT   -                timewait (55.91/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43946     TIME_WAIT   -                timewait (15.64/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43944     TIME_WAIT   -                timewait (15.64/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43922     TIME_WAIT   -                timewait (3.14/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44092     TIME_WAIT   -                timewait (55.64/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44026     TIME_WAIT   -                timewait (36.16/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43896     TIME_WAIT   -                timewait (0.00/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:44030     TIME_WAIT   -                timewait (37.16/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:43894     TIME_WAIT   -                timewait (0.00/0/0)
fatal: [k8s-kubespray-master-0]: FAILED! => {"attempts": 10, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "alpha", "phase", "addon", "kube-proxy", "--config=/etc/kubernetes/kubeadm-config.v1alpha3.yaml"], "delta": "0:00:00.048301", "end": "2019-03-27 13:43:42.986489", "msg": "non-zero return code", "rc": 1, "start": "2019-03-27 13:43:42.938188", "stderr": "error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF", "stderr_lines": ["error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF"], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT **************************************************************************************************************************************************************
    to retry, use: --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

PLAY RECAP **********************************************************************************************************************************************************************
k8s-kubespray-master-0     : ok=275  changed=11   unreachable=0    failed=1
k8s-kubespray-master-1     : ok=252  changed=9    unreachable=0    failed=0
k8s-kubespray-master-2     : ok=252  changed=9    unreachable=0    failed=0

Wednesday 27 March 2019  13:43:43 -0300 (0:00:51.934)       0:05:13.961 *******
===============================================================================
kubernetes/master : kubeadm | Enable kube-proxy ------------------------------------------------------------------------------------------------------------------------- 51.93s
kubernetes/preinstall : Update package management cache (APT) ------------------------------------------------------------------------------------------------------------ 9.28s
gather facts from all instances ------------------------------------------------------------------------------------------------------------------------------------------ 8.15s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --------------------------------------------------------------- 5.62s
container-engine/docker : ensure docker packages are installed ----------------------------------------------------------------------------------------------------------- 3.04s
download : Download items ------------------------------------------------------------------------------------------------------------------------------------------------ 3.04s
etcd : Install | Copy etcdctl binary from docker container --------------------------------------------------------------------------------------------------------------- 2.43s
container-engine/docker : Ensure old versions of Docker are not installed. | Debian -------------------------------------------------------------------------------------- 2.40s
download : Sync container ------------------------------------------------------------------------------------------------------------------------------------------------ 2.28s
download : Download items ------------------------------------------------------------------------------------------------------------------------------------------------ 1.80s
kubernetes/master : kubeadm | Create kubeadm config ---------------------------------------------------------------------------------------------------------------------- 1.78s
kubernetes/preinstall : Install packages requirements -------------------------------------------------------------------------------------------------------------------- 1.72s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------------------------------------------------------------------------------------------------------- 1.69s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) ------------------------------------------------------------------------------- 1.66s
download : Sync container ------------------------------------------------------------------------------------------------------------------------------------------------ 1.60s
download : Download items ------------------------------------------------------------------------------------------------------------------------------------------------ 1.56s
download : Download items ------------------------------------------------------------------------------------------------------------------------------------------------ 1.55s
download : Sync container ------------------------------------------------------------------------------------------------------------------------------------------------ 1.55s
download : Sync container ------------------------------------------------------------------------------------------------------------------------------------------------ 1.54s
download : Sync container ------------------------------------------------------------------------------------------------------------------------------------------------ 1.52s
sguyennet commented 5 years ago

I forgot to ask which versions of Kubespray and Kubernetes you configured in the terraform.tfvars file?

RELATO commented 5 years ago

I am following your tutorial instructions from the blog.


#===============================================================================
# Kubernetes parameters
#===============================================================================

# The Git repository to clone Kubespray from #
k8s_kubespray_url = "https://github.com/kubernetes-sigs/kubespray.git"

# The version of Kubespray that will be used to deploy Kubernetes #
k8s_kubespray_version = "v2.8.2"

# The Kubernetes version that will be deployed #
k8s_version = "v1.12.5"

# The overlay network plugin used by the Kubernetes cluster #
k8s_network_plugin = "calico"

# If you use Weavenet as an overlay network, you need to specify an encryption password #
k8s_weave_encryption_password = ""

# The DNS service used by the Kubernetes cluster (coredns/kubedns) #
k8s_dns_mode = "coredns"
sguyennet commented 5 years ago

The blog post is outdated and I should update it. Anyway, the versions you are using are correct. Could you try to access the 3 master APIs from the first HAProxy machine (the one with the VIP)?

$ curl -k https://192.168.1.110:6443
$ curl -k https://192.168.1.111:6443
$ curl -k https://192.168.1.112:6443

You should see something like this:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

If you cannot reach port 6443, please check whether the kube-apiserver container is running on the master nodes:

$ sudo docker ps | grep kube-apiserver

RELATO commented 5 years ago

Hi,

You are right! There is no kube-apiserver container running:

home@Macintosh  ~/projetos/clusters/terraform-vsphere-kubespray   master ●  ssh relato@192.168.1.110
relato@192.168.1.110's password:
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-15-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

21 packages can be updated.
12 updates are security updates.

Last login: Thu Mar 28 09:12:05 2019 from 192.168.1.5
relato@k8s-kubespray-master-0:~$ sudo docker ps | grep kube-apiserver
[sudo] password for relato:
relato@k8s-kubespray-master-0:~$ netstat -na | grep 6443
relato@k8s-kubespray-master-0:~$

ssh relato@192.168.1.111
relato@192.168.1.111's password:
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-15-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

21 packages can be updated.
12 updates are security updates.

Last login: Thu Mar 28 09:12:05 2019 from 192.168.1.5
relato@k8s-kubespray-master-1:~$ sudo docker ps | grep kube-apiserver
[sudo] password for relato:
relato@k8s-kubespray-master-1:~$ netstat -na | grep 6443
relato@k8s-kubespray-master-1:~$

home@Macintosh  ~  ssh relato@192.168.1.112
The authenticity of host '192.168.1.112 (192.168.1.112)' can't be established.
ECDSA key fingerprint is SHA256:lLvvfhIB4AQlamzZVGZJE5soyEM88JEcAQQ5j9lK3Qc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.112' (ECDSA) to the list of known hosts.
relato@192.168.1.112's password:
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-15-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

21 packages can be updated.
12 updates are security updates.

Last login: Thu Mar 28 09:12:05 2019 from 192.168.1.5
relato@k8s-kubespray-master-2:~$ sudo docker ps | grep kube-apiserver
[sudo] password for relato:
relato@k8s-kubespray-master-2:~$ netstat -na | grep 6443
relato@k8s-kubespray-master-2:~$ sudo netstat -panot | grep :6443
relato@k8s-kubespray-master-2:~$ sudo docker ps -a | grep kube-apiserver
relato@k8s-kubespray-master-2:~$ 

home@Macintosh  ~  ssh relato@192.168.1.113
The authenticity of host '192.168.1.113 (192.168.1.113)' can't be established.
ECDSA key fingerprint is SHA256:lLvvfhIB4AQlamzZVGZJE5soyEM88JEcAQQ5j9lK3Qc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.113' (ECDSA) to the list of known hosts.
relato@192.168.1.113's password:
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-15-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

19 packages can be updated.
12 updates are security updates.

Last login: Thu Mar 28 08:54:05 2019 from 192.168.1.5
relato@k8s-kubespray-haproxy-0:~$ sudo netstat -panot | grep :6443
[sudo] password for relato:
tcp        0      0 192.168.1.113:6443      0.0.0.0:*               LISTEN      4418/haproxy     off (0.00/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55086     TIME_WAIT   -                timewait (25.82/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55090     TIME_WAIT   -                timewait (25.83/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55050     TIME_WAIT   -                timewait (11.10/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55052     TIME_WAIT   -                timewait (11.10/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55042     TIME_WAIT   -                timewait (10.10/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55162     TIME_WAIT   -                timewait (46.35/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55066     TIME_WAIT   -                timewait (22.82/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55168     TIME_WAIT   -                timewait (47.35/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55048     TIME_WAIT   -                timewait (11.10/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55148     TIME_WAIT   -                timewait (44.35/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55088     TIME_WAIT   -                timewait (25.83/0/0)
tcp        0      0 192.168.1.113:6443      192.168.1.110:55022     TIME_WAIT   -                timewait (7.10/0/0)

There is only one container running on all three masters:

relato@k8s-kubespray-master-0:~$ sudo docker ps
CONTAINER ID        IMAGE                         COMMAND                 CREATED             STATUS              PORTS               NAMES
3560257e2377        quay.io/coreos/etcd:v3.2.24   "/usr/local/bin/etcd"   16 minutes ago      Up 16 minutes                           etcd1
sguyennet commented 5 years ago

Ok, that makes sense. Kubespray installs and configures the etcd cluster and then executes kubeadm to configure the other Kubernetes components. Could you execute "sudo kubeadm init" on the first master to check why kubeadm is failing?

RELATO commented 5 years ago

Of course,

relato@k8s-kubespray-master-0:~$ sudo kubeadm init
[sudo] password for relato:
I0328 10:43:30.527132   10759 version.go:236] remote version is much newer: v1.14.0; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.7
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR Port-2379]: Port 2379 is in use
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
sguyennet commented 5 years ago

Ok, it doesn't give us relevant information. From your deployment machine, could you execute:

$ cd terraform-vsphere-kubespray/ansible/kubespray
$ ansible-playbook -i ../../config/hosts.ini -b -u [YOUR_USERNAME] -e "ansible_ssh_pass='[YOUR_PASSWORD]' ansible_become_pass='[YOUR_PASSWORD]' kube_version=v1.12.5" -T 300 -v cluster.yml

This should fail and display the relevant error in red.
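
If you want to keep the full output for later, you can pipe the same run through tee (the log file name is arbitrary):

$ ansible-playbook -i ../../config/hosts.ini -b -u [YOUR_USERNAME] -e "ansible_ssh_pass='[YOUR_PASSWORD]' ansible_become_pass='[YOUR_PASSWORD]' kube_version=v1.12.5" -T 300 -v cluster.yml 2>&1 | tee kubespray-run.log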

RELATO commented 5 years ago

I retried twice. The errors do not give us relevant information either.

fatal: [k8s-kubespray-master-0]: FAILED! => {"attempts": 10, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "alpha", "phase", "addon", "kube-proxy", "--config=/etc/kubernetes/kubeadm-config.v1alpha3.yaml"], "delta": "0:00:00.048301", "end": "2019-03-27 13:43:42.986489", "msg": "non-zero return code", "rc": 1, "start": "2019-03-27 13:43:42.938188", "stderr": "error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF", "stderr_lines": ["error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF"], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT **************************************************************************************************************************************************************
    to retry, use: --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

PLAY RECAP **********************************************************************************************************************************************************************
k8s-kubespray-master-0     : ok=275  changed=11   unreachable=0    failed=1
k8s-kubespray-master-1     : ok=252  changed=9    unreachable=0    failed=0
k8s-kubespray-master-2     : ok=252  changed=9    unreachable=0    failed=0
sguyennet commented 5 years ago

Could you post more lines from the output? I would like to see why the API server is not up and running.

sguyennet commented 5 years ago

The task just before enabling kube-proxy is "TASK [kubernetes/master : kubeadm | Initialize first master]". Do you see any error there?

RELATO commented 5 years ago

Hi

Here is the full output; there is only one error.

Below is the retry command I used:

cd ansible/kubespray && ansible-playbook -i ../../config/hosts.ini -b -u relato -e 'ansible_ssh_pass=PassWord ansible_become_pass=PassWord kube_version=v1.12.5' -T 300 -v cluster.yml --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

And the last lines of output are:

TASK [kubernetes/master : kubeadm | Delete old admin.conf] ******************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:36 -0300 (0:00:00.458)       0:04:05.369 ********
ok: [k8s-kubespray-master-1] => {"changed": false, "path": "/etc/kubernetes/admin.conf", "state": "absent"}
ok: [k8s-kubespray-master-2] => {"changed": false, "path": "/etc/kubernetes/admin.conf", "state": "absent"}

TASK [kubernetes/master : kubeadm | Delete old static pods] *****************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:37 -0300 (0:00:00.442)       0:04:05.811 ********

TASK [kubernetes/master : kubeadm | Forcefully delete old static pods] ******************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:37 -0300 (0:00:00.329)       0:04:06.141 ********

TASK [kubernetes/master : kubeadm | aggregate all SANs] *********************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:37 -0300 (0:00:00.301)       0:04:06.443 ********
ok: [k8s-kubespray-master-0] => {"ansible_facts": {"apiserver_sans": "kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local 10.233.0.1 localhost 127.0.0.1 k8s-kubespray-master-0 k8s-kubespray-master-1 k8s-kubespray-master-2 192.168.1.113  192.168.1.110 192.168.1.111 192.168.1.112"}, "changed": false}
ok: [k8s-kubespray-master-1] => {"ansible_facts": {"apiserver_sans": "kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local 10.233.0.1 localhost 127.0.0.1 k8s-kubespray-master-0 k8s-kubespray-master-1 k8s-kubespray-master-2 192.168.1.113  192.168.1.110 192.168.1.111 192.168.1.112"}, "changed": false}
ok: [k8s-kubespray-master-2] => {"ansible_facts": {"apiserver_sans": "kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local 10.233.0.1 localhost 127.0.0.1 k8s-kubespray-master-0 k8s-kubespray-master-1 k8s-kubespray-master-2 192.168.1.113  192.168.1.110 192.168.1.111 192.168.1.112"}, "changed": false}

TASK [kubernetes/master : kubeadm | Copy etcd cert dir under k8s cert dir] **************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:38 -0300 (0:00:00.773)       0:04:07.216 ********
ok: [k8s-kubespray-master-0] => {"changed": false, "cmd": ["cp", "-TR", "/etc/ssl/etcd/ssl", "/etc/kubernetes/ssl/etcd"], "delta": "0:00:00.004445", "end": "2019-03-28 13:08:38.798448", "rc": 0, "start": "2019-03-28 13:08:38.794003", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [k8s-kubespray-master-1] => {"changed": false, "cmd": ["cp", "-TR", "/etc/ssl/etcd/ssl", "/etc/kubernetes/ssl/etcd"], "delta": "0:00:00.006173", "end": "2019-03-28 13:08:38.869393", "rc": 0, "start": "2019-03-28 13:08:38.863220", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [k8s-kubespray-master-2] => {"changed": false, "cmd": ["cp", "-TR", "/etc/ssl/etcd/ssl", "/etc/kubernetes/ssl/etcd"], "delta": "0:00:00.003603", "end": "2019-03-28 13:08:38.924180", "rc": 0, "start": "2019-03-28 13:08:38.920577", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [kubernetes/master : Create audit-policy directory] ********************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:39 -0300 (0:00:00.487)       0:04:07.704 ********

TASK [kubernetes/master : Write api audit policy yaml] **********************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:39 -0300 (0:00:00.233)       0:04:07.938 ********

TASK [kubernetes/master : gets the kubeadm version] *************************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:39 -0300 (0:00:00.227)       0:04:08.166 ********
changed: [k8s-kubespray-master-0] => {"changed": true, "cmd": ["/usr/local/bin/kubeadm", "version", "-o", "short"], "delta": "0:00:00.025582", "end": "2019-03-28 13:08:39.758572", "rc": 0, "start": "2019-03-28 13:08:39.732990", "stderr": "", "stderr_lines": [], "stdout": "v1.12.5", "stdout_lines": ["v1.12.5"]}
changed: [k8s-kubespray-master-1] => {"changed": true, "cmd": ["/usr/local/bin/kubeadm", "version", "-o", "short"], "delta": "0:00:00.032091", "end": "2019-03-28 13:08:39.823653", "rc": 0, "start": "2019-03-28 13:08:39.791562", "stderr": "", "stderr_lines": [], "stdout": "v1.12.5", "stdout_lines": ["v1.12.5"]}
changed: [k8s-kubespray-master-2] => {"changed": true, "cmd": ["/usr/local/bin/kubeadm", "version", "-o", "short"], "delta": "0:00:00.024110", "end": "2019-03-28 13:08:39.869175", "rc": 0, "start": "2019-03-28 13:08:39.845065", "stderr": "", "stderr_lines": [], "stdout": "v1.12.5", "stdout_lines": ["v1.12.5"]}

TASK [kubernetes/master : sets kubeadm api version to v1alpha1] *************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:39 -0300 (0:00:00.489)       0:04:08.656 ********

TASK [kubernetes/master : sets kubeadm api version to v1alpha2] *************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:40 -0300 (0:00:00.239)       0:04:08.895 ********

TASK [kubernetes/master : sets kubeadm api version to v1alpha3] *************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:40 -0300 (0:00:00.233)       0:04:09.128 ********
ok: [k8s-kubespray-master-0] => {"ansible_facts": {"kubeadmConfig_api_version": "v1alpha3"}, "changed": false}
ok: [k8s-kubespray-master-1] => {"ansible_facts": {"kubeadmConfig_api_version": "v1alpha3"}, "changed": false}
ok: [k8s-kubespray-master-2] => {"ansible_facts": {"kubeadmConfig_api_version": "v1alpha3"}, "changed": false}

TASK [kubernetes/master : set kubeadm_config_api_fqdn define] ***************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:40 -0300 (0:00:00.331)       0:04:09.460 ********
ok: [k8s-kubespray-master-0] => {"ansible_facts": {"kubeadm_config_api_fqdn": "192.168.1.113"}, "changed": false}
ok: [k8s-kubespray-master-1] => {"ansible_facts": {"kubeadm_config_api_fqdn": "192.168.1.113"}, "changed": false}
ok: [k8s-kubespray-master-2] => {"ansible_facts": {"kubeadm_config_api_fqdn": "192.168.1.113"}, "changed": false}

TASK [kubernetes/master : kubeadm | Create kubeadm config] ******************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:41 -0300 (0:00:00.350)       0:04:09.810 ********
ok: [k8s-kubespray-master-0] => {"changed": false, "checksum": "61bb967f332554bf6b04daa40d142c02f21f385c", "dest": "/etc/kubernetes/kubeadm-config.v1alpha3.yaml", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/kubernetes/kubeadm-config.v1alpha3.yaml", "size": 3027, "state": "file", "uid": 0}
ok: [k8s-kubespray-master-1] => {"changed": false, "checksum": "fda226f2f821f1a6cac1fa9dadb44993eb317e05", "dest": "/etc/kubernetes/kubeadm-config.v1alpha3.yaml", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/kubernetes/kubeadm-config.v1alpha3.yaml", "size": 3027, "state": "file", "uid": 0}
ok: [k8s-kubespray-master-2] => {"changed": false, "checksum": "2f88551a3f14743093d7f85ff0a64bdf307248fa", "dest": "/etc/kubernetes/kubeadm-config.v1alpha3.yaml", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/
TASK [kubernetes/master : kubeadm | Initialize first master] ****************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:43 -0300 (0:00:02.016)       0:04:11.827 ********

TASK [kubernetes/master : kubeadm | Upgrade first master] *******************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:43 -0300 (0:00:00.244)       0:04:12.072 ********

TASK [kubernetes/master : kubeadm | Enable kube-proxy] **********************************************************************************************************************************************************************************
Thursday 28 March 2019  13:08:43 -0300 (0:00:00.238)       0:04:12.310 ********
FAILED - RETRYING: kubeadm | Enable kube-proxy (10 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (9 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (8 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (7 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (6 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (5 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (4 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (3 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (2 retries left).
FAILED - RETRYING: kubeadm | Enable kube-proxy (1 retries left).
fatal: [k8s-kubespray-master-0]: FAILED! => {"attempts": 10, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "alpha", "phase", "addon", "kube-proxy", "--config=/etc/kubernetes/kubeadm-config.v1alpha3.yaml"], "delta": "0:00:00.036236", "end": "2019-03-28 13:09:35.627372", "msg": "non-zero return code", "rc": 1, "start": "2019-03-28 13:09:35.591136", "stderr": "error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF", "stderr_lines": ["error when creating kube-proxy service account: unable to create serviceaccount: Post https://192.168.1.113:6443/api/v1/namespaces/kube-system/serviceaccounts: EOF"], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT **********************************************************************************************************************************************************************************************************************
        to retry, use: --limit @/Users/home/projetos/clusters/terraform-vsphere-kubespray/ansible/kubespray/cluster.retry

PLAY RECAP ******************************************************************************************************************************************************************************************************************************
k8s-kubespray-master-0     : ok=275  changed=11   unreachable=0    failed=1
k8s-kubespray-master-1     : ok=252  changed=9    unreachable=0    failed=0
k8s-kubespray-master-2     : ok=252  changed=9    unreachable=0    failed=0

Thursday 28 March 2019  13:09:35 -0300 (0:00:52.041)       0:05:04.352 ********
===============================================================================
kubernetes/master : kubeadm | Enable kube-proxy --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 52.04s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ----------------------------------------------------------------------------------------------------------------------- 5.54s
gather facts from all instances -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.80s
container-engine/docker : ensure docker packages are installed ------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.13s
download : Download items -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.95s
kubernetes/preinstall : Update package management cache (APT) -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.81s
container-engine/docker : Ensure old versions of Docker are not installed. | Debian ---------------------------------------------------------------------------------------------------------------------------------------------- 2.40s
download : Sync container -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.02s
kubernetes/master : kubeadm | Create kubeadm config ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.02s
download : Download items -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.83s
kubernetes/preinstall : Hosts | populate inventory into hosts file --------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.76s
kubernetes/preinstall : Install packages requirements ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.71s
download : Download items -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.69s
download : file_download | Download item ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.63s
etcd : Refresh config | Create etcd config file ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.61s
download : Sync container -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.55s
download : Download items -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.54s
download : Sync container -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.53s
download : Sync container -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.52s
kubernetes/node : Modprode Kernel Module for IPVS -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.52s
RELATO commented 5 years ago

Hi,

I am pasting the kubeadm-config.v1alpha3.yaml content.

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 192.168.1.110
  bindPort: 6443
nodeRegistration:
  name: k8s-kubespray-master-0
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/dockershim.sock
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
clusterName: cluster.local
etcd:
  external:
      endpoints:
      - https://192.168.1.110:2379
      - https://192.168.1.111:2379
      - https://192.168.1.112:2379
      caFile: /etc/kubernetes/ssl/etcd/ca.pem
      certFile: /etc/kubernetes/ssl/etcd/node-k8s-kubespray-master-0.pem
      keyFile: /etc/kubernetes/ssl/etcd/node-k8s-kubespray-master-0-key.pem
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.233.0.0/18
  podSubnet: 10.233.64.0/18
kubernetesVersion: v1.12.5
controlPlaneEndpoint: 192.168.1.113:6443
apiServerCertSANs:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 10.233.0.1
  - localhost
  - 127.0.0.1
  - k8s-kubespray-master-0
  - k8s-kubespray-master-1
  - k8s-kubespray-master-2
  - 192.168.1.113
  - 192.168.1.110
  - 192.168.1.111
  - 192.168.1.112
certificatesDir: /etc/kubernetes/ssl
imageRepository: gcr.io/google-containers
unifiedControlPlaneImage: ""
apiServerExtraArgs:
  authorization-mode: Node,RBAC
  bind-address: 0.0.0.0
  insecure-port: "0"
  apiserver-count: "3"
  endpoint-reconciler-type: lease
  service-node-port-range: 30000-32767
  kubelet-preferred-address-types: "InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP"
  runtime-config: admissionregistration.k8s.io/v1alpha1
  allow-privileged: "true"
  cloud-provider: vsphere
  cloud-config: /etc/kubernetes/cloud_config
controllerManagerExtraArgs:
  node-monitor-grace-period: 40s
  node-monitor-period: 5s
  pod-eviction-timeout: 5m0s
  node-cidr-mask-size: "24"
  cloud-provider: vsphere
  cloud-config: /etc/kubernetes/cloud_config
schedulerExtraArgs:
apiServerExtraVolumes:
- name: cloud-config
  hostPath: /etc/kubernetes/cloud_config
  mountPath: /etc/kubernetes/cloud_config
controllerManagerExtraVolumes:
- name: cloud-config
  hostPath: /etc/kubernetes/cloud_config
  mountPath: /etc/kubernetes/cloud_config
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
 acceptContentTypes: ""
 burst: 10
 contentType: application/vnd.kubernetes.protobuf
 kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
 qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
 max: null
 maxPerCore: 32768
 min: 131072
 tcpCloseWaitTimeout: 1h0m0s
 tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
iptables:
 masqueradeAll: false
 masqueradeBit: 14
 minSyncPeriod: 0s
 syncPeriod: 30s
ipvs:
 excludeCIDRs: null
 minSyncPeriod: 0s
 scheduler: ""
 syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
oomScoreAdj: -999
portRange: ""
resourceContainer: ""
udpIdleTimeout: 250ms
sguyennet commented 5 years ago

Did the kube-apiserver container fail to start by any chance?

$ sudo docker ps -a | grep kube-apiserver

Have you got any error in the kubelet logs on the masters?

$ journalctl -u kubelet

RELATO commented 5 years ago

Hi

"sudo docker ps -a | grep kube-apiserver" shows nothing.

Here is the journalctl -u kubelet output:

-- Logs begin at Thu 2019-03-28 14:16:27 -03, end at Thu 2019-03-28 14:34:49 -03. --
Mar 28 14:16:27 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:27.907459   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.007623   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.108207   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.208974   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.309762   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.410317   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.458135   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.458923   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1/nodes?fieldSelector=m
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.460112   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.510912   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.611116   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.680747   27293 connection.go:65] Failed to create govmomi client. err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.680779   27293 nodemanager.go:382] Cannot connect to vCenter with err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.680789   27293 vsphere.go:589] failed connecting to vcServer "192.168.1.224" with error ServerFaultCode: Cannot complete login due to an incorrect user name or pas
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.680798   27293 vsphere.go:1330] Cannot connent to vsphere. Get zone for node k8s-kubespray-master-0 error
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.680808   27293 kubelet_node_status.go:66] Unable to construct v1.Node object for kubelet: failed to get zone from cloud provider: ServerFaultCode: Cannot complete
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.711830   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.811997   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: I0328 14:16:28.880978   27293 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Mar 28 14:16:28 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:28.912684   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.012943   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.113768   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.214537   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.314715   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.414893   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.458961   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.459664   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1/nodes?fieldSelector=m
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.460754   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.515103   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.615614   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.715834   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: I0328 14:16:29.784733   27293 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.816035   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:29 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:29.916242   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.016416   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.116587   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.216786   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.316978   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.417153   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.459849   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.460601   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1/nodes?fieldSelector=m
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.461711   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.517319   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.617483   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.717690   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.817856   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:30 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:30.918068   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.018287   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.118473   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.218667   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.318839   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.419050   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.460816   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.461414   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1/nodes?fieldSelector=m
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.462474   27293 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.519247   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.619437   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.719647   27293 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730266   27293 connection.go:65] Failed to create govmomi client. err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730311   27293 nodemanager.go:382] Cannot connect to vCenter with err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730328   27293 vsphere.go:589] failed connecting to vcServer "192.168.1.224" with error ServerFaultCode: Cannot complete login due to an incorrect user name or pas
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730346   27293 vsphere.go:1330] Cannot connent to vsphere. Get zone for node k8s-kubespray-master-0 error
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: F0328 14:16:31.730370   27293 kubelet.go:1354] Kubelet failed to get node info: failed to get zone from cloud provider: ServerFaultCode: Cannot complete login due to an incorrect
Mar 28 14:16:31 k8s-kubespray-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 28 14:16:31 k8s-kubespray-master-0 systemd[1]: kubelet.service: Unit entered failed state.
Mar 28 14:16:31 k8s-kubespray-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 28 14:16:42 k8s-kubespray-master-0 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Mar 28 14:16:42 k8s-kubespray-master-0 systemd[1]: Stopped Kubernetes Kubelet Server.
Mar 28 14:16:42 k8s-kubespray-master-0 systemd[1]: Starting Kubernetes Kubelet Server...
Mar 28 14:16:42 k8s-kubespray-master-0 systemd[1]: Started Kubernetes Kubelet Server.
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-clu
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --allow-privileged has been deprecated, will be removed in a future version
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tas
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/adminis
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/admi
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --node-status-update-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administ
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cl
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/adminis
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/adminis
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administe
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/admini
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/adminis
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: Flag --kube-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administ
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.135167   27413 flags.go:33] FLAG: --address="0.0.0.0"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.135416   27413 flags.go:33] FLAG: --allow-privileged="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.135637   27413 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.135861   27413 flags.go:33] FLAG: --alsologtostderr="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136079   27413 flags.go:33] FLAG: --anonymous-auth="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136295   27413 flags.go:33] FLAG: --application-metrics-count-limit="100"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136509   27413 flags.go:33] FLAG: --authentication-token-webhook="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136530   27413 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136538   27413 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136545   27413 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136551   27413 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136556   27413 flags.go:33] FLAG: --azure-container-registry-config=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136562   27413 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136568   27413 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136573   27413 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136579   27413 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136585   27413 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136590   27413 flags.go:33] FLAG: --cgroup-root=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136595   27413 flags.go:33] FLAG: --cgroups-per-qos="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136600   27413 flags.go:33] FLAG: --chaos-chance="0"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136608   27413 flags.go:33] FLAG: --client-ca-file="/etc/kubernetes/ssl/ca.crt"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136613   27413 flags.go:33] FLAG: --cloud-config="/etc/kubernetes/cloud_config"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136619   27413 flags.go:33] FLAG: --cloud-provider="vsphere"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136627   27413 flags.go:33] FLAG: --cluster-dns="[10.233.0.3]"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136642   27413 flags.go:33] FLAG: --cluster-domain="cluster.local"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136647   27413 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136653   27413 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136684   27413 flags.go:33] FLAG: --config=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136691   27413 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136697   27413 flags.go:33] FLAG: --container-log-max-files="5"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136705   27413 flags.go:33] FLAG: --container-log-max-size="10Mi"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136710   27413 flags.go:33] FLAG: --container-runtime="docker"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136715   27413 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136720   27413 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136725   27413 flags.go:33] FLAG: --containerized="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136731   27413 flags.go:33] FLAG: --contention-profiling="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136736   27413 flags.go:33] FLAG: --cpu-cfs-quota="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136741   27413 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136746   27413 flags.go:33] FLAG: --cpu-manager-policy="none"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136754   27413 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136759   27413 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136765   27413 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136770   27413 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136775   27413 flags.go:33] FLAG: --docker-only="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136780   27413 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136785   27413 flags.go:33] FLAG: --docker-tls="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136790   27413 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136795   27413 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136800   27413 flags.go:33] FLAG: --docker-tls-key="key.pem"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136805   27413 flags.go:33] FLAG: --dynamic-config-dir=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136812   27413 flags.go:33] FLAG: --enable-controller-attach-detach="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136817   27413 flags.go:33] FLAG: --enable-debugging-handlers="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136822   27413 flags.go:33] FLAG: --enable-load-reader="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136827   27413 flags.go:33] FLAG: --enable-server="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136831   27413 flags.go:33] FLAG: --enforce-node-allocatable="[]"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136843   27413 flags.go:33] FLAG: --event-burst="10"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136848   27413 flags.go:33] FLAG: --event-qps="5"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136854   27413 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136859   27413 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136864   27413 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136878   27413 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136883   27413 flags.go:33] FLAG: --eviction-minimum-reclaim=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136890   27413 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136895   27413 flags.go:33] FLAG: --eviction-soft=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136901   27413 flags.go:33] FLAG: --eviction-soft-grace-period=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136906   27413 flags.go:33] FLAG: --exit-on-lock-contention="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136911   27413 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136916   27413 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136921   27413 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136926   27413 flags.go:33] FLAG: --experimental-dockershim="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136931   27413 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136938   27413 flags.go:33] FLAG: --experimental-fail-swap-on="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136944   27413 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136949   27413 flags.go:33] FLAG: --experimental-mounter-path=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136954   27413 flags.go:33] FLAG: --fail-swap-on="true"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136959   27413 flags.go:33] FLAG: --feature-gates=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136966   27413 flags.go:33] FLAG: --file-check-frequency="20s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136971   27413 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136976   27413 flags.go:33] FLAG: --google-json-key=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.136981   27413 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137408   27413 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137413   27413 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137418   27413 flags.go:33] FLAG: --storage-driver-password="root"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137423   27413 flags.go:33] FLAG: --storage-driver-secure="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137428   27413 flags.go:33] FLAG: --storage-driver-table="stats"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137432   27413 flags.go:33] FLAG: --storage-driver-user="root"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137437   27413 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137442   27413 flags.go:33] FLAG: --sync-frequency="1m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137447   27413 flags.go:33] FLAG: --system-cgroups=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137452   27413 flags.go:33] FLAG: --system-reserved=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137457   27413 flags.go:33] FLAG: --system-reserved-cgroup=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137464   27413 flags.go:33] FLAG: --tls-cert-file=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137469   27413 flags.go:33] FLAG: --tls-cipher-suites="[]"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137478   27413 flags.go:33] FLAG: --tls-min-version=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137483   27413 flags.go:33] FLAG: --tls-private-key-file=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137488   27413 flags.go:33] FLAG: --v="2"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137493   27413 flags.go:33] FLAG: --version="false"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137501   27413 flags.go:33] FLAG: --vmodule=""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137506   27413 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137512   27413 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137545   27413 feature_gate.go:206] feature gates: &{map[]}
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.137594   27413 feature_gate.go:206] feature gates: &{map[]}
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.155909   27413 mount_linux.go:180] Detected OS with systemd
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.155951   27413 server.go:408] Version: v1.12.5
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.155994   27413 feature_gate.go:206] feature gates: &{map[]}
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.156045   27413 feature_gate.go:206] feature gates: &{map[]}
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.156351   27413 vsphere.go:300] SecretName and/or SecretNamespace is not provided. VCP will use username and password from config file
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.156429   27413 server.go:526] Successfully initialized cloud provider: "vsphere" from the config file: "/etc/kubernetes/cloud_config"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.156440   27413 server.go:792] cloud provider determined current node name to be k8s-kubespray-master-0
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.158316   27413 bootstrap.go:57] Kubeconfig /etc/kubernetes/kubelet.conf exists and is valid, skipping bootstrap
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.161271   27413 manager.go:155] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.182015   27413 fs.go:142] Filesystem UUIDs: map[c0e9ebf8-69a1-4d1d-9bda-bad7c06de1e6:/dev/sda1 2f684596-66b7-4fce-b43c-1baff2f8749f:/dev/dm-0 712470b1-759e-4266-b2
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.182042   27413 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:22 fsType:tmpfs blockSize:0} /dev/mapper/ubuntu--1604--terraform--templat
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.183931   27413 manager.go:229] Machine: {NumCores:2 CpuFrequency:3092974 MemoryCapacity:2090643456 HugePages:[{PageSize:2048 NumPages:0}] MachineID:d39adbe11657f20
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.184573   27413 manager.go:235] Version: {KernelVersion:4.15.0-15-generic ContainerOsVersion:Ubuntu 16.04.6 LTS DockerVersion:18.06.1-ce DockerAPIVersion:1.38 Cadvi
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.184650   27413 server.go:667] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.184965   27413 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.184978   27413 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName:
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185466   27413 container_manager_linux.go:271] Creating device plugin manager: true
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185475   27413 manager.go:108] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185498   27413 state_mem.go:36] [cpumanager] initializing new in-memory state store
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185590   27413 state_mem.go:84] [cpumanager] updated default cpuset: ""
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185601   27413 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185609   27413 state_checkpoint.go:100] [cpumanager] state checkpoint: restored state from checkpoint
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185615   27413 state_checkpoint.go:101] [cpumanager] state checkpoint: defaultCPUSet:
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185673   27413 server.go:792] cloud provider determined current node name to be k8s-kubespray-master-0
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185684   27413 server.go:955] Using root directory: /var/lib/kubelet
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185715   27413 kubelet.go:395] cloud provider determined current node name to be k8s-kubespray-master-0
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185727   27413 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185743   27413 file.go:68] Watching path "/etc/kubernetes/manifests"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.185754   27413 kubelet.go:304] Watching apiserver
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.211259   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.212208   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.212351   27413 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.212441   27413 client.go:104] Start docker client with request timeout=2m0s
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.212451   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1/nodes?fieldSelector=m
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.214921   27413 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.214966   27413 docker_service.go:236] Hairpin mode set to "hairpin-veth"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.215216   27413 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.218877   27413 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.218966   27413 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.218976   27413 plugins.go:159] Loaded network plugin "cni"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.218995   27413 docker_service.go:251] Docker cri networking managed by cni
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.225673   27413 docker_service.go:256] Docker Info: &{ID:AO5W:M5RD:FH4E:JWY7:7T6I:ZVYV:XRMP:HM5Q:XYTU:ZH5B:XT53:3MDJ Containers:1 ContainersRunning:1 ContainersPaus
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.226668   27413 docker_service.go:269] Setting cgroupDriver to cgroupfs
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.227009   27413 kubelet.go:633] Starting the GRPC server for the docker CRI shim.
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.227510   27413 container_manager_linux.go:108] Configure resource-only container "/systemd/system.slice" with memory limit: 0
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.227807   27413 docker_server.go:59] Start dockershim grpc server
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.236187   27413 kuberuntime_manager.go:197] Container runtime docker initialized, version: 18.06.1-ce, apiVersion: 1.38.0
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.240978   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/aws-ebs"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241000   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/empty-dir"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241009   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/gce-pd"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241017   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/git-repo"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241025   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/host-path"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241033   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/nfs"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241041   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/secret"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241050   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/iscsi"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241061   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/glusterfs"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241069   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/rbd"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241077   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/cinder"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241085   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/quobyte"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241093   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/cephfs"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241102   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/downward-api"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241110   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/fc"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241118   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/flocker"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241126   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/azure-file"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241134   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/configmap"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241142   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/vsphere-volume"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241150   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/azure-disk"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241159   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/photon-pd"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241167   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/projected"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241185   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/portworx-volume"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241198   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/scaleio"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241207   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/local-volume"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241215   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/storageos"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.241227   27413 plugins.go:508] Loaded volume plugin "kubernetes.io/csi"
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.248017   27413 server.go:1013] Started kubelet
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.248381   27413 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find d
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.249451   27413 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.249759   27413 status_manager.go:152] Starting to sync pod status with apiserver
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.250050   27413 kubelet.go:1804] Starting kubelet main sync loop.
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.250349   27413 kubelet.go:1821] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s a
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.251081   27413 server.go:133] Starting to listen on 0.0.0.0:10250
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.255649   27413 server.go:318] Adding debug handlers to kubelet server.
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.260197   27413 event.go:212] Unable to write event: 'Post https://192.168.1.113:6443/api/v1/namespaces/default/events: EOF' (may retry after sleeping)
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.260922   27413 volume_manager.go:246] The desired_state_of_world populator starts
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.261222   27413 volume_manager.go:248] Starting Kubelet Volume Manager
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.268966   27413 desired_state_of_world_populator.go:130] Desired state populator starts to run
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: W0328 14:16:42.279793   27413 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.280576   27413 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: c
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.297980   27413 factory.go:356] Registering Docker factory
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.298725   27413 factory.go:138] Registering mesos factory
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.299049   27413 factory.go:54] Registering systemd factory
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.299503   27413 factory.go:97] Registering Raw factory
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.299944   27413 manager.go:1222] Started watching for new ooms in manager
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.301734   27413 manager.go:365] Starting recovery of all containers
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.350784   27413 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.361668   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.362051   27413 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.377915   27413 manager.go:370] Recovery completed
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.450568   27413 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.462219   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.551432   27413 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.562407   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.662960   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.763624   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.864251   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: I0328 14:16:42.951999   27413 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
Mar 28 14:16:42 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:42.964981   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:43 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:43.065165   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:43 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:43.165358   27413 kubelet.go:2236] node "k8s-kubespray-master-0" not found
Mar 28 14:16:43 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:43.212162   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.113:6443/api/v1/pods?fieldSele
Mar 28 14:16:43 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:43.213898   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.1.113:6443/api/v1/services?limit=500
Mar 28 14:16:43 k8s-kubespray-master-0 kubelet[27413]: E0328 14:16:43.215549   27413 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.1.113:6443/api/v1
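(Side note: two different failures are mixed in the log above. The kubelet cannot reach the API server behind the VIP at 192.168.1.113:6443, and further up it also cannot log in to vCenter. A minimal way to check the first problem in isolation, assuming curl is available on the master node:)

# Probe the apiserver through the HAProxy VIP from a master node.
# Any HTTP response (even 401/403) means the VIP and HAProxy are forwarding traffic;
# a timeout or "connection refused" points at keepalived/HAProxy instead.
curl -k https://192.168.1.113:6443/healthz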
RELATO commented 5 years ago

Hi,

The following lines are in the /etc/hosts file of the master nodes:

relato@k8s-kubespray-master-0:/etc/kubernetes$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
127.0.1.1   k8s-kubespray-0.vcenter.local k8s-kubespray-0

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback localhost6 localhost6.localdomain
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.110   k8s-kubespray-0.vcenter.local k8s-kubespray-0
# Ansible inventory hosts BEGIN
192.168.1.110 k8s-kubespray-master-0 k8s-kubespray-master-0.cluster.local
192.168.1.111 k8s-kubespray-master-1 k8s-kubespray-master-1.cluster.local
192.168.1.112 k8s-kubespray-master-2 k8s-kubespray-master-2.cluster.local
192.168.1.120 k8s-kubespray-worker-0 k8s-kubespray-worker-0.cluster.local
192.168.1.121 k8s-kubespray-worker-1 k8s-kubespray-worker-1.cluster.local
192.168.1.122 k8s-kubespray-worker-2 k8s-kubespray-worker-2.cluster.local
# Ansible inventory hosts END
192.168.1.113 192.168.1.113
sguyennet commented 5 years ago

Hi,

Your issue comes from the vSphere Cloud Provider. The kubelet is not able to log in to vSphere:

Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730266 27293 connection.go:65] Failed to create govmomi client. err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730311 27293 nodemanager.go:382] Cannot connect to vCenter with err: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730328 27293 vsphere.go:589] failed connecting to vcServer "192.168.1.224" with error ServerFaultCode: Cannot complete login due to an incorrect user name or pas
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: E0328 14:16:31.730346 27293 vsphere.go:1330] Cannot connent to vsphere. Get zone for node k8s-kubespray-master-0 error
Mar 28 14:16:31 k8s-kubespray-master-0 kubelet[27293]: F0328 14:16:31.730370 27293 kubelet.go:1354] Kubelet failed to get node info: failed to get zone from cloud provider: ServerFaultCode: Cannot complete login due to an incorrect
Mar 28 14:16:31 k8s-kubespray-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a

Did you create the k8s-vcp@vsphere.local user? Are you sure of the password? Are the vSphere permissions correct for this user?

You can do a test with the administrator@vsphere.local user, but it is not good for production as it will give full vSphere administration permissions to the kubelets.

RELATO commented 5 years ago

Hi,

You are absolutely right! After correcting a misspelled name in vsphere_vcp_user (terraform.tfvars) and rerunning everything from scratch, I was able to complete the installation.

Thank you very much!

home@Macintosh  ~/projetos/clusters/terraform-vsphere-kubespray $  kubectl --kubeconfig=config/admin.conf get all --all-namespaces
NAMESPACE     NAME                                                 READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-cb47f76b9-4l5z4          1/1     Running   0          8m5s
kube-system   pod/calico-node-4bvvm                                1/1     Running   0          8m18s
kube-system   pod/calico-node-5wmp7                                1/1     Running   0          8m18s
kube-system   pod/calico-node-hw9hb                                1/1     Running   0          8m18s
kube-system   pod/calico-node-mh7gx                                1/1     Running   0          8m18s
kube-system   pod/calico-node-mwjbf                                1/1     Running   0          8m18s
kube-system   pod/calico-node-plnzf                                1/1     Running   0          8m18s
kube-system   pod/coredns-788d98cc7b-gx88v                         1/1     Running   0          6m44s
kube-system   pod/coredns-788d98cc7b-vl4x8                         1/1     Running   0          6m36s
kube-system   pod/dns-autoscaler-66b95c57d9-kfndc                  1/1     Running   0          6m40s
kube-system   pod/kube-apiserver-k8s-kubespray-master-0            1/1     Running   0          12m
kube-system   pod/kube-apiserver-k8s-kubespray-master-1            1/1     Running   0          11m
kube-system   pod/kube-apiserver-k8s-kubespray-master-2            1/1     Running   0          11m
kube-system   pod/kube-controller-manager-k8s-kubespray-master-0   1/1     Running   0          12m
kube-system   pod/kube-controller-manager-k8s-kubespray-master-1   1/1     Running   0          11m
kube-system   pod/kube-controller-manager-k8s-kubespray-master-2   1/1     Running   0          11m
kube-system   pod/kube-proxy-2xs95                                 1/1     Running   0          7m24s
kube-system   pod/kube-proxy-6wgmw                                 1/1     Running   0          7m44s
kube-system   pod/kube-proxy-cbxgw                                 1/1     Running   0          7m50s
kube-system   pod/kube-proxy-sd5q7                                 1/1     Running   0          8m4s
kube-system   pod/kube-proxy-sf4qq                                 1/1     Running   0          8m12s
kube-system   pod/kube-proxy-zrrfw                                 1/1     Running   0          7m29s
kube-system   pod/kube-scheduler-k8s-kubespray-master-0            1/1     Running   0          12m
kube-system   pod/kube-scheduler-k8s-kubespray-master-1            1/1     Running   0          11m
kube-system   pod/kube-scheduler-k8s-kubespray-master-2            1/1     Running   0          11m
kube-system   pod/kubernetes-dashboard-5db4d9f45f-h7nhn            1/1     Running   0          6m35s

NAMESPACE     NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes             ClusterIP   10.233.0.1     <none>        443/TCP                  13m
kube-system   service/coredns                ClusterIP   10.233.0.3     <none>        53/UDP,53/TCP,9153/TCP   6m43s
kube-system   service/kubernetes-dashboard   ClusterIP   10.233.56.19   <none>        443/TCP                  6m35s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system   daemonset.apps/calico-node   6         6         6       6            6           <none>                        8m18s
kube-system   daemonset.apps/kube-proxy    6         6         6       6            6           beta.kubernetes.io/os=linux   13m

NAMESPACE     NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1         1         1            1           8m7s
kube-system   deployment.apps/coredns                   2         2         2            2           6m44s
kube-system   deployment.apps/dns-autoscaler            1         1         1            1           6m40s
kube-system   deployment.apps/kubernetes-dashboard      1         1         1            1           6m35s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-cb47f76b9   1         1         1       8m7s
kube-system   replicaset.apps/coredns-788d98cc7b                  2         2         2       6m44s
kube-system   replicaset.apps/dns-autoscaler-66b95c57d9           1         1         1       6m40s
kube-system   replicaset.apps/kubernetes-dashboard-5db4d9f45f     1         1         1       6m35s

I got the following error (because I am running from a MacBook, I think):

sed -i 's/lb-apiserver.kubernetes.local/192.168.1.113/g' config/admin.conf
sed: 1: "config/admin.conf": command c expects \ followed by text  
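(That last error is just the BSD sed shipped with macOS: its -i flag requires an explicit, possibly empty, backup suffix. A workaround, run from the repository root:)

# macOS/BSD sed needs a (possibly empty) suffix argument right after -i
sed -i '' 's/lb-apiserver.kubernetes.local/192.168.1.113/g' config/admin.conf
# alternatively, install GNU sed (e.g. Homebrew's gnu-sed package) and use gsed with the original syntax
gsed -i 's/lb-apiserver.kubernetes.local/192.168.1.113/g' config/admin.conf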
ashfaqurrahman-akij commented 5 years ago

Hi Simon

Great work, as I mentioned in your blog post. However, I downloaded your scripts a few days ago and modified the terraform.tfvars. My template VM has also been modified to remove password-less sudo & SSH (Ubuntu 16.04 x64). I was facing similar issues as the OP "RELATO".

` Error: Error running command 'cd ansible/kubespray && ansible-playbook -i ../../config/hosts.ini -b -u user01 -e "ansible_ssh_pass=$VM_PASSWORD ansible_become_pass=$VM_PRIVILEGE_PASSWORD kube_version=v1.14.3" -T 300 -v cluster.yml': exit status 2. Output: ger\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s\n[kubelet-check] Initial timeout of 40s passed.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.\nHere is one example how you may list all Kubernetes containers running in docker:\n\t- 'docker ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'docker logs CONTAINERID'", "stdout_lines": ["[init] Using Kubernetes version: v1.14.3", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"", "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"", "[kubelet-start] Activating the kubelet service", "[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"", "[certs] Using existing ca certificate authority", "[certs] Using existing apiserver-kubelet-client certificate and key on disk", "[certs] Using existing apiserver certificate and key on disk", "[certs] Using existing front-proxy-ca certificate authority", "[certs] Using existing front-proxy-client certificate and key on disk", "[certs] External etcd mode: Skipping etcd/ca certificate authority generation", "[certs] External etcd mode: Skipping etcd/server certificate authority generation", "[certs] External etcd mode: Skipping apiserver-etcd-client certificate authority generation", "[certs] External etcd mode: Skipping etcd/peer certificate authority generation", "[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation", "[certs] Using the existing \"sa\" key", "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"", "[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/admin.conf\"", "[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/kubelet.conf\"", "[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/controller-manager.conf\"", "[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/scheduler.conf\"", "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"", "[control-plane] Creating static Pod manifest for \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"usr-share-ca-certificates\" to 
\"kube-apiserver\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"", "[control-plane] Creating static Pod manifest for \"kube-controller-manager\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"usr-share-ca-certificates\" to \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"", "[control-plane] Creating static Pod manifest for \"kube-scheduler\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"usr-share-ca-certificates\" to \"kube-apiserver\"", "[controlplane] Adding extra host path mount \"cloud-config\" to \"kube-controller-manager\"", "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s", "[kubelet-check] Initial timeout of 40s passed.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'", "", "Additionally, a control plane component may have crashed or exited when started by the container runtime.", "To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.", "Here is one example how you may list all Kubernetes containers running in docker:", "\t- 'docker ps -a | grep kube | grep -v pause'", "\tOnce you have found the failing container, you can inspect its logs with:", "\t- 'docker logs CONTAINERID'"]}

NO MORE HOSTS LEFT ***** to retry, use: --limit @/home/akij.net/ashfaqur.corp/terraform_0.12.3/terraform-vsphere-kubespray-master/ansible/kubespray/cluster.retry

PLAY RECAP ***** k8s-kubespray-master-0 : ok=327 changed=84 unreachable=0 failed=1
k8s-kubespray-master-1 : ok=302 changed=81 unreachable=0 failed=0
k8s-kubespray-master-2 : ok=302 changed=81 unreachable=0 failed=0
k8s-kubespray-worker-0 : ok=234 changed=60 unreachable=0 failed=0
k8s-kubespray-worker-1 : ok=234 changed=60 unreachable=0 failed=0
k8s-kubespray-worker-2 : ok=234 changed=60 unreachable=0 failed=0
localhost : ok=1 changed=0 unreachable=0 failed=0

Tuesday 02 July 2019 13:26:16 +0600 (0:20:59.465) 0:48:28.200 **
===============================================================================
kubernetes/master : kubeadm | Initialize first master ---------------- 1259.47s
download : file_download | Download item ------------------------------ 255.64s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 87.62s
container-engine/docker : ensure docker packages are installed --------- 83.99s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 78.24s
download : container_download | download images for kubeadm config images -- 78.11s
bootstrap-os : Install dbus for the hostname module -------------------- 51.22s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 47.21s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 43.82s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 43.30s
download : file_download | Download item ------------------------------- 37.73s
download : file_download | Download item ------------------------------- 35.10s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 31.33s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 26.20s
kubernetes/preinstall : Install packages requirements ------------------ 24.59s
container-engine/docker : ensure docker-ce repository is enabled ------- 22.92s
etcd : Gen_certs | Write etcd master certs ----------------------------- 22.89s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.01s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 14.11s
etcd : reload etcd ----------------------------------------------------- 11.82s

`
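(For completeness: once the underlying issue is fixed, the retry hint printed above can be expanded into a command along these lines. The inventory path is a placeholder; it should point at whatever hosts file Terraform generated for this run.)

```sh
# <generated-inventory> is a placeholder -- use the inventory file Terraform
# wrote for this deployment. The retry file path is the one Ansible printed above.
cd /home/akij.net/ashfaqur.corp/terraform_0.12.3/terraform-vsphere-kubespray-master/ansible/kubespray
ansible-playbook -i <generated-inventory> -b cluster.yml \
  --limit @/home/akij.net/ashfaqur.corp/terraform_0.12.3/terraform-vsphere-kubespray-master/ansible/kubespray/cluster.retry
```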

After coming across this post, I SSHed into the master-0 node and ran `journalctl -u kubelet`.

The log is too long, so I am only posting a few lines:

` Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.409059 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.409244 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.409406 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.409443 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.409678 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.505392 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.605823 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.706167 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: W0702 14:51:35.718100 11272 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.807283 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:35 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:35.907585 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.008165 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.108570 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.209874 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.310176 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.408283 11272 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.408384 11272 vsphere.go:857] The vSphere cloud provider does not support zones Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.408728 11272 setters.go:73] Using node IP: 
"172.17.17.133" Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.413642 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.416850 11272 kubelet_node_status.go:468] Recording NodeHasSufficientMemory event message for node k8s-kubespray-master-0 Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.416949 11272 kubelet_node_status.go:468] Recording NodeHasNoDiskPressure event message for node k8s-kubespray-master-0 Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.416989 11272 kubelet_node_status.go:468] Recording NodeHasSufficientPID event message for node k8s-kubespray-master-0 Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:36.417433 11272 kubelet_node_status.go:72] Attempting to register node k8s-kubespray-master-0 Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.515292 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.615926 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.716274 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.816577 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:36 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:36.916930 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.017219 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.118380 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.218681 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.320163 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.420431 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.520717 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.621023 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.721289 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.821577 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:37 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:37.921904 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.022255 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.122588 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.223965 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.325387 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 
14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.410416 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.410428 11272 kubelet_node_status.go:94] Unable to register node "k8s-kubespray-master-0" with API server: Post https://172.17.17.143:6443/api/v1/nodes: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.410603 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.410732 11272 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://172.17.17.143:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/k8s-kubespray-master-0?timeout=10s: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.410947 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.411166 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.411519 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.426763 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.527092 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.627471 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.727791 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.828146 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.871550 11272 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Jul 02 14:51:38 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:38.928452 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.028987 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 
14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.129288 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.229632 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.330099 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.430465 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.530839 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.631162 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.731461 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.831776 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:39 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:39.932085 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.032342 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.132751 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.233055 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.334612 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.434850 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.536164 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.636458 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: W0702 14:51:40.719139 11272 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.736783 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.837047 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:40 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:40.937306 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.037607 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.064501 11272 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.137943 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.238228 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.338524 11272 kubelet.go:2244] 
node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.413585 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.413692 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.413835 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.413980 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.413995 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.438922 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.539303 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.639579 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.739871 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.840203 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:41 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:41.940498 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.040820 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.142189 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.243351 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.343610 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.444737 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.545901 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.647090 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 
14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.747415 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.848645 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:42 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:42.950101 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.050424 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.150725 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.251028 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.352244 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.452569 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.552912 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.653266 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.753748 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.855093 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.874632 11272 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Jul 02 14:51:43 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:43.955472 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.055983 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.156287 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.256606 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.356959 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.412702 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.412766 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.412905 11272 event.go:200] Unable to write event: 'Patch https://172.17.17.143:6443/api/v1/namespaces/default/events/k8s-kubespray-master-0.15ad8614b6d32eba: dial tcp 172.17.17.143:6443: 
connect: no route to host' (may retry after sleeping) Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.413064 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.413212 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.417495 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.457497 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.557835 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.659491 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.763138 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.864966 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:44 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:44.966179 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.067309 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.168438 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.268755 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.369025 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.410743 11272 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.410809 11272 vsphere.go:857] The vSphere cloud provider does not support zones Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.411178 11272 setters.go:73] Using node IP: "172.17.17.133" Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.416740 11272 kubelet_node_status.go:468] Recording NodeHasSufficientMemory event message for node k8s-kubespray-master-0 Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.418429 11272 kubelet_node_status.go:468] Recording NodeHasNoDiskPressure event message for node k8s-kubespray-master-0 Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 14:51:45.420427 11272 kubelet_node_status.go:468] Recording NodeHasSufficientPID event message for node k8s-kubespray-master-0 Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: I0702 
14:51:45.422993 11272 kubelet_node_status.go:72] Attempting to register node k8s-kubespray-master-0 Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.469294 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.570485 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.670776 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: W0702 14:51:45.719770 11272 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.771053 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.871422 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:45 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:45.971842 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.072189 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.172413 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.273626 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.374677 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.476202 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.577490 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.678645 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.779531 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.882321 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:46 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:46.982630 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.082927 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.183968 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.285679 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.387034 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.412675 11272 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://172.17.17.143:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/k8s-kubespray-master-0?timeout=10s: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.412881 11272 reflector.go:126] 
k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.413137 11272 kubelet_node_status.go:94] Unable to register node "k8s-kubespray-master-0" with API server: Post https://172.17.17.143:6443/api/v1/nodes: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.413342 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.418028 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.418185 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.418545 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.487319 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.587585 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.688003 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.788221 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.888471 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:47 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:47.988703 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.089138 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.189417 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.290484 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.391476 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.492607 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.593150 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 
k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.693446 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.793741 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.877555 11272 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.894018 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:48 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:48.995019 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.095343 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.195629 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.296993 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.397255 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.497600 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.598861 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.701747 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.802819 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:49 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:49.904136 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.005222 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.105491 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.205773 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.306042 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.407258 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.412831 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list v1.Node: Get https://172.17.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.414578 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list v1.Service: Get https://172.17.17.143:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.414309 11272 reflector.go:126] 
k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.CSIDriver: Get https://172.17.17.143:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.414454 11272 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list v1beta1.RuntimeClass: Get https://172.17.17.143:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.415204 11272 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list v1.Pod: Get https://172.17.17.143:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-kubespray-master-0&limit=500&resourceVersion=0: dial tcp 172.17.17.143:6443: connect: no route to host Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.507611 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found Jul 02 14:51:50 k8s-kubespray-master-0 kubelet[11272]: E0702 14:51:50.608088 11272 kubelet.go:2244] node "k8s-kubespray-master-0" not found

`

I am no expert :) but it looks like a network issue: the kubelet on master-0 cannot reach the API server VIP at 172.17.17.143:6443 ("no route to host"). Where did I go wrong?
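Based on those repeated `dial tcp 172.17.17.143:6443: connect: no route to host` errors, my next step is to check whether the VIP 172.17.17.143 is actually bound anywhere and whether port 6443 answers. A rough sketch of the checks I am running; the interface name `ens192` and the use of keepalived for the VIP are assumptions from my own setup, so adjust as needed:

```sh
# On each load balancer VM: is the VIP actually bound to the interface?
ip addr show dev ens192

# If keepalived manages the VIP, is it running and did one node take MASTER?
systemctl status keepalived

# Is anything listening on the API port on the load balancer?
sudo ss -tlnp | grep 6443

# From master-0: can the VIP be reached at all?
ping -c 3 172.17.17.143
curl -kv https://172.17.17.143:6443/healthz
```

If the VIP is on the same subnet as the masters, "no route to host" usually means either nothing is answering ARP for the VIP (it is not assigned to any machine) or a firewall is actively rejecting the connection, so those are the two things I am looking at.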

Thanks,
Ashfaqur Rahman