pavanats closed this issue 4 years ago
The right NIC IP is defined in the inventory.ini file; please double-check it. This error can also be caused by stale settings from a previous deployment, so try cleaning up before redeploying, for example with:
kubeadm reset -f
Hi Amr, I see that K8s uses the IP address of eth0 on the controller VM rather than the one specified in inventory.ini. Here is my inventory.ini:
[all]
controller ansible_ssh_user=root ansible_host=30.30.30.22
node01 ansible_ssh_user=root ansible_host=30.30.30.11

[controller_group]
controller

[edgenode_group]
node01

[edgenode_vca_group]

[ptp_master]
controller

[ptp_slave_group]
node01
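As a quick sanity check, the host addresses Ansible will actually use can be extracted from the inventory with a short script (a sketch; the host lines and IPs are taken from the inventory above, everything else is illustrative):

```python
import re

def inventory_hosts(text):
    """Map host name -> ansible_host IP from an Ansible INI inventory."""
    hosts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("[", "#", ";")):
            continue  # skip section headers, comments, and blank lines
        m = re.match(r"(\S+).*\bansible_host=(\S+)", line)
        if m:
            hosts[m.group(1)] = m.group(2)
    return hosts

sample = """
[all]
controller ansible_ssh_user=root ansible_host=30.30.30.22
node01 ansible_ssh_user=root ansible_host=30.30.30.11
"""
print(inventory_hosts(sample))
# {'controller': '30.30.30.22', 'node01': '30.30.30.11'}
```

Whatever the deployment scripts pick, these are the addresses the operator declared, so any other IP showing up in the join command points at a selection problem rather than an inventory problem.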
The output of ifconfig on the controller VM is below:

[root@controller ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:bdff:febd:73c2  prefixlen 64  scopeid 0x20
        ether 02:42:bd:bd:73:c2  txqueuelen 0  (Ethernet)
        RX packets 100811  bytes 5445681 (5.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 126256  bytes 308581344 (294.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.30  netmask 255.255.255.0  broadcast 192.168.122.255
        inet6 fe80::134b:1faa:2a9d:f5bf  prefixlen 64  scopeid 0x20
        inet6 fe80::9fc1:5a3d:339a:aa34  prefixlen 64  scopeid 0x20
        inet6 fe80::cbd5:9e0c:96a5:9080  prefixlen 64  scopeid 0x20
        ether 52:54:00:eb:92:85  txqueuelen 1000  (Ethernet)
        RX packets 1430089  bytes 3201280552 (2.9 GiB)
        RX errors 0  dropped 6  overruns 0  frame 0
        TX packets 1149725  bytes 75355750 (71.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 30.30.30.22  netmask 255.255.255.0  broadcast 30.30.30.255
        inet6 fe80::5054:ff:fed1:d02b  prefixlen 64  scopeid 0x20
        ether 52:54:00:d1:d0:2b  txqueuelen 1000  (Ethernet)
        RX packets 132884  bytes 188834556 (180.0 MiB)
        RX errors 0  dropped 114  overruns 0  frame 0
        TX packets 44858  bytes 4609305 (4.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10

mirror0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::d0ae:7dff:fef0:75ef  prefixlen 64  scopeid 0x20
        ether d2:ae:7d:f0:75:ef  txqueuelen 1000  (Ethernet)
        RX packets 700986  bytes 188034936 (179.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 746 (746.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ovn0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.64.0.2  netmask 255.255.0.0  broadcast 100.64.255.255
        inet6 fe80::dc20:e2ff:fe40:3  prefixlen 64  scopeid 0x20
        ether de:20:e2:40:00:03  txqueuelen 1000  (Ethernet)
        RX packets 297399  bytes 35445313 (33.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 304717  bytes 143073678 (136.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Pavan
What are the contents of the file $HOME/.kube/config? Also, can you run this command and paste its output?
$ kubeadm token create --print-join-command
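One thing worth checking in $HOME/.kube/config is whether the server: entry points at the inventory-defined controller IP or at the eth0 address. A small sketch of that check (regex-based to avoid a YAML dependency; the sample kubeconfig snippet and the comparison against 30.30.30.22 reflect the addresses discussed above, not the actual file):

```python
import re

def kubeconfig_server(text):
    """Return the API server URL from kubeconfig text, or None if absent."""
    m = re.search(r"^\s*server:\s*(\S+)", text, re.MULTILINE)
    return m.group(1) if m else None

sample = """
clusters:
- cluster:
    server: https://192.168.122.30:6443
  name: kubernetes
"""
url = kubeconfig_server(sample)
print(url)  # https://192.168.122.30:6443

# Flag a mismatch with the controller IP declared in inventory.ini:
if url and "30.30.30.22" not in url:
    print("server address does not match inventory.ini")
```

If the kubeconfig already carries the 192.168.122.30 address, the join command generated from it will inherit the wrong endpoint, which would match the error seen on the edge node.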
Hi Amr, I am now trying things on physical servers, so I will update you if any further issues are encountered. Pavan
Hi, my setup is as follows:
Deployer on VM1, Controller Node on VM2 with 2 NICs, Edge Node on a physical server
The Ansible playbook completes successfully on the controller node.
While running the Ansible playbook for the edge node installation, I get the following error:

I0709 20:10:42.641753 31263 join.go:441] [preflight] Discovering cluster-info
I0709 20:10:42.641784 31263 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "192.168.122.30:6443"
I0709 20:10:52.642500 31263 token.go:215] [discovery] Failed to request cluster-info, will try again: Get https://192.168.122.30:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: context deadline exceeded
The reason, as I see it, is that the IP address 192.168.122.30 belongs to the second NIC of the controller, whereas the join should use the address of the first NIC, which is the one provided in the inventory.ini file.
Kindly suggest how the right NIC IP address should be selected on the controller.
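For what it's worth, when no advertise address is given explicitly, kubeadm advertises the IP of the interface that holds the default route, which here is eth0 (192.168.122.30) rather than the inventory-facing eth1. If the playbook can be made to pass a kubeadm configuration file, pinning the address would look roughly like this (a sketch; the apiVersion matches kubeadm releases from around mid-2020, and 30.30.30.22 is the controller IP from inventory.ini above):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  # Advertise the inventory-defined controller IP instead of the
  # default-route interface (eth0 / 192.168.122.30).
  advertiseAddress: 30.30.30.22
  bindPort: 6443
```

The equivalent on the command line is kubeadm init --apiserver-advertise-address=30.30.30.22; either way, the join command printed afterwards should then reference the 30.30.30.x address.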