➜ ~/Downloads git:(master) ✗ cat /home/omar/Downloads/waleed2.yaml | grep server
server: https://185.69.167.122:6443
==========================================================================================================
➜ ~/Downloads git:(master) ✗ kubectl --kubeconfig=/home/omar/Downloads/waleed2.yaml --insecure-skip-tls-verify get nodes -A
NAME         STATUS   ROLES    AGE   VERSION
k3os-10661   Ready    master   18m   v1.19.2+k3s1
k3os-1962    Ready    <none>   17m   v1.19.2+k3s1
==========================================================================================================
➜ ~/Downloads git:(master) ✗ kubectl --kubeconfig=/home/omar/Downloads/waleed2.yaml --insecure-skip-tls-verify get nodes -A
error: You must be logged in to the server (Unauthorized)
==========================================================================================================
➜ ~/Downloads git:(master) ✗ ssh rancher@185.69.167.122
The authenticity of host '185.69.167.122 (185.69.167.122)' can't be established.
ECDSA key fingerprint is SHA256:tgSQ6/wiO+8XpinMZEP1Cok1FLxssuwoGwNsj6EYndc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '185.69.167.122' (ECDSA) to the list of known hosts.
Welcome to k3OS!
Refer to https://github.com/rancher/k3os for README and issues
By default mode of k3OS is to run a single node cluster. Use "kubectl"
to access it. The node token in /var/lib/rancher/k3s/server/node-token
can be used to join agents to this server.
k3os-1988 [~]$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k3os-1988    Ready    master   79m   v1.19.2+k3s1
k3os-30358   Ready    <none>   71m   v1.19.2+k3s1
k3os-139     Ready    <none>   68m   v1.19.2+k3s1
==========================================================================================================
➜ ~/Downloads git:(master) ✗ kubectl --kubeconfig=/home/omar/Downloads/waleed2.yaml --insecure-skip-tls-verify get nodes -A
error: You must be logged in to the server (Unauthorized)
==========================================================================================================
➜ ~/Downloads git:(master) ✗ ssh rancher@185.69.167.122
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:U0G2sIMCE2HL/9jU6hIQgBn2C+De8zNKPT/blb5s7cg.
Please contact your system administrator.
Add correct host key in /home/omar/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/omar/.ssh/known_hosts:52
remove with:
ssh-keygen -f "/home/omar/.ssh/known_hosts" -R "185.69.167.122"
ECDSA host key for 185.69.167.122 has changed and you have requested strict checking.
Host key verification failed.
==========================================================================================================
➜ ~/Downloads git:(master) ✗ ssh rancher@185.69.167.122
The authenticity of host '185.69.167.122 (185.69.167.122)' can't be established.
ECDSA key fingerprint is SHA256:U0G2sIMCE2HL/9jU6hIQgBn2C+De8zNKPT/blb5s7cg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '185.69.167.122' (ECDSA) to the list of known hosts.
Welcome to k3OS!
Refer to https://github.com/rancher/k3os for README and issues
By default mode of k3OS is to run a single node cluster. Use "kubectl"
to access it. The node token in /var/lib/rancher/k3s/server/node-token
can be used to join agents to this server.
k3os-10661 [~]$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k3os-10661   Ready    master   23m   v1.19.2+k3s1
k3os-1962    Ready    <none>   23m   v1.19.2+k3s1
Since two VMs are active on the same segment with the same IP, both will respond to ARP requests, so I assume what we are seeing here is the result of accidental ARP poisoning.
We deployed a Kubernetes cluster and then executed the commands above. It looks like two different clusters are answering on the same IP: `kubectl` alternates between succeeding and failing with Unauthorized, and SSH alternates between two host keys (landing on `k3os-10661` or `k3os-1988`).
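The ARP-flapping hypothesis can be checked from any host on the same segment: if two machines claim the IP, the ARP cache will alternate between two MAC addresses. A minimal sketch (the MAC addresses below are illustrative, and the `ip neigh` output format assumed is the Linux iproute2 one; `arping -D <ip>` from iputils is the more direct duplicate-address check, but it needs root and live network access):

```shell
# Count how many distinct MACs have answered for one IP.
# Feed it successive samples of `ip neigh show <ip>`; a result > 1
# means the IP is flapping between two machines.
distinct_macs() {
  # $1 = IP address; stdin = captured `ip neigh` lines
  grep "^$1 " \
    | awk '{for (i = 1; i <= NF; i++) if ($i == "lladdr") print $(i + 1)}' \
    | sort -u | wc -l
}

# Example with two captured samples (MACs are made up for illustration):
printf '%s\n' \
  '185.69.167.122 dev eth0 lladdr 52:54:00:aa:bb:01 REACHABLE' \
  '185.69.167.122 dev eth0 lladdr 52:54:00:aa:bb:02 REACHABLE' \
  | distinct_macs 185.69.167.122
# prints: 2
```

In a live check you would run something like `ip neigh show 185.69.167.122 >> samples.txt` in a loop and pipe the file through the function.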
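The intermittent Unauthorized is consistent with this: `--insecure-skip-tls-verify` only disables verification of the *server's* certificate, while each cluster still validates the *client* certificate embedded in the kubeconfig against its own CA, so the cert minted by one cluster is rejected whenever the other cluster answers. A sketch of how to inspect the identity a kubeconfig presents (self-contained: a throwaway self-signed cert stands in for the real `client-certificate-data`, since the names and groups below are assumptions, not taken from waleed2.yaml):

```shell
set -e
# k3s-style kubeconfigs embed the client cert inline as base64 under
# users[0].user.client-certificate-data. Simulate that field with a
# throwaway self-signed cert (subject chosen to mimic a k3s admin cert;
# purely illustrative):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/fake-key.pem -out /tmp/fake-cert.pem \
  -subj "/CN=system:admin/O=system:masters" 2>/dev/null
CERT_DATA=$(base64 -w0 /tmp/fake-cert.pem)

# Decode and print the identity the apiserver would see; on the real file
# you would pull CERT_DATA out of the client-certificate-data field instead.
echo "$CERT_DATA" | base64 -d | openssl x509 -noout -subject -issuer
```

Running the same decode against the real kubeconfig shows which cluster's CA issued the cert, which can then be compared against the CA on each node under /var/lib/rancher/k3s/server/tls.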
Kubernetes reservation IDs (on devnet):
Old Kubernetes deployment (the public IP workload was decommissioned, but the other workloads are still provisioned):
Related: https://github.com/threefoldtech/zos/issues/1098 https://github.com/threefoldtech/zos/issues/1097