F21 opened this issue 7 years ago
It seems to be pretty random whether a worker will be initialized correctly or not. I just destroyed the cluster and ran `vagrant up` again. This time, I can see 2 workers downloading the images and setting up, but the last worker did nothing. Is there any way to check why the last worker did not install?
Using `journalctl` just showed no entries:

```
core@w3 ~ $ journalctl -u kubelet
-- No entries --
```
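When `journalctl -u kubelet` shows no entries at all, the unit most likely never started (or was never written by provisioning). A few generic systemd checks can narrow this down; this is a sketch, and the `kubelet` unit name is taken from the output above:

```shell
# Was a kubelet unit ever installed on this node?
systemctl list-unit-files 2>/dev/null | grep -i kube || true

# Current state of the unit, if it exists at all
systemctl status kubelet.service --no-pager 2>/dev/null || true

# Scan the whole boot log for provisioning errors, e.g. the
# cloud-config step failing before it could write the unit file
journalctl -b --no-pager 2>/dev/null | grep -i -e kube -e cloud | tail -n 50 || true
```

The commands are guarded with `|| true` so the whole sequence can be pasted as one block even on a node where some of them find nothing.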
I am using the multi-node vagrant setup and have configured it to create 3 worker nodes. I can see that the nodes are launched and can ssh into all of them. However, kubernetes only sees 1 worker:
When I ssh into workers 2 and 3 (the ones that were not running kubernetes), I see that there are no logs for the kubelet:
This is also the case when I check the web dashboard.
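One way to compare what the API server has registered against the VMs vagrant actually brought up (assuming `kubectl` is configured against this cluster; the machine names are whatever your Vagrantfile defines):

```shell
# Nodes the API server knows about -- should list all 3 workers
kubectl get nodes -o wide

# Vagrant's view of the VMs it created, for comparison
vagrant status
```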
I am using `coreos-kubernetes` master. This is my `config.rb`:

I also set `USE_CALICO=true` and `K8S_VER=v1.4.5_coreos.0` in `worker-install.sh` and `controller-install.sh`.

I am running this in VirtualBox 5.1.8 on Windows 10 64-bit. To get `vagrant up` working on Windows, I applied this commit to master: https://github.com/ah45/coreos-kubernetes/commit/3ca05ebbf21610401781dc2410293636e20d6161
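For anyone reproducing this: the worker count in the multi-node vagrant setup is driven by `config.rb`. A minimal fragment matching the 3-worker setup described above might look like the following; the variable names are taken from `config.rb.sample` in the repo and should be treated as assumptions if your checkout differs:

```ruby
# config.rb -- copied from config.rb.sample and adjusted
$update_channel = "alpha"  # CoreOS release channel the boxes track
$controller_count = 1      # a single controller node
$worker_count = 3          # three workers, as in the report above
```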