arcenik closed this issue 3 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Have the same issue
@arcenik and @includerandom: I have hit the same issue from time to time and solved it in the past, but this time my usual fix no longer works.
As a workaround you can reset the stale kubelet CPU-manager state on every node and rerun the playbook. I used Ansible to run the commands on all hosts:
currentinventory=${ANSIBLE_INVENTORY_PATH}/kube-dev/hosts.yaml
ansible -i $currentinventory kubernetes --list-hosts
ansible -i $currentinventory kubernetes --become -m shell -a "mv /var/lib/kubelet/cpu_manager_state /var/lib/kubelet/cpu_manager_state-OLD"
ansible -i $currentinventory kubernetes --become -m systemd -a "name=kubelet daemon_reload=true state=restarted"
ansible-playbook -i $currentinventory --become cluster.yml
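After the restart step above, it can help to wait until the kubelet is actually back before rerunning the playbook. A minimal retry helper sketch (the `retry_until` function is illustrative, not part of Kubespray or kubeadm):

```shell
#!/bin/sh
# retry_until: run a command up to N times, sleeping between attempts,
# until it succeeds; returns non-zero if it never does.
retry_until() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example: wait up to ~60s for the kubelet to come back after the restart.
# retry_until 12 5 systemctl is-active --quiet kubelet
```

The same helper can wrap any of the flaky steps in this thread, e.g. the token creation itself.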
This error on "Create kubeadm token for joining nodes with 24h expiration" has been reported many times in recent months, and I have not found a clear explanation of the fix. I'm still searching; if anyone finds another solution, please tell us.
My kubeadm output looks like yours:
$ /usr/local/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf --v=5 token create
I0904 09:15:37.877589 88884 token.go:121] [token] validating mixed arguments
I0904 09:15:37.877920 88884 token.go:130] [token] getting Clientsets from kubeconfig file
I0904 09:15:37.885142 88884 token.go:243] [token] loading configurations
I0904 09:15:37.885649 88884 interface.go:400] Looking for default routes with IPv4 addresses
I0904 09:15:37.885663 88884 interface.go:405] Default route transits interface "eth0"
I0904 09:15:37.886497 88884 interface.go:208] Interface eth0 is up
I0904 09:15:37.886703 88884 interface.go:256] Interface "eth0" has 2 addresses :[10.150.233.41/24 fe80::250:56ff:fe87:2252/64].
I0904 09:15:37.886742 88884 interface.go:223] Checking addr 10.150.233.41/24.
I0904 09:15:37.886752 88884 interface.go:230] IP found 10.150.233.41
I0904 09:15:37.886765 88884 interface.go:262] Found valid IPv4 address 10.150.233.41 for interface "eth0".
I0904 09:15:37.886778 88884 interface.go:411] Found active IP 10.150.233.41
W0904 09:15:37.886913 88884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0904 09:15:37.886929 88884 token.go:255] [token] creating token
timed out waiting for the condition
This happens even with the latest HEAD f1566cb8.
Serge
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Found another cause for this error: check whether the etcd service is running on the host as a daemon OR as a container, but not both at once. Check which process binds to port 2380; if etcd is configured twice, keep one and disable the other.
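A quick way to check for a double-bound etcd is to count how many distinct processes are listening on 2380. A sketch, assuming `ss -lntp`-style output (the `count_listeners` helper is illustrative, not an existing tool):

```shell
#!/bin/sh
# count_listeners: count distinct processes listening on a given port,
# reading `ss -lntp`-style output from stdin.
count_listeners() {
  port=$1
  grep -E ":${port}[[:space:]]" | grep -oE 'pid=[0-9]+' | sort -u | wc -l
}

# On a master node you would run something like:
#   ss -lntp | count_listeners 2380
# A count greater than 1 means etcd is running both as a host daemon and
# as a container; disable one of them, e.g.:
#   systemctl disable --now etcd        # keep only the container, or
#   docker stop etcd && docker rm etcd  # keep only the host daemon
```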
I'm trying to install a Kubernetes cluster on my Raspberry Pi 3B+ cluster, but it fails because it tries to create a token before the API server container is started.
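Given that race, one hedged workaround is to poll the apiserver's /healthz endpoint until it answers before asking kubeadm for a token. A sketch, assuming the default secure port 6443 on the local master (adjust URL and paths for your cluster):

```shell
#!/bin/sh
# Wait for the kube-apiserver to answer /healthz before creating a token.
wait_for_apiserver() {
  url=${1:-https://127.0.0.1:6443/healthz}
  tries=${2:-30}
  i=1
  while [ "$i" -le "$tries" ]; do
    # -k: the apiserver cert may not be trusted here; -s -f: quiet, fail on HTTP errors
    if curl -ksf "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "apiserver did not become healthy after $tries tries" >&2
  return 1
}

# wait_for_apiserver && \
#   /usr/local/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create
```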
The result of the command that fails:
The running containers
The docker images:
Environment:
Cloud provider or hardware configuration: on-premise Raspberry Pi 3B+ 8-node cluster
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Ubuntu 19.10 aarch64
Version of Ansible (ansible --version): ansible 2.9.7
Version of Python (python --version): 3.8.2 (default, Apr 8 2020, 14:31:25) [GCC 9.3.0]
Kubespray version (commit) (git rev-parse --short HEAD): 08a97eec
Network plugin used:
Output and inventory: https://gist.github.com/arcenik/389343799624498addb1edc7cbfa976c
Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"): see gist above
Command used to invoke ansible:
Output of ansible run:
Anything else we need to know: