[Closed] rhuss closed this issue 7 years ago
These are the logs of the Docker container:
```
root@n0:/home/pi# docker logs 9552a3fccc4b
I0404 09:16:00.523415 1 leaderelection.go:179] attempting to acquire leader lease...
I0404 09:16:01.711489 1 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
I0404 09:16:01.714291 1 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"2907d1f8-1917-11e7-9f3c-b827ebadcfc2", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' n0 became leader
I0404 09:16:01.715333 1 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"self-hosted-kube-scheduler-2660681130-r0php", UID:"4671c05b-1917-11e7-8dc6-b827ebadcfc2", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' no nodes available to schedule pods
I0404 09:16:01.751626 1 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"self-hosted-kube-scheduler-2660681130-r0php", UID:"4671c05b-1917-11e7-8dc6-b827ebadcfc2", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' no nodes available to schedule pods
W0404 09:16:01.751239 1 factory.go:533] Request for pod kube-system/self-hosted-kube-scheduler-2660681130-r0php already in flight, abandoning
I0404 09:16:02.734193 1 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"self-hosted-kube-scheduler-2660681130-r0php", UID:"4671c05b-1917-11e7-8dc6-b827ebadcfc2", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' no nodes available to schedule pods
I0404 09:16:04.757063 1 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"self-hosted-kube-scheduler-2660681130-r0php", UID:"4671c05b-1917-11e7-8dc6-b827ebadcfc2", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' no nodes available to schedule pods
....
```
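For completeness, the deadlock also shows up from the outside with standard kubectl commands (nothing specific to this setup):

```
# check whether any node has registered and become Ready
kubectl get nodes
# list the control-plane pods; the self-hosted scheduler stays Pending
kubectl -n kube-system get pods
```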
Maybe also interesting: there is also a stopped scheduler container, with these logs:
Actually this is only an issue when running in self-hosted mode. Without selfHosted: true in the config file it works.
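For reference, a minimal sketch of such a config file; the field names follow the v1.6-era kubeadm MasterConfiguration format, and the token value is a placeholder:

```yaml
# /etc/kubernetes/kubeadm.yml (sketch; token value is a placeholder)
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
token: abcdef.0123456789abcdef
# selfHosted: true   # leaving this out (or setting it false) avoids the problem
```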
Current status: I'm now back to using plain kubeadm 1.6.1 (without your custom hyperkube image) and disabled some features in the config file (like proxy-client-cert-file), since these are not available in the vanilla apiserver 1.6.0.
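Concretely, that meant dropping the aggregation-related extra args, roughly like this (a sketch only: apiServerExtraArgs is the v1.6-era kubeadm map for passing extra apiserver flags, proxy-client-key-file is the companion flag to the one named above, and the file paths are hypothetical placeholders):

```yaml
apiServerExtraArgs:
#  proxy-client-cert-file: /etc/kubernetes/pki/front-proxy-client.crt  # hypothetical path
#  proxy-client-key-file: /etc/kubernetes/pki/front-proxy-client.key   # hypothetical path
```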
Now it works, and even Weave is happy (because the machine-ids are different now).
Now going to try PVs and Ingress (as I guess they don't require your customisations in the adapted hyperkube image ...)
Let me know if I can help further in evaluating the issue with the self-hosted mode.
Known issue. The kubelet behaviour changed drastically between beta.4 and rc.1-ish, and kubeadm broke fatally as a consequence; we fixed that in v1.6.1, at the cost of breaking only self-hosting ;)
It's on my list to do pretty soon, but well, school ;)
As you noted, disabling proxy* and self-hosting works just fine if you don't want the API aggregation thing.
Awesome to see you're working on this. When you find flaws, feel free to do a PR as well :+1:
Thanks for the info and for your awesome work on kubeadm and kubernetes in general.
np at all, take your time. School is important, still ;-)
Hope to contribute more; still looking around to get a feel for the project(s). And yes, spare time is an issue here, too :)
I could now move from self-hosting to using static pods as normal, so this should be ok now :+1: Thanks ;)
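For anyone landing here later: with self-hosting disabled, kubeadm runs the control plane as static pods whose manifests the kubelet reads straight from disk. This is the standard kubeadm layout; the exact file list may vary:

```
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```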
I just did a super fresh install of everything, but now hit the following issue with kubeadm 1.6.1. A kubeadm init run with /etc/kubernetes/kubeadm.yml (the one from the README, except for an extra predefined token) leads to output which indicates that the scheduler pod doesn't come up. And indeed the problem is that the scheduler pod cannot be scheduled:
Looks like a catch-22 ;-)
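The deadlock is visible in the pod's events (standard kubectl; the pod name is taken from the log at the top). Being managed by a Deployment, the self-hosted scheduler pod needs an already-running scheduler to assign it to a node:

```
# the Events section shows the repeating FailedScheduling warnings
kubectl -n kube-system describe pod self-hosted-kube-scheduler-2660681130-r0php
```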