coreos / bugs

Issue tracker for CoreOS Container Linux
https://coreos.com/os/eol/

Should we remove the kubelet? #978

Closed · crawford closed this issue 8 years ago

crawford commented 8 years ago

@aaronlevy brought it to my attention that k8s has some serious bugs around running mixed versions in a cluster (https://github.com/kubernetes/kubernetes/issues/16961). This class of bug is, of course, the bane of our existence as a self-updating OS. Given the tight dependence between the versions of the k8s components, I think we should either ship everything or nothing. This is further complicated by projects like Tectonic, which have their own version dependencies. We have to be careful that we don't end up creating massive interdependencies between k8s, CoreOS, and all of Tectonic's components.

The kubelet has not yet made it to the stable channel, so we still have time to make a call on this.
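As a point of reference, this kind of skew is easy to spot on a node; something along these lines (a sketch, and the exact output formatting varies by release):

# The node's OS-shipped kubelet vs. the cluster's API server:
$ kubelet --version
Kubernetes v1.1.2
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", ...}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", ...}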

joshix commented 8 years ago

What's the image size difference between kubelet and "everything"?

crawford commented 8 years ago

Everything is roughly 10x the size of the kubelet (39M vs 338M).

aaronlevy commented 8 years ago

Looks like hyperkube ("everything") is 43M. The kubelet is 33M, and we would need both, since IIRC the kubelet is not part of hyperkube.

Shipping everything still wouldn't really solve this specific problem unless you also classed your updates differently, e.g. all master nodes would need to be updated before any workers. Otherwise a worker's kubelet ends up newer than the master's components (which is the current issue).
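For what it's worth, you can audit this per node, assuming the API server reports kubeletVersion in node status (it does in the versions I've looked at); a sketch:

$ kubectl get nodes -o yaml | grep kubeletVersion
      kubeletVersion: v1.1.2
      kubeletVersion: v1.1.1

If any worker reports a newer kubelet than the masters do, you are in the skewed state described above.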

crawford commented 8 years ago

My numbers came from v1.1.1.

$ ls server/bin -lah
total 338M
drwxr-xr-x 1 alex alex  480  9. Nov 08:02 .
drwxr-xr-x 1 alex alex    6  9. Nov 08:01 ..
-rwxr-xr-x 1 alex alex  55M  9. Nov 08:01 hyperkube
-rwxr-xr-x 1 alex alex  44M  9. Nov 08:01 kube-apiserver
-rw-r--r-- 1 alex alex   33  9. Nov 08:01 kube-apiserver.docker_tag
-rw-r--r-- 1 alex alex  46M  9. Nov 08:01 kube-apiserver.tar
-rwxr-xr-x 1 alex alex  36M  9. Nov 08:01 kube-controller-manager
-rw-r--r-- 1 alex alex   33  9. Nov 08:01 kube-controller-manager.docker_tag
-rw-r--r-- 1 alex alex  39M  9. Nov 08:01 kube-controller-manager.tar
-rwxr-xr-x 1 alex alex  22M  9. Nov 08:02 kubectl
-rwxr-xr-x 1 alex alex  39M  9. Nov 08:01 kubelet
-rwxr-xr-x 1 alex alex  19M  9. Nov 08:01 kube-proxy
-rwxr-xr-x 1 alex alex  18M  9. Nov 08:01 kube-scheduler
-rw-r--r-- 1 alex alex   33  9. Nov 08:01 kube-scheduler.docker_tag
-rw-r--r-- 1 alex alex  21M  9. Nov 08:01 kube-scheduler.tar
-rwxr-xr-x 1 alex alex 3.3M  9. Nov 08:01 linkcheck

Assuming we need the API server, kube-scheduler, etc.

aaronlevy commented 8 years ago

hyperkube is basically an all-in-one binary from which you can run apiserver, scheduler, controller, proxy.
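In other words, the one binary dispatches on its first argument, roughly like this (flags elided, and the exact subcommand set varies by version):

$ hyperkube apiserver --etcd-servers=...
$ hyperkube controller-manager --master=...
$ hyperkube scheduler --master=...
$ hyperkube proxy --master=...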

crawford commented 8 years ago

Ah, good to know.

mmelnyk commented 8 years ago

I would like to see CoreOS stay as small and simple as possible. We are using the kubelet/k8s and Weave (all set), sometimes Serf, so there is no problem downloading/updating these components during provisioning/deployment/boot. Also, @aaronlevy is correct: hyperkube is all-in-one.

aaronlevy commented 8 years ago

FWIW the v1.1.1 hyperkube binary does contain the kubelet. So size-wise it looks like 55M (everything) vs 39M (kubelet).

crawford commented 8 years ago

@aaronlevy what is the plan?

crawford commented 8 years ago

It sounds like the plan is to backport some version-skew fixes into kubelet v1.1.2 for now, with the eventual goal of out-of-band updates for the kubelet (e.g. running the kubelet in rkt).
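For the record, the out-of-band shape would be something like a systemd unit that runs the kubelet out of a pinned container image under rkt, decoupling the kubelet version from the OS version. A minimal sketch, assuming a hyperkube image on quay.io; the image name, tag, and flags here are illustrative, not a final design:

$ cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubelet (run out-of-band via rkt)

[Service]
Environment=KUBELET_VERSION=v1.1.2
# A real deployment would also need host mounts (/etc/kubernetes,
# /var/lib/kubelet, ...) and an appropriate stage1.
ExecStart=/usr/bin/rkt run --net=host --insecure-options=image \
  quay.io/coreos/hyperkube:${KUBELET_VERSION} \
  --exec=/hyperkube -- kubelet --api-servers=https://master:443
Restart=always

[Install]
WantedBy=multi-user.target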

phemmer commented 8 years ago

I would first ask the question, why is the kubelet baked into the image to begin with? I imagine the answer is so that the host doesn't have to reach out to the internet to install it.

So, what if the kubelet is removed from the image, and downloaded from the master instead? Yes, this puts reliance on the master being available, but if the master isn't available, the kubelet will be useless anyway.

This also helps with issues such as #179, where, because the kubelet is baked into the image, your cluster might end up running mixed versions of Kubernetes depending on which image each node booted.
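Concretely, the fetch could be a oneshot unit that runs before the kubelet starts. A sketch, pulling a pinned version from the upstream release bucket; under the scheme above, the URL would point at the master instead:

$ cat /etc/systemd/system/fetch-kubelet.service
[Unit]
Description=Fetch a pinned kubelet before starting it
Before=kubelet.service

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -sSfL --create-dirs -o /opt/bin/kubelet \
  https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet
ExecStartPost=/usr/bin/chmod +x /opt/bin/kubelet

[Install]
WantedBy=multi-user.target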

ntquyen commented 8 years ago

My CoreOS node just upgraded from v773.1.0 to v835.9.0 on the stable channel, and the kubelet was removed. Should I roll back to the previous version or build my own kubelet binary?

crawford commented 8 years ago

You can continue to roll forward to the beta channel. The kubelet is still present there.
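For reference, switching the channel is a one-line change on the node; it takes effect at the next update check:

$ cat /etc/coreos/update.conf
GROUP=beta
$ sudo systemctl restart update-engine
$ update_engine_client -check_for_update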

crawford commented 8 years ago

https://github.com/coreos/bugs/issues/1051