Open liaden opened 6 years ago
I wonder if this is an issue of systemctl enable kubelet,
which I disable by default.
Nah, I'm having the same issue with the latest kubeadm too.
From what I can tell, the reason the kubeadm init
is hanging is because Etcd was updated to default to using https in 1.10 instead of http. I think this is the commit https://github.com/kubernetes/kubernetes/pull/60728/files.
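If you want to confirm that theory on a hanging node, here is a quick check (just a sketch; 2379 is etcd's default client port, nothing here is image-specific) to see whether the local etcd answers plain http or https:

```shell
# Probe etcd's client port to see which scheme it is actually serving.
# Falls through cleanly if nothing is listening.
if curl -s --max-time 2 http://127.0.0.1:2379/health | grep -q health; then
  result="etcd answers plain http"
elif curl -sk --max-time 2 https://127.0.0.1:2379/health | grep -q health; then
  result="etcd answers https only"
else
  result="nothing healthy on 127.0.0.1:2379"
fi
echo "$result"
```

If you see "https only" while the apiserver manifest still points at http (or the other way around), that mismatch would explain the hang.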
@jmreicha - I'm running into this right now. I had kubeadm init running correctly for a week, then had to re-image my SD card, and now I'm hitting the issue you're talking about because kubeadm init pulls the latest images. Do you have any suggestions to work around this?
@krisclarkdev downgrade to 1.9.8 and it works for me (as of the release I referenced). I've been meaning to try a more recent release though.
Thanks @liaden I'm trying it now. Just out of curiosity how important is having crictl tools installed?
I didn't need it for the other nodes to join, but made sure to have it for the master node. I didn't test if I could get away without it.
@liaden if you move your distribution level to buster, then you can get kube 1.10 working:
root@infra36:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm64"}
root@infra36:~# cat /etc/debian_version
buster/sid
root@infra36:~# uname -r
4.4.77-rockchip-ayufan-136
@sebt3 I'm on buster and still could not get kube 1.10 working, FWIW
$ cat /etc/debian_version
buster/sid
$ uname -r
4.4.132-rockchip-ayufan-270
@krisclarkdev hum, then there's probably something that differs between the image I'm using and ayufan's bionic images. I built mine using a script (https://github.com/sebt3/debian_images_maker, published for you). I can give you the details to set up that image if you want.
Otherwise, it's the setup method that differs. I have to say, I'm not using the etcd container built by kubeadm. I'm using the etcd packages from Debian (because I needed better control over the etcd configuration) and give kubeadm a configuration file to tell it where to find etcd. I can give you the details here if needed.
@sebt3 I'd love any instructions. In the meantime I kept downgrading until kubeadm init pulled an older version of etcd. I've since got kubeadm working but I had to drop all the way down to 1.8.4
Ok so...
1) install etcd from debian
2) run kubeadm init until it hangs. Before running kubeadm reset, save /etc/kubernetes/pki/etcd (way easier than generating the certificates with openssl manually). Alternatively you can use the alpha feature kubeadm alpha phase certs.
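Step 2 in shell form (a sketch; the PKI path is the kubeadm default, the backup location is just my own choice):

```shell
# Save the kubeadm-generated etcd certs before `kubeadm reset` wipes /etc/kubernetes.
ETCD_PKI="${ETCD_PKI:-/etc/kubernetes/pki/etcd}"
BACKUP="${BACKUP:-$HOME/etcd-pki-backup}"
if [ -d "$ETCD_PKI" ]; then
  mkdir -p "$BACKUP"
  cp -a "$ETCD_PKI/." "$BACKUP/"
  echo "saved certs to $BACKUP"
else
  echo "no certs at $ETCD_PKI yet (run kubeadm init first)"
fi
# After `kubeadm reset`, put them back for the Debian etcd to use:
#   mkdir -p "$ETCD_PKI" && cp -a "$BACKUP/." "$ETCD_PKI/"
```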
3) Configure etcd. There are at least two things to be done. First, create /etc/kubernetes/etcd.env containing:
ETCD_UNSUPPORTED_ARCH=arm64
Then edit the etcd systemd unit so it loads that file and uses the saved certificates:
[Service]
EnvironmentFile=/etc/kubernetes/etcd.env
...
ExecStart=/usr/bin/etcd \
...
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--client-cert-auth \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--peer-client-cert-auth \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
4) start etcd and check that it is healthy:
systemctl daemon-reload
systemctl start etcd
export ETCDCTL_CA_FILE=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT_FILE=/etc/kubernetes/pki/etcd/client.crt
export ETCDCTL_KEY_FILE=/etc/kubernetes/pki/etcd/client.key
export ETCDCTL_ENDPOINTS=https://$YOURIP:2379
etcdctl cluster-health
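One caveat: those ETCDCTL_* names are for the etcdctl v2 API. If your Debian etcd ships a v3-era etcdctl, the variables are named differently (names from the etcd documentation, not something I tested on this image; the cert paths are the same ones saved above):

```shell
# etcdctl v3 equivalents of the v2 environment variables above
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/client.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/client.key
# v3 also replaces `etcdctl cluster-health` with:
#   etcdctl endpoint health
echo "ETCDCTL_API=$ETCDCTL_API"
```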
5) tell kubeadm where to find your etcd in a config file :
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: $YOURIP
etcd:
  endpoints:
  - https://$YOURIP:2379
  caFile: /etc/kubernetes/pki/etcd/ca.crt
  certFile: /etc/kubernetes/pki/etcd/client.crt
  keyFile: /etc/kubernetes/pki/etcd/client.key
networking:
  podSubnet: 10.244.0.0/16
and finally start kubeadm with kubeadm init --config=/path/to/previous.config --ignore-preflight-errors=all
You're the man @sebt3 -- I've got etcd up and running. Had to tweak your steps a bit but overall it worked well. I'll have to finish the rest up in the morning and report back. Thanks again.
Well, the latest build is now working for me. I have no idea why. The only thing I changed was a different SD card, and now kubeadm init worked as expected.
Nice work @sebt3
I am getting ready to redo my cluster, so this will help me out too. One thing I noticed is that golang merged better SSL assembly for arm64, so I am also wondering if kubernetes 1.11 would avoid the problem completely.
@krisclarkdev Which build/kernel version are you using?
@krisclarkdev maybe; 1.11 was released yesterday (https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/). You're not running that, are you?
Nope, I'm on 1.10
I reinstalled with latest on Saturday and init finished just fine for me, regardless of the fact that it was compiled with Go 1.9 (which implies the golang ssl assembly for arm64 was not used in the binary). This was with 1.10.4 though. I do not remember if I was using 1.10.2 or 1.10.3 a month ago.
I upgraded to 1.11 today from 1.10. I did a clean reset, then upgraded to 1.11 on all machines, then did a fresh kubeadm init. I had to include --ignore-preflight-errors=all on the master when doing the init, and on the nodes with the join command.
I was running into an issue where kubeadm init was failing for me using the image bionic-containers-rock64-0.6.44-239-arm64.img. I was able to get around it by running apt-get purge kubelet kubectl kubeadm; apt-get install kubelet=1.9.8-00 kubeadm=1.9.8-00 kubectl=1.9.8-00. If others are running into this issue, it may be good to change which version of kubernetes is being installed out of the box.
Edit: My master node setup in case it helps anyone else:
sudo apt-get install -y golang
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
sudo mv go/bin/crictl /usr/bin
Downgrade to 1.9.8-00:
sudo apt-get purge kubectl kubeadm kubelet
sudo apt-get install kubectl=1.9.8-00 kubeadm=1.9.8-00 kubelet=1.9.8-00
sudo systemctl enable kubelet.service
sudo kubeadm init
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-arm64.tar.gz
tar -xzvf helm-v2.9.1-linux-arm64.tar.gz
sudo mv linux-arm64/helm /usr/bin/ && rm -rf linux-arm64/
kubectl create -f k8s/rbac-config.yml
helm init --service-account tiller --tiller-image=jessestuart/tiller:v2.9.1
I also tried moving Docker back to the last verified version for kubernetes (docker-ce 17.03) when trying to get 1.10 to work, to no avail.