Could be that something upstream has changed. I'm going to try to upgrade to Kubernetes 1.8 soon, so will verify this, too.
I just encountered the same problem. It seems to be related to a wrong systemd config. In roles/kubernetes/tasks/docker.yml, the line:
template: src=docker-1.12.service dest=/etc/systemd/system/multi-user.target.wants/docker.service
needs to be
template: src=docker-1.12.service dest=/etc/systemd/system/docker.service
@kerko cool, thanks for the pointer ! weekend is raspi time, will fix it then.
In the meantime I have tested with Kubernetes v1.8.0 and Docker 17.05.0-ce.
You have to update the iptables rules for cni0, and then it works:
$ sudo iptables -A FORWARD -i cni0 -j ACCEPT
$ sudo iptables -A FORWARD -o cni0 -j ACCEPT
BUT: I'm now hitting an issue where the server doesn't store the JWS key. So after 24 hours (setting the TTL to 0 doesn't help) you lose the ability to join. When a worker node reboots, it's lost. When the master reboots, everything is gone. Log message: "there is no JWS signed token in the cluster-info ConfigMap"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May 4 22:30:54 2017
 OS/Arch:      linux/arm

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May 4 22:30:54 2017
 OS/Arch:      linux/arm
 Experimental: false
I'm about to update to 1.8.0 and just checked the minimal Docker version to use:
From https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependencies:
Continuous integration builds use Docker versions 1.11.2, 1.12.6, 1.13.1, and 17.03.2. These versions were validated on Kubernetes 1.8.
So I will go with 17.03.2 if available for Hypriot and use that version for the next update.
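In case the playbook doesn't pin it already, downgrading and pinning via apt would look roughly like this (the package name and version string are assumptions and depend on the repository the Hypriot image ships; it might also be packaged as docker-engine):
$ apt-cache madison docker-ce                 # list the versions the repo offers
$ sudo apt-get install -y docker-ce=17.03.2~ce-0~raspbian-jessie    # hypothetical version string, pick one from the madison output
$ sudo apt-mark hold docker-ce                # keep unattended upgrades from bumping it again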
Super, I'm looking forward to your results. I really want to get it working.
I'm just about to push, but it turns out that regardless of what I do, the token has an expiry of 24h. I opened a Kubernetes issue here --> https://github.com/kubernetes/kubernetes/issues/53637.
However, when we create a token after the bootstrap with kubeadm token create --ttl 0, it creates a proper token (check with kubeadm token list), so I'm going to use this one for joining the cluster.
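For clarity, the sequence I mean is roughly this (the master address and discovery hash are placeholders, and the exact join flags depend on the kubeadm version):
$ kubeadm token create --ttl 0          # prints a token that never expires
$ kubeadm token list                    # verify the token and its TTL
$ kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>   # run on each worker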
@peykens @kerko I updated the playbooks, and also the base system to Hypriot 1.6.0. If you have the chance, I'd recommend starting from scratch (I just did it twice, took me ~15 mins each).
The problem with the expiring tokens should be fixed, but I will only know for sure tomorrow ;-)
Please let me know whether this update works for you.
Hi,
I'm missing a file: "Could not find or access 'docker.service'" in TASK [kubernetes : Update docker service startup]
The file was formerly docker-1.12.service. The task:
Sorry, I forgot to check it in (renamed it to remove the version number). Should be back now...
Hi @rhuss ,
First of all, thx for your effort. I have tested the latest scripts and it's going well. I do have to report some issues:
Thanks for the feedback. Tbh, I use Weave at the moment (that's what I tested) and don't run into the issues you mentioned. I guess flannel integration needs some love again (however, I'm happy that one network implementation works smoothly).
I haven't tested a proper reboot yet, but will do asap. Looks like there is still an issue with the downgrade.
I just found out that it took 12 minutes, but Docker came up on the node. For sure not the proper solution; really curious what it is:
Ah, got it. Two service files, and I copied it to the wrong location. Let me fix this.
Feel free to kick in for the flannel fix, happy about any PR ;-)
Hold on, there are still issues wrt restarts. I think it's much better to write an /etc/docker/daemon.json instead of overwriting the service file. I will try that as soon as I'm back from travelling.
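Something along these lines is what I have in mind; the concrete settings below are only placeholders to illustrate moving the daemon options out of the unit file:
$ cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m" }
}
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker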
Took a bit, but there was an issue with Hypriot 1.6.0, too. Just added a fix for this, so it should work now.
@peykens any chance that you can test the updated playbooks ?
I will flash all my Pi's with hypriot image 1.6.0 again and start from scratch
Hi @rhuss ,
I flashed my 4 Pi's and started from scratch. I just skipped the network overlay since I use flannel. The scripts are running fine, and everything is installed. Afterwards I can deploy a new service and reach all nodes. Reboot on both the master node and a worker node is working fine!
So thx a lot for the work.
Unfortunately, the ingress controller is no longer working on these versions :-( The pods are not created anymore:
$ kubectl get deployments -n kube-system
NAME                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns                     1         1         1            0           1h
traefik-ingress-controller   3         0         0            0           2m
Do you also use an ingress controller ?
In the meantime I dropped flannel and restarted with Weave. The install is fine, but the ingress controller hits the same problem: the pod is not created.
OK, finally got it working. I needed to create the ServiceAccount, ClusterRole and ClusterRoleBinding. See also: https://github.com/hypriot/blog/issues/51
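For reference, the objects I mean look roughly like this (adapted from the usual Traefik RBAC example; the names and exact rule list are an approximation, not necessarily what the linked issue uses):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
EOF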
SUPER, now let's wait and see if it keeps on working after 24h (initial token expiry).
Next step, the Kubernetes dashboard. If you have any working links to that, it would be great.
@peykens Traefik integration is pending (I have some initial POC). For the dashboard, including Heapster and InfluxDB, you can just call:
ansible-playbook -i hosts management.yml
You then have a kubernetes-dashboard service which you can expose either via ingress or via NodePort for testing.
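For a quick test without an ingress, something like this should do (the assigned port is random unless you set it explicitly):
$ kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
$ kubectl -n kube-system get svc kubernetes-dashboard    # note the allocated NodePort
The dashboard is then reachable on http://<any-node-ip>:<node-port>/.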
OK, missed that one. The dashboard is running fine, and I can access it from every node over the service. However, my ingress is not working yet: 404 Not Found. My other services work fine over ingress. Probably something with the namespace, I guess.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
ssh tunnel works fine to get access to the dashboard.
Yeah, kubernetes-dashboard is running in the namespace kube-system (like the other infra services). I suggest that you install the ingress object into this namespace, too.
SUPER, now let's wait and see if it keeps on working after 24h (initial token expiry).
This works for sure, as I created an extra token which never expires (you can check with kubeadm token list). See also the upstream issue I opened for why the TTL couldn't initially be provided --> https://github.com/kubernetes/kubernetes/issues/53637
I created the ingress in namespace kube-system, but it doesn't help.
$ kubectl describe ing/kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
  Host  Path        Backends
  ----  ----        --------
  *
        /dashboard  kubernetes-dashboard:80 (10.44.0.2:9090)
Annotations:
Events: <none>
Last guess: replace /dashboard with /; at least for me it only worked when using the root context.
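Putting both hints together, the ingress I would try looks roughly like the object you posted, just moved to kube-system and with the root path:
$ cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
EOF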
Otherwise, I will continue on the traefik ingress controller soon (and also rook as distributed fs), and will adapt the dashboard accordingly.
I had another app running behind /, therefore I used another path. I now switched both, and guess what... the dashboard is working fine but the other one is no longer accessible :-) It was working before, so I'll try to figure it out.
Thx a lot for your help. Don't know how to thank you. I'm new to k8s and ansible, so this project has helped me a lot. I will definitely use it to introduce my colleagues to k8s.
you are welcome ;-). I'm going to close this issue now, feel free to open a new one if you hit some other issues.
Per https://github.com/kubernetes/kubeadm/issues/335#issuecomment-352521912, kubeadm init --token-ttl 0 resolves the "there is no JWS signed token in the cluster-info ConfigMap" error for me.
Hi,
I'm trying to run these great Ansible scripts, but the Docker downgrade always fails.
RUNNING HANDLER [kubernetes : restart docker] ***
fatal: [192.168.1.200]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service docker: Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.\n"}
Is anyone else also facing this issue?
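The error itself only points at the Docker logs, so the first thing I'm checking on the failing node is:
$ systemctl status docker.service
$ journalctl -u docker.service -n 50 --no-pager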