luxas / kubernetes-on-arm

Kubernetes ported to ARM boards like Raspberry Pi.
MIT License

Will it be convenient to upgrade to the next release? #133

Closed Slahser closed 8 years ago

Slahser commented 8 years ago

Hey, I saw the roadmap here. Thanks for your work.

I want to know: will it be convenient to upgrade to the next release?

Building a Pi cluster is a little complex for me, because I'm not skilled with Ansible etc.

luxas commented 8 years ago

I'm building the next release on v1.4 right now, but probably not: you'll have to reflash your SD cards when upgrading from v0.8.0 to v0.9.0.

Hopefully v0.9.0 => v0.9.2 will go smoothly, but I can't promise :)

Slahser commented 8 years ago

Ha, I'm glad to get a reply so fast. Got it, thank you.

mitchhh22 commented 7 years ago

Any timeline for when the 1.4 release will be out?

luxas commented 7 years ago

kubeadm will be the "supported" way. See https://github.com/kubernetes/kubernetes.github.io/pull/1420 for more info (it will soon be merged into the website).

mitchhh22 commented 7 years ago

Will kubeadm work on the latest Raspbian Jessie?

luxas commented 7 years ago

Yes, but you need to set cgroup_enable=memory and cgroup_enable=cpuset in /boot/cmdline.txt.
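
For example, on a stock Raspbian image (assuming the default layout, where /boot/cmdline.txt must stay a single line), appending the flags and rebooting could look like:

sudo sed -i '1 s/$/ cgroup_enable=memory cgroup_enable=cpuset/' /boot/cmdline.txt
sudo reboot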

mitchhh22 commented 7 years ago

@luxas I edited cmdline.txt and followed the kubeadm instructions here, but the install just hangs on waiting for the control plane to become ready:

root@raspberrypi:/home/pi# kubeadm init
<master/tokens> generated token: "1dd21f.e3914d068bfabe47"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
error: <util/kubeconfig> failed to create "/etc/kubernetes/kubelet.conf", it already exists [open /etc/kubernetes/kubelet.conf: file exists]
root@raspberrypi:/home/pi# rm -r /etc/kubernetes/
root@raspberrypi:/home/pi# kubeadm init
<master/tokens> generated token: "d7d84f.efacd682ee671222"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

luxas commented 7 years ago

Did you use v1.4.1 as pointed out? See https://deploy-preview-1420--kubernetes-io-vnext-staging.netlify.com/docs/getting-started-guides/kubeadm/ for instructions.

mitchhh22 commented 7 years ago

Even with v1.4.1 on the latest Raspbian, kubeadm just hangs:

root@raspberrypi:/home/pi# sudo kubeadm init --use-kubernetes-version=v1.4.1
Running pre-flight checks
<master/tokens> generated token: "3c2f05.464d02ae967a95da"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-13 19:36:50 +0000 UTC Not After: 2026-11-11 19:36:50 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-13 19:36:50 +0000 UTC Not After: 2017-11-13 19:37:12 +0000 UTC
Alternate Names: [192.168.86.101 10.96.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

luxas commented 7 years ago

Have you set cgroup_enable=cpuset in /boot/cmdline.txt and rebooted? That will do it.
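
After the reboot, a quick sanity check is to confirm the kernel picked the flags up; the cpuset and memory lines in /proc/cgroups should show 1 in the enabled column:

grep -E 'cpuset|memory' /proc/cgroups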

nsteinmetz commented 7 years ago

@mitchhh22:

It worked with v1.4.5 on HypriotOS, which has the correct kernel options; see https://github.com/luxas/kubernetes-on-arm/issues/140#issuecomment-257164213

mitchhh22 commented 7 years ago

@luxas Setting cgroup_enable=cpuset fixed the issue. Thank you!

mitchhh22 commented 7 years ago

I ran:

curl -sSL https://raw.githubusercontent.com/luxas/flannel/update-daemonset/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
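
(For context: the sed swaps the amd64 image references in the manifest for arm so the images can run on the Pi. A quick way to confirm the DaemonSet was created is kubectl get ds --namespace=kube-system.)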

But looking at kube-system I see:

root@raspberrypi:/home/pi# kubectl get po --namespace=kube-system
NAME                                  READY     STATUS              RESTARTS   AGE
dummy-2501624643-m60i8                1/1       Running             1          30m
etcd-raspberrypi                      1/1       Running             1          29m
kube-apiserver-raspberrypi            1/1       Running             1          29m
kube-controller-manager-raspberrypi   1/1       Running             1          30m
kube-discovery-2202902116-wtije       1/1       Running             0          2m
kube-dns-2334855451-0hhqb             0/3       ContainerCreating   0          5m
kube-flannel-ds-x3ohq                 2/2       Running             3          14m
kube-proxy-zzljp                      1/1       Running             1          29m
kube-scheduler-raspberrypi            1/1       Running             1          30m
  info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
root@raspberrypi:/home/pi# kubectl describe po kube-dns-2334855451-0hhqb --namespace=kube-system
Name:       kube-dns-2334855451-0hhqb
Namespace:  kube-system
Node:       raspberrypi/192.168.86.101
Start Time: Sun, 13 Nov 2016 21:18:23 +0000
Labels:     component=kube-dns
        k8s-app=kube-dns
        kubernetes.io/cluster-service=true
        name=kube-dns
        pod-template-hash=2334855451
        tier=node
Status:     Pending
IP:
Controllers:    ReplicaSet/kube-dns-2334855451
Containers:
  kube-dns:
    Container ID:
    Image:      gcr.io/google_containers/kubedns-arm:1.7
    Image ID:
    Ports:      10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local
      --dns-port=10053
    Limits:
      cpu:  100m
      memory:   170Mi
    Requests:
      cpu:      100m
      memory:       170Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=1
    Readiness:      http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-oqmh3 (ro)
    Environment Variables:  <none>
  dnsmasq:
    Container ID:
    Image:      gcr.io/google_containers/kube-dnsmasq-arm:1.3
    Image ID:
    Ports:      53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    Limits:
      cpu:  100m
      memory:   170Mi
    Requests:
      cpu:      100m
      memory:       170Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-oqmh3 (ro)
    Environment Variables:  <none>
  healthz:
    Container ID:
    Image:      gcr.io/google_containers/exechealthz-arm:1.1
    Image ID:
    Port:       8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:53 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    Limits:
      cpu:  10m
      memory:   50Mi
    Requests:
      cpu:      10m
      memory:       50Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-oqmh3 (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True
  Ready     False
  PodScheduled  True
Volumes:
  default-token-oqmh3:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-oqmh3
QoS Class:  Guaranteed
Tolerations:    dedicated=master:NoSchedule
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  5m        5m      1   {default-scheduler }            Normal      Scheduled   Successfully assigned kube-dns-2334855451-0hhqb to raspberrypi
  5m        3m      60  {kubelet raspberrypi}           Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2334855451-0hhqb_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2334855451-0hhqb_kube-system(b56411dd-a9e6-11e6-a1b1-b827ebf7cf15)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

  2m    0s  55  {kubelet raspberrypi}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2334855451-0hhqb_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2334855451-0hhqb_kube-system(b56411dd-a9e6-11e6-a1b1-b827ebf7cf15)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
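
The failing step is the CNI plugin trying to read /run/flannel/subnet.env, which the flannel container writes only once it has started and obtained a subnet lease. On a healthy node the file looks roughly like this (values are illustrative, from a default flannel configuration):

cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true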

Any ideas?

WakeupTsai commented 7 years ago

@mitchhh22 I have the same problem, did you solve it yet? Thanks!