hypriot / image-builder-rpi

SD card image for Raspberry Pi with Docker: HypriotOS
http://blog.hypriot.com/post/how-to-get-docker-working-on-your-favourite-arm-board-with-hypriotos/
MIT License

Kubernetes ready image #208

Closed Tombar closed 5 years ago

Tombar commented 6 years ago

Ahoy Pirates!

I'm currently building a Kubernetes-ready RPi image based on this repo, and was wondering whether there is interest in merging/supporting it as part of Hypriot overall, since I would need to create a new branch and tidy things up in order to submit a PR.

At the moment this branch has most of the changes needed in chroot-script.sh: https://github.com/Tombar/image-builder-rpi-k8s/tree/k8s-support

I also needed to create a larger raw image of 2000 MB, since the Kubernetes packages weigh around 350 MB.

My use case is providing an easy-to-use workflow where users flash the master node image with a cloud-init script that runs kubeadm init, grab the kubeadm join token, and then flash the node images with a cloud-init runcmd that joins the cluster.
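Concretely, the worker side of that flow might look roughly like this in a node's user-data (a sketch only; the hostname, master address, and token are placeholders, not values from the image):

```yaml
#cloud-config
hostname: worker1
manage_etc_hosts: true

# Runs once, on first boot only
runcmd:
  # Placeholders: <token> comes from `kubeadm token list` on the master,
  # 192.168.1.10 stands in for the master's address
  - [ kubeadm, join, --token, "<token>", "192.168.1.10:6443" ]
```

Note that newer kubeadm versions also require --discovery-token-ca-cert-hash (or an explicit skip flag) on join.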

So far, everything is working for me :)

ulm0 commented 6 years ago

This is a really nice idea, yet imho Hypriot itself should stay as lightweight as possible. This might be a good start for a k8s-ready flavor of Hypriot, which could have official support from the Hypriot team and the community.

StefanScherer commented 6 years ago

Ahoy @Tombar This looks very interesting! So this reduces the complexity of setting up a one-node Kubernetes cluster to "flash and boot" (with the right user-data yml, of course)? I've heard that setting things up takes a couple of minutes; is booting this SD card still fast? That would be awesome.

Yes, as @klud1 mentioned, the Hypriot image should be small, but I could imagine providing a second "distro" SD image with Kubernetes.

Tombar commented 6 years ago

Hello, you can try it out using my current k8s ready image https://github.com/Tombar/image-builder-rpi-k8s/releases/tag/v1.7.1-k8s-ALPHA

Regarding boot times, the cloud-init run with the initial kubeadm init takes ~645 seconds to complete; after that you can either remove the master taint to have a one-node cluster, or join other nodes to it.

@StefanScherer I was thinking the same: adding support for a k8s-ready image to Hypriot. If there is interest, I'm willing to commit some time to push this forward.

ulm0 commented 6 years ago

@StefanScherer @Tombar should this upcoming k8s-ready distro have the Docker version recommended by k8s (1.12.x-17.03.x) or the latest Docker version available?

Tombar commented 6 years ago

So far, for my initial use case, I based my k8s image on the Docker version already distributed by Hypriot (Docker version 17.10.0-ce), but yes, we can bundle a specific version if needed/wanted.

Please check the main changes here: https://github.com/Tombar/image-builder-rpi-k8s/blob/k8s-support/builder/chroot-script.sh#L149-L159

I'm also bundling the flannel CNI, for example, which, if we are doing a Hypriot image, might be worth discussing a little: do we want to favor one option?

ulm0 commented 6 years ago

That'd be good in order to have proper support/compatibility with k8s, imo.

About the networking layer: could this be specified in the cloud-init as well, so people can choose whichever they want to use? We could also list in the README which networking layers for k8s are ARM-ready (e.g. flannel, Weave, and so on).

ulm0 commented 6 years ago
# config is not there yet, but this sets it up for root
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc

Shall this configuration be available for the pirate user too, or only for root? I mean, docker commands can be run by the pirate user without further tweaking; the same approach could be used for the k8s components.
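For a regular user, the usual kubeadm approach is to copy admin.conf into the user's ~/.kube/config rather than exporting KUBECONFIG. A hedged sketch (the function name and paths are illustrative, not something the image does today; parameters are injectable so it can be exercised against temporary paths):

```shell
# Sketch: make kubectl usable for a non-root user by copying kubeadm's
# admin.conf into that user's ~/.kube/config.
setup_kubeconfig() {
  src="$1"       # e.g. /etc/kubernetes/admin.conf
  home_dir="$2"  # e.g. /home/pirate
  mkdir -p "$home_dir/.kube"
  cp "$src" "$home_dir/.kube/config"
  chmod 600 "$home_dir/.kube/config"
}

# On a real node (run as root, then hand ownership to the pirate user):
# setup_kubeconfig /etc/kubernetes/admin.conf /home/pirate
# chown -R pirate:pirate /home/pirate/.kube
```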

Tombar commented 6 years ago

Yes, we can try and support different CNI's through cloud-init settings.

Tombar commented 6 years ago

@klud1 that is just a quick hack needed to bootstrap the flannel networking setup.

But again, that's something we can discuss. Happy to follow any Hypriot conventions you have :)

ulm0 commented 6 years ago

Absolutely! Just throwing some thoughts so this new flavor can offer a great ux, really great job. Didn't mean to sound rude, sorry for any misunderstanding.

Tombar commented 6 years ago

so, to recap:

In order to move this forward, we need a larger official raw image. Can someone send a PR to hypriot/image-builder-raw and build an extra one? SD_CARD_SIZE=2000 works for me.

Regarding development of this branch: do you want me to send a PR, or is this something there is interest in working on collectively?

StefanScherer commented 6 years ago

A good recap!

Just some ideas that I want to share now; I don't know when I can help with them:

Tombar commented 6 years ago

@StefanScherer that is my main concern: I can work on the k8s scripts, image, and testing, but working on the overall repo setup to support different flavors/a build matrix is something I would rather not do :(

StefanScherer commented 6 years ago

@Tombar Wow, 640 seconds, that's a lot. Do you know what the biggest time-consuming parts are? E.g. pulling images? Can some parts be done during the SD image build so we can improve that first-boot experience?

In the Hypriot Cluster Lab we also had exported images baked onto the SD card image, but on first boot these images had to be imported into Docker again, which also took time. Just another crazy idea, which I haven't tried yet:

I'm thinking of pre-pulling Docker images, but then Docker must be running during the Travis build. With the tests in #200 I'm sure baking a real image into the SD card could work. If someone knows how much time we could shave off this long first boot, perhaps it's worth investigating.
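One way the baking idea could look: `docker save` each image into a directory on the SD card during the build, and on first boot import whatever is found there. A hedged sketch (the directory layout and function are assumptions, not the repo's actual setup; the loader command is injectable so the logic can be tested without a Docker daemon):

```shell
# Sketch: at first boot, import every image tarball baked into the SD card.
load_baked_images() {
  dir="$1"
  loader="${2:-docker load -i}"   # injectable for testing
  count=0
  for tarball in "$dir"/*.tar; do
    [ -e "$tarball" ] || continue
    $loader "$tarball" || return 1
    count=$((count + 1))
  done
  echo "$count images imported"
}

# On a real node: load_baked_images /var/local/baked-images
```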

StefanScherer commented 6 years ago

@Tombar I understand, I'll discuss with the other pirates what would be a good way. But I don't see a real blocker to eventually merging this into this repo. As Travis does not provide build artifacts, we could start with your fork, as you've set up a forked pipeline already. Once we have an idea of either using a matrix build or e.g. just turning on CircleCI as a second build agent (which could also store artifacts), you can start a WIP PR here. WDYT?

Tombar commented 6 years ago

@StefanScherer according to what I can see in the cloud-init logs, most of the time was spent at Running command ['/var/lib/cloud/instance/scripts/runcmd'] which is basically the kubeadm init command.

Unfortunately, the kubeadm init output doesn't have any timestamps to tell where the time is spent, but I can run it manually and monitor it to get a better understanding in the upcoming days.
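One low-tech way to see where the time goes, since kubeadm itself doesn't timestamp its output, is to prefix every output line with the wall clock. A sketch:

```shell
# Prefix each line of a command's output with a timestamp, so slow steps
# stand out in the log. Usage (sketch): kubeadm init ... 2>&1 | with_timestamps
with_timestamps() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date +%H:%M:%S)" "$line"
  done
}
```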

robertpeteuil commented 6 years ago

This is awesome news!

Others and I have been using HypriotOS as the base for K8s clusters on Pis. Most folks I know have been doing the prep, install, and k8s spin-up via Ansible. It sounds like a lot of those tasks could be incorporated into a combination of this new k8s-ready variant + cloud-init.

Let me weigh in on a couple of questions posed earlier in this thread, regarding the k8s-specific image:

Docker Version

1st Boot Time

machine-id uniqueness - Update: struck through, as this is now done at first boot

Tombar commented 6 years ago

@robertpeteuil agreed on using the stable Docker version :) It's going to be my first change to https://github.com/Tombar/image-builder-rpi-k8s/tree/k8s-support

In the meantime, if you want to try a Kubernetes-ready image, you can try https://github.com/Tombar/image-builder-rpi-k8s/releases/tag/v1.7.1-k8s-ALPHA

I just flash it with the Hypriot flash tool and a user-data file that has all the normal boilerplate plus:

# These commands will be run once on first boot only
runcmd:
  # Pickup the hostname changes
  - [ systemctl, restart, avahi-daemon ]

  # Pickup the daemon.json changes
  - [ systemctl, restart, docker ]

  - [ kubeadm, init, --pod-network-cidr, 10.244.0.0/16 ]

  - 'export KUBECONFIG=/etc/kubernetes/admin.conf'

  - [ kubectl, create, -f, /root/kube-flannel.yml ]

And please let us know about your experience and findings!

robertpeteuil commented 6 years ago

@Tombar Sure thing, I’ll give it a go & post my findings. It might take a few weeks due to the holidays & travels.

Here’s some additional detail why I need a non-autostart option (other than for testing):

I’ll post back as soon as I can.

ulm0 commented 6 years ago

About the stable Docker version for k8s: the Docker repo bundled with the current Hypriot image is https://download.docker.com, which only has 17.09.x and later versions available.

I made this script (https://gitlab.com/klud/k8s-arm) a while ago, it uses the old repo (https://apt.dockerproject.org) for installing the recommended docker version as well as the latest k8s available for arm.

One more thing: as of k8s 1.9 it needs crictl (https://github.com/kubernetes-incubator/cri-tools). It is written in Go, but the user needs to build it. Should this be shipped within the k8s-ready image as well?

Tombar commented 6 years ago

Hello, just a friendly notice that I will be resuming my work on this in about 2 weeks

robertpeteuil commented 6 years ago

@Tombar - Thanks for the reminder. I'm unsure how much feedback I'll be able to provide in the near term due to customer projects/travel - but I'll post what I can.

To answer the question about crictl and k8s v1.9:

ulm0 commented 6 years ago

Exactly, locking the distro to K8s v1.8.x and Docker v17.03.x is better.

chriskinsman commented 6 years ago

I have run into out-of-memory issues using kubeadm on the Pi with k8s 1.9, and have moved back to 1.8 because of this.

I have been fighting trying to get k8s running under hypriot 1.7.1 for a few days and striking out. Going to try this image and see if I can get past the sticking point.

chriskinsman commented 6 years ago

This image is awesome. Finally got me unstuck after 3 days of not being able to get k8s up on hypriot. Thanks a bunch for building this...

StefanScherer commented 6 years ago

@chriskinsman Can you share what the problem was with K8S on Hypriot in the first place?

chriskinsman commented 6 years ago

Only know the symptoms. Never able to identify root cause.

First tried following: https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/

kubeadm init would finish and I would get a token.

The blog post above used an older version of flannel, so I found this blog post that is an update of yours: http://www.ecliptik.com/Raspberry-Pi-Kubernetes-Cluster/. I would then install flannel using:

curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml | sed "s/amd64/arm64/g" | kubectl create -f -

I would never see the flannel pods get scheduled, the kube-dns service would never come online, and a kubectl describe node would show NotReady for the node, with an error about an unconfigured CNI.

Tried a ton of variations on this including using the latest flannel, etc.
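(In hindsight, one likely culprit worth hedging: HypriotOS on a Pi 3 is 32-bit armhf, so the flannel manifest needs the "arm" images rather than "arm64" - arm64 binaries cannot exec on a 32-bit kernel. The substitution would then be, with an illustrative image tag:)

```shell
# Rewrite the amd64 image references in kube-flannel.yml for armhf, i.e.:
#   curl -sSL .../kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
to_armhf() { sed "s/amd64/arm/g"; }

echo "image: quay.io/coreos/flannel:v0.9.1-amd64" | to_armhf
# → image: quay.io/coreos/flannel:v0.9.1-arm
```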

I would also occasionally get in the console what looked like a crash dump, for kube-proxy if I remember correctly.

After using the above image I noticed the output from kubeadm init looks very different than what I was seeing using your blog instructions. Not sure if related.

I was using hypriot 1.7.1 with this cloud-init:

#cloud-config
# vim: syntax=yaml
#
hostname: base
manage_etc_hosts: true

resize_rootfs: true
growpart:
  mode: auto
  devices: ["/"]
  ignore_growroot_disabled: false

users:

package_update: true
package_upgrade: false
package_reboot_if_required: true
packages:

locale: "en_US.UTF-8"
timezone: "America/Los_Angeles"

write_files:

runcmd:

I locked to 1.8.8 because I found that 1.9 would hang in kubeadm init. Reading some items on the Kubernetes issue list, it looks like the 1 GB of memory in the Pi3 is slim for 1.9. They recommend 2 GB and implied the master node might be running out of memory.

Thanks!

chriskinsman commented 6 years ago

@Tombar I am hitting:

standard_init_linux.go:195: exec user process caused "exec format error"

whenever I try to start an image using this setup. I have tried both aarch64/httpd and arm64v8 as a test. It feels like it doesn't like the architecture when starting the container, but I believe these are the right arch types.

Thanks!

chriskinsman commented 6 years ago

Looks like the issue was that I needed to use arm32v7 images with this architecture.

guidoffm commented 6 years ago

Is there anybody out there who successfully installed the most recent k8s on Hypriot?

chriskinsman commented 6 years ago

I got 1.8.5 running; 1.9 wouldn't start. It looked to be an issue with memory on the master node. Starting with 1.9 they recommend a minimum of 2 GB of memory, which the Pi3 obviously doesn't have...

jeefberkey commented 6 years ago

I have run k8s 1.9.3, 1.9.4, and 1.10 successfully on my small 3 node cluster, installed with kubeadm. It's a little unstable due to the memory reqs but it works eventually.

chriskinsman commented 6 years ago

Did you start with this image or the base Hypriot image? If the base Hypriot image, I'd love to know your steps, etc.

jeefberkey commented 6 years ago

Using the stock image. I have a basic (and definitely professional) playbook here that does it:

I'm pretty sure it all works, I'm not good at k8s yet.

My cloud init is pretty similar to yours: https://github.com/jeefberkey/pi-images/blob/master/provisioning/templates/cloud-init.yaml.erb

jmreicha commented 6 years ago

@chriskinsman have you been able to get 1.9.x or 1.10.x working yet? If not, would this installation method work on a non-Pi ARM board that has more memory?

ulm0 commented 6 years ago

Starting with 1.9 they are recommending a minimum of 2GB memory which the pi3 obviously doesn't have...

@chriskinsman any official source for this? please!

I got 1.8.5 running. 1.9 wouldn't start. Looked to be an issue with memory on master node

Is this only for master nodes? Can I have a 4 GiB RAM amd64 master node and RPi3 workers then?

michael-robbins commented 6 years ago

I am able to get a 5-node RPi cluster (3B+ and 3B) running with Raspbian Lite, k8s 1.10.1, Docker 18.03, and the Weave CNI installed, with no problems! (https://github.com/michael-robbins/rpi-k8s-ansible)

silvacraig commented 6 years ago

FWIW - I had a 3 x Pi 3 cluster and then got a Pi 3 B+. I installed Hypriot 1.8 onto the Pi 3 B+ and then tried to get kubeadm and kubelet installed; this is the result:

[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.9.80-hypriotos-v7+/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.9.80-hypriotos-v7+\n", err - exit status 1

I checked in /lib/modules and there is no 4.9.80-hypriotos-v7+/ directory.

I subsequently tried Hypriot 1.9 and the problem went away - I installed Hypriot 1.9 on the Pi 3s and then installed Weave Net. It was looking good - I joined the nodes, and then everything crashed. From what I see above, this could be memory related. Waaah - will be following this thread with more attention to find the right recipe.

hoshsadiq commented 6 years ago

I'm trying to get master setup using cloud-init. I have the following apt config (note some of the things have been removed because they're redundant):

apt_preserve_sources_list: true
apt_update: true
apt_upgrade: true
package_upgrade: true
packages:
# todo apt-mark hold docker-ce
# todo apt-mark hold kube*
# todo upgrade kubernetes
#  - [dirmngr]

  - [nfs-common]
  - [apt-transport-https]
  - [ca-certificates]
#  - [nfs-kernel-server]

  - [br_netfilter]
  - [kubeadm, 1.10.2-00]
  - [kubelet, 1.10.2-00]
  - [kubectl, 1.10.2-00]
  - [docker-ce, "17.03.2~ce-0~ubuntu-xenial"]

apt:
  sources_list: |
    deb http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi
  conf: |
    APT {
      Get {
        Assume-Yes "true";
        Fix-Broken "true";
      };
    };
  sources:
    docker.list:
      source: "deb [arch=armhf] https://download.docker.com/linux/ubuntu xenial stable"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----

        <clipped for brevity>
        -----END PGP PUBLIC KEY BLOCK-----
    kubernetes.list:
      source: "deb http://apt.kubernetes.io/ kubernetes-xenial main"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----

        <clipped for brevity>
        -----END PGP PUBLIC KEY BLOCK-----

and I'm getting the following errors:

Get:1 http://raspbian.raspberrypi.org/raspbian stretch InRelease [15.0 kB]
<clipped for brevity>
Get:8 http://archive.raspberrypi.org/debian stretch/main armhf Packages [159 kB]
Ign:7 https://packages.cloud.google.com/apt kubernetes-xenial Release
<clipped for brevity>
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Get:18 https://download.docker.com/linux/ubuntu xenial/stable armhf Packages [3,657 B]
Get:9 https://packagecloud.io/Hypriot/rpi/debian stretch InRelease [23.2 kB]
Ign:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Get:19 https://packagecloud.io/Hypriot/rpi/debian stretch/main armhf Packages [2,729 B]
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Ign:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Ign:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Ign:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Ign:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Ign:10 https://packages.cloud.google.com/apt kubernetes-xenial/main all Packages
Err:12 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages
  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Ign:15 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en_US
Ign:16 https://packages.cloud.google.com/apt kubernetes-xenial/main Translation-en
Fetched 12.1 MB in 19s (607 kB/s)
Reading package lists...
W: The repository 'http://apt.kubernetes.io kubernetes-xenial Release' does not have a Release file.
E: Failed to fetch https://packages.cloud.google.com/apt/dists/kubernetes-xenial/main/binary-armhf/Packages  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
E: Some index files failed to download. They have been ignored, or old ones used instead.
Cloud-init v. 0.7.9 running 'modules:config' at Sat, 28 Apr 2018 18:57:29 +0000. Up 41.04 seconds.

Any ideas what's happening? What's odd is that once cloud-init has finished and I ssh into the Pi and manually run apt-get update, it works fine:

 $ ssh -o ConnectTimeout=2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa_pi hosh@192.168.100.199
Warning: Permanently added '192.168.100.199' (ECDSA) to the list of known hosts.
bash: warning: setlocale: LC_ALL: cannot change locale (en_GB.UTF-8)

HypriotOS (Debian GNU/Linux 9)

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
bash: warning: setlocale: LC_ALL: cannot change locale (en_GB.UTF-8)

 $ sudo apt-get install kubeadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package kubeadm

 $ sudo apt-get update
Hit:1 http://raspbian.raspberrypi.org/raspbian stretch InRelease
Hit:2 http://archive.raspberrypi.org/debian stretch InRelease
Hit:4 https://download.docker.com/linux/raspbian stretch InRelease
Hit:5 https://download.docker.com/linux/ubuntu xenial InRelease
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Hit:6 https://packagecloud.io/Hypriot/rpi/debian stretch InRelease
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main armhf Packages [15.5 kB]
Fetched 24.5 kB in 4s (5671 B/s)
Reading package lists... Done

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  ebtables ethtool kubectl kubelet kubernetes-cni socat
The following NEW packages will be installed:
  ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 50.3 MB/50.8 MB of archives.
After this operation, 359 MB of additional disk space will be used.

Any ideas what might be the issue?

cosnicolaou commented 6 years ago

It's because /boot is too small; if you look at /var/log/cloud-init-output.log (or something like that) you'll see that's the problem. I just ran into this. Ideally the update/upgrade would run after / is mounted, but I'm not sure where in the boot process that happens, nor how to coordinate that with cloud-init.
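If a too-small partition is indeed the culprit, a first-boot script could at least fail loudly before the upgrade runs. A hedged sketch (the function and the 20 MB threshold are guesses for illustration, not measured values):

```shell
# Sketch: succeed only if the given mount point has at least needed_kb free,
# so a first-boot script can warn before apt fills a tiny partition.
has_free_kb() {
  mountpoint="$1"; needed_kb="$2"
  avail_kb=$(df -Pk "$mountpoint" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge "$needed_kb" ]
}

# e.g. before package_upgrade:
# has_free_kb /boot 20000 || echo "warning: /boot is nearly full"
```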

ulm0 commented 6 years ago

Back to this: I recently init'd a cluster with 1.11.3 and Docker 17.12.1 with no problems.

Specs:

chriskinsman commented 6 years ago

@ulm0 post your cloud-init?

ulm0 commented 6 years ago

@chriskinsman it's pretty simple

#cloud-config
# vim: syntax=yaml
#

# The current version of cloud-init in the Hypriot rpi-64 is 0.7.6
# When dealing with cloud-init, it is SUPER important to know the version
# I have wasted many hours creating servers to find out the module I was trying to use wasn't in the cloud-init version I had
# Documentation: http://cloudinit.readthedocs.io/en/0.7.9/index.html

# Set your hostname here, the manage_etc_hosts will update the hosts file entries as well
hostname: node1
manage_etc_hosts: true

# You could modify this for your own user information
users:
  - name: pirate
    gecos: "Hypriot Pirate"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: users,docker,video,input
    plain_text_passwd: hypriot
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: { expire: false }

# # Set the locale of the system
locale: "en_US.UTF-8"

# # Set the timezone
# # Value of 'timezone' must exist in /usr/share/zoneinfo
timezone: "America/Santiago"

# # Update apt packages on first boot
package_update: true
package_upgrade: false
# package_reboot_if_required: true

# # Install any additional apt packages you need here
packages:
 - ntp
 - nfs-common
 - vim

# WiFi connect to HotSpot
# - use `wpa_passphrase SSID PASSWORD` to encrypt the psk
write_files:
  - content: |
      allow-hotplug wlan0
      iface wlan0 inet dhcp
      wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
      iface default inet dhcp
    path: /etc/network/interfaces.d/wlan0
  - content: |
      country=cl
      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1
      network={
      ssid="[REDACTED]"
      psk="[REDACTED]"
      proto=RSN
      key_mgmt=WPA-PSK
      pairwise=CCMP
      auth_alg=OPEN
      }
    path: /etc/wpa_supplicant/wpa_supplicant.conf

# These commands will be run once on first boot only
runcmd:
  # Pickup the hostname changes
  - 'systemctl restart avahi-daemon'
  # Activate WiFi interface
  - 'ifup wlan0'

charles-d-burton commented 5 years ago

Using 1.9 causes a kernel panic of some kind when you try to install weave:

] Process weaver (pid: 1936, stack limit = 0x9dd1e210)

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.356111] Stack: (0x9dd1f9f0 to 0x9dd20000)

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.369676] f9e0: 00000000 00000000 c907a8c0 9dd1fa90

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.396039] fa00: 0000801a 0000ebd4 a6c30ed0 a6c30e98 80c85e00 9dd1fa20 80c7b140 ada0e380

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.422886] fa20: 80c7b140 00000000 9dd1fa64 00000000 00000000 9de88850 00000000 0000ebd4

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.449869] fa40: bc622000 9e227500 00002100 9de88800 00000050 0000801a 32669400 00000000

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.476457] fa60: 00000000 00000000 00008000 0000ee47 00000002 c907a8c0 00000000 00000000

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.508568] fa80: 00000000 00000000 00000000 00000000 6d0ca8c0 00000000 00000000 00000000

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.542714] faa0: 00000000 00000000 00000000 a6c30840 9e227000 00002000 9e227000 0000056e

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.570421] fac0: 9de88800 a7e48700 9dd1fb54 9dd1fad8 7f78132c 7f77fcf4 00000000 00000000

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.598421] fae0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.625782] fb00: 00000000 00000000 00000000 8067af3c 00000040 401d5809 00000040 a6c30840

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.653959] fb20: 00000000 9e227000 9dd1fb64 a6c30840 00000003 9e227000 9e227000 0000056e

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.682771] fb40: 00000000 a7e48700 9dd1fb9c 9dd1fb58 8067b4a4 7f780f34 9dd1fb9c 9dd1fb68

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.712495] fb60: 8067b0ec 9dd1fbb0 80c04e84 00000000 00000000 80b8c578 00000003 a6c30840

Message from syslogd@kube-node1 at Nov 22 00:23:25 ... kernel:[ 258.743185] fb80: 9e227000 a7e48700 00000008 a6c30840 9dd1fbf4 9dd1fba0 8067be44 8067b410

Message from syslogd@kube-node1 at Nov 22 00:23:25 ...

hoshsadiq commented 5 years ago

@charles-d-burton you're probably better off posting this to the weave devs rather than here.

charles-d-burton commented 5 years ago

I found the issue with the weave devs; it's apparently a known issue with this kernel version. A kernel update apparently fixes it, and I'm in the process of building it to test that.

StefanScherer commented 5 years ago

Closing due to inactivity