rancher / k3os

Purpose-built OS for Kubernetes, fully managed by Kubernetes.
https://k3os.io
Apache License 2.0

vagrant up fails #808

Closed: davidwalter0 closed this issue 2 years ago

davidwalter0 commented 2 years ago

Version (k3OS / kernel)

k3os --version
k3os version v0.20.7-k3s1r0

uname --kernel-release --kernel-version
5.4.0-73-generic #82 SMP Thu Jun 3 02:29:43 UTC 2021

uname -a
Linux k3os-20107 5.4.0-73-generic #82 SMP Thu Jun 3 02:29:43 UTC 2021 x86_64 GNU/Linux

Architecture x86_64

Describe the bug

Following the README at https://github.com/rancher/k3os/tree/master/package/packer/vagrant, the documented Vagrant commands do not boot a working, configured VM with the default config.

To Reproduce

packer build .
vagrant box add --provider virtualbox k3os k3os_virtualbox.box
vagrant plugin install vagrant-vboxmanage; vagrant plugin install vagrant-vbguest
vagrant up
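
For reference, a rough way to confirm the symptom from the host once the box is up (a sketch only; it assumes the VM boots far enough for vagrant ssh to work):

vagrant status                               # VM should report "running"
vagrant ssh -c "k3s --version"               # confirms the k3os/k3s build inside the guest
vagrant ssh -c "sudo k3s kubectl get nodes"  # a refused connection here reproduces the problem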

Expected behavior: a working k3s cluster in the VM.

Actual behavior: the k3s server is running, but k3s kubectl get all --all-namespaces blocks and then the connection to the API server on 127.0.0.1:6443 is refused:

k3os-20107 [~]$ k3s kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   58s
k3os-20107 [~]$ k3s kubectl get all --all-namespaces
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
k3os-20107 [~]$ k3s kubectl get all
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
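
For anyone triaging inside the VM, a rough sketch (assuming the k3os image's OpenRC and busybox userland; exact service names and log paths may differ):

sudo rc-status                   # is the k3s service listed as started or crashed?
ps | grep -i k3s                 # is the k3s server process actually alive?
sudo netstat -ltn | grep 6443    # is anything listening on the API server port?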

Additional context: x86_64 MacBook, macOS, VirtualBox 6.1.26, Vagrant 2.2.18.

vagrant up fails with the following output; the k3s server is running in the VM, but the control plane isn't functional:

[default] A Virtualbox Guest Additions installation was found but no tools to rebuild or start them.
The guest's platform ("linux") is currently not supported, will try generic Linux method...
Copy iso file /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Unmounting Virtualbox Guest Additions ISO from: /mnt
umount: /mnt: no mount point specified.
==> default: Checking for guest additions in VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

umount /mnt

Stdout from the command:
umount: /mnt: no mount point specified.
dweomer commented 2 years ago

VirtualBox guest additions will not install without some cleverness to make /lib/modules writable. I've pulled off something similar with a silly manifest that installs Docker in a pod, with the VirtualBox kernel modules installed via an init container:

# configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-config
  namespace: drone
  labels:
    app.kubernetes.io/name: drone-runner
    app.kubernetes.io/instance: drone
    app.kubernetes.io/component: runner
    app.kubernetes.io/part-of: drone
    app.kubernetes.io/managed-by: dweomer
data:
  install-virtualbox-modules.sh: |
    #!/usr/bin/env bash
    set -eux
    mount | grep -v containerd | grep -v docker | grep -v kube
    find /kernel/mod
    find /kernel/src
    mkdir -vp /kernel/{mod,src}/{w,u} /lib/modules
    mount -t overlay overlay -o lowerdir=/kernel/mod/l,workdir=/kernel/mod/w,upperdir=/kernel/mod/u /lib/modules
    mount -t overlay overlay -o lowerdir=/kernel/src/l,workdir=/kernel/src/w,upperdir=/kernel/src/u /usr/src
    DEBIAN_FRONTEND=noninteractive apt-get -y reinstall virtualbox-dkms
    /etc/init.d/virtualbox start
---
# deployment snippet
apiVersion: apps/v1
kind: Deployment
metadata:
  name: runner-docker-amd64
  namespace: drone
  labels:
    app.kubernetes.io/name: drone-runner-docker-linux-amd64
    app.kubernetes.io/instance: drone
    app.kubernetes.io/component: runner
    app.kubernetes.io/managed-by: dweomer
    app.kubernetes.io/part-of: drone
    drone.io/runner: docker
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: drone-runner-docker-linux-amd64
      app.kubernetes.io/instance: drone
      app.kubernetes.io/component: runner
  template:
    metadata:
      labels:
        app.kubernetes.io/name: drone-runner-docker-linux-amd64
        app.kubernetes.io/instance: drone
        app.kubernetes.io/component: runner
        drone.io/runner: docker
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchExpressions:
                  - {key: "drone.io/runner", operator: Exists}
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - {key: "drone.io/runner", operator: In, values: ["docker"]}
            - weight: 75
              preference:
                matchExpressions:
                  - {key: "node-role.kubernetes.io/control-plane", operator: DoesNotExist}
            - weight: 50
              preference:
                matchExpressions:
                  - {key: "drone.io/server", operator: DoesNotExist}
            - weight: 25
              preference:
                matchExpressions:
                  - {key: "drone.io/runner", operator: NotIn, values: ["kube"]}
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - {key: "drone.io/runner", operator: Exists}
                  - {key: "drone.io/runner", operator: NotIn, values: ["disabled"]}
                  - {key: "kubernetes.io/arch", operator: In, values: ["amd64"]}
      tolerations:
        - key: kubernetes.io/arch
          operator: Equal
          value: amd64
      serviceAccountName: drone-runner
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 300
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      dnsConfig:
        nameservers:
          - 8.8.8.8
          - 1.1.1.1
      initContainers:
        - name: virtualbox
          image: "dweomer/virtualbox:6.1.26"
          imagePullPolicy: Always
          command: [/kernel/install-virtualbox-modules.sh]
          securityContext:
            privileged: true
          terminationMessagePath: /run/drone/virtualbox/termination-log
          volumeMounts:
            - {name: runner-cfg, mountPath: /kernel/install-virtualbox-modules.sh, subPath: install-virtualbox-modules.sh}
            - {name: kernel-mod, mountPath: /kernel/mod}
            - {name: kernel-src, mountPath: /kernel/src}
            - {name: runner-lib, mountPath: /kernel/mod/l, subPath: modules, readOnly: true}
            - {name: runner-usr, mountPath: /kernel/src/l, subPath: src, readOnly: true}
            - {name: runner-dev, mountPath: /dev}
      containers:
        - name: docker
          image: "dweomer/docker:20.10-dind"
          imagePullPolicy: Always
          command: ["dind", "dockerd", "--host=unix://var/run/docker.sock", "--host=tcp://0.0.0.0:2376"]
          securityContext:
            privileged: true
          terminationMessagePath: /run/drone/docker/termination-log
          volumeMounts:
            - {name: docker-cfg, mountPath: /etc/docker/daemon.json, subPath: daemon.json}
            - {name: docker-tls, mountPath: /etc/docker/tls/server, readOnly: true}
            - {name: runner-dev, mountPath: /dev}
            - {name: runner-lib, mountPath: /lib/modules, subPath: modules, readOnly: true}
            - {name: runner-opt, mountPath: /opt}
            - {name: runner-run, mountPath: /run}
            - {name: runner-tmp, mountPath: /tmp}
            - {name: runner-usr, mountPath: /usr/src, subPath: src, readOnly: true}
            - {name: runner-var, mountPath: /var}
        - name: runner
          image: "drone/drone-runner-docker:1"
          imagePullPolicy: Always
          env:
            - {name: DRONE_RUNNER_CAPACITY, value: "1"}
            - {name: DRONE_RUNNER_NAME, valueFrom: {fieldRef: {fieldPath: "spec.nodeName"}}}
            - {name: DRONE_RUNNER_PRIVILEGED_IMAGES, value: "dweomer/drone-plugins-docker"}
          envFrom:
            - configMapRef:
                name: drone-runner
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          terminationMessagePath: /run/drone/runner/termination-log
          volumeMounts:
            - {name: runner-opt, mountPath: /opt}
            - {name: runner-run, mountPath: /run}
            - {name: runner-tmp, mountPath: /tmp}
            - {name: runner-var, mountPath: /var}
      volumes:
        - name: docker-cfg
          configMap: {name: docker-config, defaultMode: 0600, optional: true}
        - name: docker-tls
          secret: {secretName: docker-tls-server, defaultMode: 0600}
        - name: kernel-mod
          hostPath: {path: /kernel/mod, type: DirectoryOrCreate}
        - name: kernel-src
          hostPath: {path: /kernel/src, type: DirectoryOrCreate}
        - name: runner-cfg
          configMap: {name: runner-config, defaultMode: 0744}
        - name: runner-dev
          hostPath: {path: /dev, type: Directory}
        - name: runner-lib
          hostPath: {path: /lib, type: Directory}
        - name: runner-opt
          hostPath: {path: /opt, type: DirectoryOrCreate}
        - name: runner-run
          hostPath: {path: /run, type: Directory}
        - name: runner-tmp
          hostPath: {path: /tmp, type: Directory}
        - name: runner-usr
          hostPath: {path: /usr, type: Directory}
        - name: runner-var
          hostPath: {path: /var, type: Directory}

There is a lot of noise here, but the gist of it is the install-virtualbox-modules.sh script from the ConfigMap combined with some hostPath volume magic that gives the virtualbox container writable, persistent /lib/modules and /usr/src trees for the DKMS compilation. IIRC the guest additions installer attempts a similar compilation, or at the very least tries to copy modules into the /lib/modules tree, so this should give you something to try.
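
Distilled out of the pod context, the overlay trick is roughly this (a sketch only; the /kernel/* paths are arbitrary scratch directories, not something k3os provides):

# make the read-only module/source trees writable via an overlay
mkdir -p /kernel/mod/w /kernel/mod/u /kernel/src/w /kernel/src/u
mount -t overlay overlay -o lowerdir=/lib/modules,workdir=/kernel/mod/w,upperdir=/kernel/mod/u /lib/modules
mount -t overlay overlay -o lowerdir=/usr/src,workdir=/kernel/src/w,upperdir=/kernel/src/u /usr/src
# writes to /lib/modules and /usr/src now land in the upperdirs,
# so a DKMS build or the guest-additions installer can drop modules in place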


If none of the above makes much sense: I wouldn't expect the virtualbox guest additions to ever install cleanly on k3os :smile: Consider adding one or both of the following to your Vagrantfile:

config.vbguest.auto_update = false
config.vm.provider :virtualbox do |v|
  v.check_guest_additions = false
end
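
The first setting disables vagrant-vbguest's automatic guest-additions install/rebuild; the second stops the VirtualBox provider from checking the guest-additions version at boot. The change should be picked up on the next vagrant up or vagrant reload; recreating the box gives a clean slate, e.g.:

vagrant destroy -f
vagrant up    # should now skip the guest-additions rebuild and the failing umount /mnt step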