nubificus / urunc

a simple container runtime that aspires to become `runc` for unikernels
Apache License 2.0

K3s and urunc compatibility #50

Open DeftaSebastian opened 2 weeks ago

DeftaSebastian commented 2 weeks ago

I would like to ask if you have any experience with running urunc on k3s. I have tried installing urunc on a k3s node, but I have run into a couple of issues.

If I install urunc the way it is described in installation.md, urunc won't be a known runtime for the kubelet and pods get stuck on ContainerCreating.

So I tried doing the following:

I will now list the files that I modified and what they contain:

dm_create.sh:

#!/bin/bash

DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

mkdir -p /var/lib/rancher/k3s/agent/containerd/
mkdir -p ${DATA_DIR}

# Create data file
touch "${DATA_DIR}/data"
truncate -s 100G "${DATA_DIR}/data"

# Create metadata file
touch "${DATA_DIR}/meta"
truncate -s 10G "${DATA_DIR}/meta"

# Allocate loop devices
DATA_DEV=$(losetup --find --show "${DATA_DIR}/data")
META_DEV=$(losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<<"${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"

dm_reload.sh:

#!/bin/bash
set -ex

DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

# Allocate loop devices
DATA_DEV=$(losetup --find --show "${DATA_DIR}/data")
META_DEV=$(losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<<"${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
systemctl restart containerd.service
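
One caveat worth noting (an observation about k3s, not part of the script): on a k3s node, containerd runs embedded in k3s, so restarting containerd.service only affects a standalone containerd installation. On a k3s worker the equivalent restart would be:

# Restart the k3s agent, which restarts its embedded containerd
sudo systemctl restart k3s-agent.service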

/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl:

{{ template "base" . }}

[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper"
  base_image_size = "10GB"
  discard_blocks = true
  fs_type = "ext2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc]
    runtime_type = "io.containerd.urunc.v2"
    container_annotations = ["com.urunc.unikernel.*"]
    pod_annotations = ["com.urunc.unikernel.*"]
    snapshotter = "devmapper"

After all these changes, I am currently facing this error:

Name:                nginx-urunc-545b984cdd-69xql
Namespace:           default
Priority:            0
Runtime Class Name:  urunc
Service Account:     default
Node:                tt-node1/10.9.2.165
Start Time:          Tue, 27 Aug 2024 13:33:22 +0000
Labels:              pod-template-hash=545b984cdd
                     run=nginx-urunc
Annotations:         <none>
Status:              Pending
IP:                  
IPs:                 <none>
Controlled By:       ReplicaSet/nginx-urunc-545b984cdd
Containers:
  nginx-urunc:
    Container ID:  
    Image:         nubificus/nginx-hvt:x86_64
    Image ID:      
    Port:          80/TCP
    Host Port:     0/TCP
    Command:
      sleep
    Args:
      infinity
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        10m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t48dd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-t48dd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age               From     Message
  ----     ------                  ----              ----     -------
  Warning  FailedCreatePodSandBox  6s (x2 over 18s)  kubelet  Failed to create pod sandbox: rpc error: code = NotFound desc = failed to create containerd container: snapshot does not exist: not found

And the container is stuck on ContainerCreating.

The deployment yaml I use is:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-urunc
  name: nginx-urunc
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-urunc
  template:
    metadata:
      labels:
        run: nginx-urunc
    spec:
      nodeName: tt-node1
      runtimeClassName: urunc
      containers:
      - image: nubificus/nginx-hvt:x86_64
        imagePullPolicy: Always
        name: nginx-urunc
        command: ["sleep"]
        args: ["infinity"]
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          requests:
            cpu: 10m
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-urunc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-urunc
  sessionAffinity: None
  type: ClusterIP

And the runtimeClass yaml is:

kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
    name: urunc
handler: urunc
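
A minimal way to apply and verify it (a sketch; the filename is hypothetical):

kubectl apply -f urunc-runtimeclass.yaml
kubectl get runtimeclass urunc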

Thank you in advance!

ananos commented 2 weeks ago

Hi @DeftaSebastian! Thanks for your detailed report!

We have been running urunc successfully on k3s, so let's walk through the steps just in case we've missed something in the docs.

I think a first thing to check given the error message (snapshot does not exist) is the devmapper support in k3s.

Can you check the output of the following command?

ctr --address /run/k3s/containerd/containerd.sock plugin ls | grep devmapper

it should be something like the following:

io.containerd.snapshotter.v1           devmapper                linux/amd64    ok        

If it's not, I suspect there's another instance of containerd running on your machine (stock containerd), so you've set up devmapper correctly, but for that instance rather than the one k3s uses.
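
A quick way to check for a second containerd instance (a diagnostic sketch, assuming the default socket paths):

# List running containerd processes
ps -ef | grep containerd | grep -v grep
# See which sockets actually exist
ls -l /run/containerd/containerd.sock /run/k3s/containerd/containerd.sock 2>/dev/null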

DeftaSebastian commented 2 weeks ago

I have created a new virtual machine with the same configuration as the one from when I posted the issue.

I have set this machine up as a worker node for the k3s cluster and installed urunc following this guide. To be pedantic, these are the commands that I ran to install and test urunc on this worker node:

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install git wget bc make build-essential -y
wget -q https://go.dev/dl/go1.20.6.linux-$(dpkg --print-architecture).tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.20.6.linux-$(dpkg --print-architecture).tar.gz
sudo tee -a /etc/profile > /dev/null << 'EOT'
export PATH=$PATH:/usr/local/go/bin
EOT
rm -f go1.20.6.linux-$(dpkg --print-architecture).tar.gz
RUNC_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/opencontainers/runc/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/opencontainers/runc/releases/download/v$RUNC_VERSION/runc.$(dpkg --print-architecture)
sudo install -m 755 runc.$(dpkg --print-architecture) /usr/local/sbin/runc
rm -f ./runc.$(dpkg --print-architecture)
CONTAINERD_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/containerd/containerd/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/containerd/containerd/releases/download/v$CONTAINERD_VERSION/containerd-$CONTAINERD_VERSION-linux-$(dpkg --print-architecture).tar.gz
sudo tar Cxzvf /usr/local containerd-$CONTAINERD_VERSION-linux-$(dpkg --print-architecture).tar.gz
sudo rm -f containerd-$CONTAINERD_VERSION-linux-$(dpkg --print-architecture).tar.gz
CONTAINERD_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/containerd/containerd/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://raw.githubusercontent.com/containerd/containerd/v$CONTAINERD_VERSION/containerd.service
sudo rm -f /lib/systemd/system/containerd.service
sudo mv containerd.service /lib/systemd/system/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
sudo mkdir -p /etc/containerd/
sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.bak
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
CNI_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/containernetworking/plugins/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/containernetworking/plugins/releases/download/v$CNI_VERSION/cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz
sudo rm -f cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz
NERDCTL_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/containerd/nerdctl/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/containerd/nerdctl/releases/download/v$NERDCTL_VERSION/nerdctl-$NERDCTL_VERSION-linux-$(dpkg --print-architecture).tar.gz
sudo tar Cxzvf /usr/local/bin nerdctl-$NERDCTL_VERSION-linux-$(dpkg --print-architecture).tar.gz
sudo rm -f nerdctl-$NERDCTL_VERSION-linux-$(dpkg --print-architecture).tar.gz
git clone https://github.com/nubificus/urunc.git
git clone https://github.com/nubificus/bima.git
sudo mkdir -p /usr/local/bin/scripts
sudo cp urunc/script/dm_create.sh /usr/local/bin/scripts/dm_create.sh
sudo chmod 755 /usr/local/bin/scripts/dm_create.sh
sudo cp urunc/script/dm_reload.sh /usr/local/bin/scripts/dm_reload.sh
sudo chmod 755 /usr/local/bin/scripts/dm_reload.sh
sudo mkdir -p /usr/local/lib/systemd/system/
sudo cp urunc/script/dm_reload.service /usr/local/lib/systemd/system/dm_reload.service
sudo chmod 644 /usr/local/lib/systemd/system/dm_reload.service
sudo chown root:root /usr/local/lib/systemd/system/dm_reload.service
sudo systemctl daemon-reload
sudo systemctl enable dm_reload.service
sudo sed -i '/\[plugins\."io\.containerd\.snapshotter\.v1\.devmapper"\]/,/^$/d' /etc/containerd/config.toml
sudo tee -a /etc/containerd/config.toml > /dev/null <<'EOT'
# Customizations for urunc
[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "/var/lib/containerd/io.containerd.snapshotter.v1.devmapper"
  base_image_size = "10GB"
  discard_blocks = true
  fs_type = "ext2"
EOT
sudo systemctl restart containerd
sudo /usr/local/bin/scripts/dm_create.sh
cd bima
make && sudo make install
cd ..
cd bima/
make && sudo make install   ## here I forgot that Go was not added to the $PATH, so I had to restart
ls
exit
cd bima/
make && sudo make install
cd ..
cd urunc
make && sudo make install
cd ..
sudo tee -a /etc/containerd/config.toml > /dev/null <<EOT
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc]
    runtime_type = "io.containerd.urunc.v2"
    container_annotations = ["com.urunc.unikernel.*"]
    pod_annotations = ["com.urunc.unikernel.*"]
    snapshotter = "devmapper"
EOT
sudo systemctl restart containerd
sudo apt-get install libseccomp-dev pkg-config gcc -y
git clone -b v0.6.9 https://github.com/Solo5/solo5.git
cd solo5
./configure.sh && make -j$(nproc)
sudo cp tenders/hvt/solo5-hvt /usr/local/bin
sudo cp tenders/spt/solo5-spt /usr/local/bin
sudo nerdctl run --security-opt seccomp=unconfined --rm -ti --snapshotter devmapper --runtime io.containerd.urunc.v2 harbor.nbfc.io/nubificus/urunc/redis-hvt-rump:latest unikernel

After running these commands and checking that urunc works, I deployed the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-urunc
  name: nginx-urunc
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-urunc
  template:
    metadata:
      labels:
        run: nginx-urunc
    spec:
      nodeName: tt-node2
      runtimeClassName: urunc
      containers:
      - image: nubificus/nginx-hvt:x86_64
        imagePullPolicy: Always
        name: nginx-urunc
        command: ["sleep"]
        args: ["infinity"]
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          requests:
            cpu: 10m
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-urunc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-urunc
  sessionAffinity: None
  type: ClusterIP

It results in urunc not even being visible to the kubelet:

Name:                nginx-urunc-cf8b7bf5f-q78vk
Namespace:           default
Priority:            0
Runtime Class Name:  urunc
Service Account:     default
Node:                tt-node2/10.9.3.15
Start Time:          Wed, 28 Aug 2024 08:04:49 +0000
Labels:              pod-template-hash=cf8b7bf5f
                     run=nginx-urunc
Annotations:         <none>
Status:              Pending
IP:                  
IPs:                 <none>
Controlled By:       ReplicaSet/nginx-urunc-cf8b7bf5f
Containers:
  nginx-urunc:
    Container ID:  
    Image:         nubificus/nginx-hvt:x86_64
    Image ID:      
    Port:          80/TCP
    Host Port:     0/TCP
    Command:
      sleep
    Args:
      infinity
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        10m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkj7q (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-pkj7q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From     Message
  ----     ------                  ----                  ----     -------
  Warning  FailedCreatePodSandBox  4m33s (x47 over 14m)  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "urunc" is configured

To answer your last question, running ctr --address /run/k3s/containerd/containerd.sock plugin ls | grep devmapper on my old worker node (the one this issue is based on) results in:

io.containerd.snapshotter.v1           devmapper                linux/amd64    ok
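
Since devmapper looks healthy, the "no runtime for urunc is configured" error suggests the runtime entry never reached the containerd instance that k3s actually uses. A hedged way to compare the two configs (paths as used elsewhere in this thread):

# The standalone containerd config that the install commands edited
grep -n 'urunc' /etc/containerd/config.toml
# The config rendered and loaded by k3s's embedded containerd
grep -n 'urunc' /var/lib/rancher/k3s/agent/etc/containerd/config.toml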

ananos commented 2 weeks ago

Hmm, thanks for taking the time to do a clean install. It might be that we assume the socket is at /run/containerd.

Can you try ln -s /run/k3s/containerd /run/containerd?

DeftaSebastian commented 2 weeks ago

The errors persist on both machines even after running ln -s /run/k3s/containerd /run/containerd.

cmainas commented 2 weeks ago

Hello @DeftaSebastian, I tried to replicate your issues but failed. However, I encountered another issue when trying to use urunc in k3s. After a few changes, I managed to make urunc run with k3s. Let me go through the process:

  1. Get a fresh k3s single-node cluster on Ubuntu 22.04 (working CNI etc.)
  2. Set up devmapper as you described at the beginning of this issue. I used the exact same scripts.
  3. Set up Go as mentioned in the urunc installation guide
  4. Build and install urunc as mentioned in the urunc installation guide. However, please use the k3s_issue branch. It is based on the commit from which you opened the issue.
  5. Install Solo5 as mentioned in the urunc installation guide
  6. Install qemu with apt install qemu
  7. Created a config.toml.tmpl file with the exact same content as in your first message. This step was weird: I made two installations from the same base image, and in one case I had to add the two lines below, while in the other I could just use your file. I have no explanation for that. Maybe I did something wrong.
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc.options]
    BinaryName = "/usr/local/bin/urunc"
  8. Restarted the k3s service with systemctl restart k3s.service
  9. Verified that the devmapper plugin is ok with ctr --address /run/k3s/containerd/containerd.sock plugin ls | grep devmapper
  10. Added the new runtime class for urunc with the same yaml file as in your first message.
  11. Deployed a Redis Rumprun app with the following yaml. Please avoid the nginx image you have in your yaml file; I had problems with that image too.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hvt-rumprun-redis-deployment
      labels:
        app: hvt-rumprun-redis
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hvt-rumprun-redis
      template:
        metadata:
          labels:
            app: hvt-rumprun-redis
        spec:
          runtimeClassName: urunc
          containers:
          - name: hvt-rumprun-redis
            image: harbor.nbfc.io/nubificus/urunc/redis-hvt-rump:latest
            ports:
            - containerPort: 80
12. The Redis Rumprun unikernel started but crashed:

$ kubectl get pods
NAME                                            READY   STATUS             RESTARTS        AGE
hvt-rumprun-redis-deployment-796f77d986-96w7m   0/1     CrashLoopBackOff   7 (2m43s ago)   13m

13. Rumprun failed because of the network config :)

kubectl logs hvt-rumprun-redis-deployment-796f77d986-96w7m

[ 1.0000000] 2018 The NetBSD Foundation, Inc. All rights reserved.
[ 1.0000000] Copyright (c) 1982, 1986, 1989, 1991, 1993
[ 1.0000000]     The Regents of the University of California. All rights reserved.

[ 1.0000000] NetBSD 8.99.25 (RUMP-ROAST)
[ 1.0000000] total memory = 253 MB
[ 1.0000000] timecounter: Timecounters tick every 10.000 msec
[ 1.0000080] timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
[ 1.0000090] cpu0 at thinair0: rump virtual cpu
[ 1.0000090] root file system type: rumpfs
[ 1.0000090] kern.module.path=/stand/amd64/8.99.25/modules
[ 1.0200090] mainbus0 (root)
[ 1.0200090] timecounter: Timecounter "bmktc" frequency 1000000000 Hz quality 100
[ 1.0200090] ukvmif0: Ethernet address 3a:4f:14:45:3a:56
rumprun: gw "169.254.1.1" addition failed

=== bootstrap failed
[ 1.0748306] rump kernel halting...
[ 1.0748306] syncing disks... done
[ 1.0748306] unmounting file systems...
[ 1.1560190] unmounted rumpfs on / type rumpfs
[ 1.1560190] unmounting done
halted
Solo5: solo5_exit(0) called

14. Created an Nginx Unikraft deployment over qemu:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qemu-unikraft-nginx-deployment
  labels:
    app: qemu-unikraft-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qemu-unikraft-nginx
  template:
    metadata:
      labels:
        app: qemu-unikraft-nginx
    spec:
      runtimeClassName: urunc
      containers:
      - name: qemu-unikraft-nginx
        image: harbor.nbfc.io/nubificus/urunc/nginx-qemu-unikraft:latest
        ports:
        - containerPort: 80

Long story short: use the new k3s_issue branch of urunc, configure containerd and devmapper as you did in your first message, and do not use Rumprun. We have seen this Rumprun issue before; Rumprun did not like that the gateway and the IP of the unikernel were in different subnets. I thought we had solved it, but maybe we did not update the image.
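
Condensing the walkthrough above into a short verification sequence (all commands as they appear in this thread):

# devmapper must be OK in k3s's containerd
ctr --address /run/k3s/containerd/containerd.sock plugin ls | grep devmapper
# the RuntimeClass must exist
kubectl get runtimeclass urunc
# the pod should reach Running
kubectl get pods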

DeftaSebastian commented 2 weeks ago

With your instructions, it finally works. Regarding the image functionality, is there a way to fix the ones that are broken? Also, if I were to make a unikernel image, are there any minimal requirements that it needs to have?

Just to let you know, I did not need to include the following in the config.toml.tmpl:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc.options]
  BinaryName = "/usr/local/bin/urunc"

cmainas commented 2 weeks ago

> With your instructions, it finally works. Regarding the image functionality, is there a way to fix the ones that are broken? Also, if I were to make a unikernel image, are there any minimal requirements that it needs to have?

Yes, it is possible to fix it. I found the changes that are required for Rumprun. I will try to fix some, but it might take some time.

Regarding the unikernel image, for the time being you need to build the unikernel image separately and then include it in an OCI image. You can do the last part with bima. Urunc expects some specific annotations in the OCI image or a urunc.json file with the necessary information. Also, take a look at the currently supported unikernels.

Let me know if you have any issues creating an image, or if you want advice on how best to create an image for your purpose.

cmainas commented 2 weeks ago

So, luckily, building Rumprun did not face any issues and I managed to update the nginx and redis images. You can find them:

Both of them work in the k3s cluster.

$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
hvt-rumprun-redis-deployment-675c78b4d9-pnz5t    1/1     Running   0          16m
hvt-unikraft-nginx-deployment-6dbb6f96c8-qw6mr   1/1     Running   0          13m

Just for reference, here are the Containerfiles to use with bima. First, the one for redis:

FROM scratch

COPY redis6_nogw.hvt /unikernel/redis_nogw.hvt
COPY tmp/conf/redis.conf /conf/redis.conf
COPY tmp/conf/redisaof.conf /conf/redisaof.conf

LABEL com.urunc.unikernel.binary=/unikernel/redis_nogw.hvt
LABEL "com.urunc.unikernel.cmdline"='redis /data/conf/redis.conf'
LABEL "com.urunc.unikernel.unikernelType"="rumprun"
LABEL "com.urunc.unikernel.hypervisor"="hvt"
LABEL "com.urunc.unikernel.blkMntPoint"="/data/"

And the Containerfile for nginx:

FROM scratch

COPY nginx6_nogw.hvt /unikernel/nginx_nogw.hvt
COPY tmp/conf/fastcgi.conf /conf/fastcgi.conf
COPY tmp/conf/fastcgi_params /conf/fastcgi_params
COPY tmp/conf/mime.types /conf/mime.types
COPY tmp/conf/nginx.conf /conf/nginx.conf
COPY tmp/conf/scgi_params /conf/scgi_params
COPY tmp/conf/uwsgi_params /conf/uwsgi_params
COPY tmp/www/index.html /www/index.html
COPY tmp/www/logo150.png /www/logo150.png

LABEL com.urunc.unikernel.binary=/unikernel/nginx_nogw.hvt
LABEL "com.urunc.unikernel.cmdline"='nginx -c /data/conf/nginx.conf'
LABEL "com.urunc.unikernel.unikernelType"="rumprun"
LABEL "com.urunc.unikernel.hypervisor"="hvt"
LABEL "com.urunc.unikernel.blkMntPoint"="/data/"

Here is the bima command for redis (the same command can be used for nginx with a different tag):

sudo bima build -t harbor.nbfc.io/nubificus/urunc/hvt-rumprun-redis:latest -o tar -f Containerfile .

The above command will create a tar file that you can load into nerdctl or docker with:

nerdctl/docker load < hvt-rumprun-redis\:latest
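
Putting the build, load, and run steps together with the local test command used earlier in this thread, a full cycle looks roughly like this (a sketch; flags as shown above):

# Build the OCI image as a tar archive with bima
sudo bima build -t harbor.nbfc.io/nubificus/urunc/hvt-rumprun-redis:latest -o tar -f Containerfile .
# Load the archive into containerd via nerdctl
sudo nerdctl load < hvt-rumprun-redis:latest
# Run it as a unikernel with urunc over the devmapper snapshotter
sudo nerdctl run --rm -ti --snapshotter devmapper --runtime io.containerd.urunc.v2 harbor.nbfc.io/nubificus/urunc/hvt-rumprun-redis:latest unikernel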

DeftaSebastian commented 2 weeks ago

Thank you for your help and the quick responses. I will be sure to come back here if I run into any more trouble with the unikernel images.

cmainas commented 2 weeks ago

Oh, I forgot to mention that I had to change the netmask for Rumprun unikernels in urunc. I have pushed the commit to the k3s_issue branch.

Also, would it be ok if we keep the issue open, just as a reminder to properly fix the issues and open a PR?

DeftaSebastian commented 2 weeks ago

No problem with me.

DeftaSebastian commented 1 week ago

Hi, have there been any big changes that could affect k3s and urunc compatibility? I noticed yesterday that a clean urunc install following the steps from this issue leads to an error:

Name:                qemu-unikraft-nginx-deployment-d85c69448-nxhmv
Namespace:           default
Priority:            0
Runtime Class Name:  urunc
Service Account:     default
Node:                tt-node/10.9.4.169
Start Time:          Thu, 05 Sep 2024 08:53:51 +0000
Labels:              app=qemu-unikraft-nginx
                     pod-template-hash=d85c69448
Annotations:         <none>
Status:              Pending
IP:                  
IPs:                 <none>
Controlled By:       ReplicaSet/qemu-unikraft-nginx-deployment-d85c69448
Containers:
  qemu-unikraft-nginx:
    Container ID:   
    Image:          harbor.nbfc.io/nubificus/urunc/nginx-qemu-unikraft:latest
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w27gs (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-w27gs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From     Message
  ----     ------                  ----  ----     -------
  Warning  FailedCreatePodSandBox  22s   kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: failed to read config.json: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e4ce3b0c636665d5d544875b0286c77142a93d6a34200af76a2a413dde97cb8c/config.json: no such file or directory: unknown
  Warning  FailedCreatePodSandBox  10s   kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: failed to read config.json: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6b214b5dc717443c0b84c921e6c28b90be8a73ba03b49258d5559cd577ac266c/config.json: no such file or directory: unknown

Also, this is the configuration that I am using:

Urunc version: 0.3.0-0b519f3
Arch: x86_64
Unikernel: harbor.nbfc.io/nubificus/urunc/nginx-qemu-unikraft
k3s: 1.30.4+k3s1 (98262b5d)
OS: Ubuntu server 22.04.4

Currently I am using the following script to install urunc on the k3s worker node:

sudo apt-get upgrade -y

sudo apt-get install git wget bc make build-essential -y

wget -q https://go.dev/dl/go1.20.6.linux-$(dpkg --print-architecture).tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.20.6.linux-$(dpkg --print-architecture).tar.gz
sudo tee -a /etc/profile > /dev/null << 'EOT'
export PATH=$PATH:/usr/local/go/bin
EOT
sudo rm -f go1.20.6.linux-$(dpkg --print-architecture).tar.gz

source /etc/profile

sudo mkdir -p /usr/local/bin/scripts
git clone https://github.com/nubificus/urunc.git

sudo cp urunc/script/dm_create.sh /usr/local/bin/scripts/dm_create.sh
sudo chmod 755 /usr/local/bin/scripts/dm_create.sh

sudo cp urunc/script/dm_reload.sh /usr/local/bin/scripts/dm_reload.sh
sudo chmod 755 /usr/local/bin/scripts/dm_reload.sh

sudo mkdir -p /usr/local/lib/systemd/system/

sudo cp urunc/script/dm_reload.service /usr/local/lib/systemd/system/dm_reload.service
sudo chmod 644 /usr/local/lib/systemd/system/dm_reload.service
sudo chown root:root /usr/local/lib/systemd/system/dm_reload.service
sudo systemctl daemon-reload
sudo systemctl enable dm_reload.service

sudo tee /usr/local/bin/scripts/dm_create.sh > /dev/null << 'EOF'
#!/bin/bash

DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

mkdir -p /var/lib/rancher/k3s/agent/containerd/
mkdir -p ${DATA_DIR}

# Create data file
touch "${DATA_DIR}/data"
truncate -s 100G "${DATA_DIR}/data"

# Create metadata file
touch "${DATA_DIR}/meta"
truncate -s 10G "${DATA_DIR}/meta"

# Allocate loop devices
DATA_DEV=$(losetup --find --show "${DATA_DIR}/data")
META_DEV=$(losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<<"${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
EOF

sudo tee /usr/local/bin/scripts/dm_reload.sh > /dev/null << 'EOF'
#!/bin/bash
set -ex

DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

# Allocate loop devices
DATA_DEV=$(losetup --find --show "${DATA_DIR}/data")
META_DEV=$(losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<<"${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
systemctl restart containerd.service
EOF

sudo /usr/local/bin/scripts/dm_create.sh

cd urunc
git checkout -b k3s_issue
make && sudo make install
cd ..

sudo apt-get install libseccomp-dev pkg-config gcc -y

git clone -b v0.6.9 https://github.com/Solo5/solo5.git
cd solo5
./configure.sh && make -j$(nproc)
sudo cp tenders/hvt/solo5-hvt /usr/local/bin
sudo cp tenders/spt/solo5-spt /usr/local/bin

sudo apt install qemu

sudo touch /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
sudo tee /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl > /dev/null << 'EOF'
{{ template "base" . }}

[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper"
  base_image_size = "10GB"
  discard_blocks = true
  fs_type = "ext2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc]
    runtime_type = "io.containerd.urunc.v2"
    container_annotations = ["com.urunc.unikernel.*"]
    pod_annotations = ["com.urunc.unikernel.*"]
    snapshotter = "devmapper"
EOF

sudo systemctl restart k3s-agent.service
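
Given the branch mix-up discussed below, it may also be worth confirming which urunc branch and commit the script actually built (a hedged check using plain git):

# Show the current branch and latest commit of the cloned urunc tree
git -C urunc status -sb
git -C urunc log -1 --oneline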

Also, the yaml that I am using to launch the deployment is this one:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qemu-unikraft-nginx-deployment
  labels:
    app: qemu-unikraft-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qemu-unikraft-nginx
  template:
    metadata:
      labels:
        app: qemu-unikraft-nginx
    spec:
      runtimeClassName: urunc
      nodeName: tt-node
      containers:
      - name: qemu-unikraft-nginx
        image: harbor.nbfc.io/nubificus/urunc/nginx-qemu-unikraft:latest
        ports:
        - containerPort: 80

Thank you in advance for your help!

cmainas commented 1 week ago

Hello @DeftaSebastian, it seems there is an inconsistency in the urunc version that you reported in your last message. The 0b519f3 hash refers to the main branch, not the k3s_issue branch. We have not merged the changes from this branch into main yet. However, in your script it seems you are trying to build the correct branch.

Could you verify that the k3s_issue branch is actually installed?

cmainas commented 1 week ago

Oh, in your script you do git checkout -b k3s_issue. Did you mean git clone -b k3s_issue?
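
For reference (a general git note, not from the thread): git checkout -b <name> creates a brand-new branch at the current HEAD instead of checking out the existing remote branch, so the build would still be based on main. Either of the following picks up the actual branch:

# Switch to the existing remote branch in an already-cloned repo
git switch k3s_issue        # or: git checkout k3s_issue
# Or clone the branch directly
git clone -b k3s_issue https://github.com/nubificus/urunc.git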

DeftaSebastian commented 1 week ago

Hi, yes, I was using git checkout -b k3s_issue and this resulted in the wrong behavior. I ended up using git switch k3s_issue and it worked fine.

Although I find it weird that it worked this way until yesterday, I've found a solution, so it's no longer urgent. Thank you for your help!