**Open** · @SCWebenizer opened this issue 1 week ago
Thank you @SCWebenizer for taking the time to write such a detailed issue!

Running Unikraft unikernels packaged as OCI images with `kraft` is not currently supported by `urunc`. Instead, we use `bima` to build `urunc`-compatible OCI images. However, this is a great suggestion for future development. We've added it to our TODO list and are actively looking into it.

In the meantime, you can work around this by building a Unikraft unikernel with `kraft`, packaging it with `docker` or `buildah`, and deploying it with `urunc`.

We were able to get a working `http-c` example from Unikraft's catalog using the following instructions:
```shell
git clone https://github.com/unikraft/catalog.git
cd catalog/native/http-c/
kraft build --no-cache --no-update --plat qemu --arch x86_64
```
The produced unikernel binary will be located at `./.unikraft/build/http-c_qemu-x86_64`.
Next, create a `urunc.json` file, which `urunc` requires to properly deploy the unikernel, and a `Dockerfile` to build the container image.
```shell
tee ./urunc.json > /dev/null <<EOT
{
    "com.urunc.unikernel.binary":"$(echo -n '/unikernel/http-c.qemu' | base64)",
    "com.urunc.unikernel.cmdline":"$(echo -n '' | base64)",
    "com.urunc.unikernel.unikernelType":"$(echo -n 'unikraft' | base64)",
    "com.urunc.unikernel.hypervisor":"$(echo -n 'qemu' | base64)"
}
EOT

tee ./Dockerfile > /dev/null <<EOT
FROM scratch
COPY ./.unikraft/build/http-c_qemu-x86_64 /unikernel/http-c.qemu
COPY ./urunc.json /urunc.json
LABEL "com.urunc.unikernel.binary"="/unikernel/http-c.qemu"
LABEL "com.urunc.unikernel.cmdline"=""
LABEL "com.urunc.unikernel.unikernelType"="unikraft"
LABEL "com.urunc.unikernel.hypervisor"="qemu"
EOT
```
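The values in `urunc.json` are just base64-encoded versions of the plain-text settings, mirroring the `$(echo -n '...' | base64)` substitutions above. An illustrative Python sketch of that encoding (not part of the urunc tooling):

```python
import base64
import json

# The plain-text settings urunc needs for this unikernel.
settings = {
    "com.urunc.unikernel.binary": "/unikernel/http-c.qemu",
    "com.urunc.unikernel.cmdline": "",
    "com.urunc.unikernel.unikernelType": "unikraft",
    "com.urunc.unikernel.hypervisor": "qemu",
}

# urunc.json stores each value base64-encoded.
encoded = {k: base64.b64encode(v.encode()).decode() for k, v in settings.items()}
print(json.dumps(encoded, indent=2))

# Decoding each value recovers the original settings.
decoded = {k: base64.b64decode(v).decode() for k, v in encoded.items()}
assert decoded == settings
```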
Build the image using `docker` (or `buildah`) and push it:
```shell
docker build -t gntouts/unikraft-http-c:demo .
docker push gntouts/unikraft-http-c:demo
```
You should now be able to run the unikernel:
```shell
sudo nerdctl run --rm -ti --snapshotter devmapper --runtime io.containerd.urunc.v2 docker.io/gntouts/unikraft-http-c:demo unikernel
```
As a side note, Unikraft introduced some breaking CLI changes in version 0.17.0. We plan to address these changes soon. In the meantime, you can use the `compat_unikraft_0.17.0` branch if needed:
```shell
git clone https://github.com/nubificus/urunc.git
cd urunc
git checkout compat_unikraft_0.17.0
make && sudo make install
```
Hello, thank you for the detailed answer @gntouts.

I tried the steps you described, but I ran into issues with nerdctl.

My environment is the same as the one described in issue #50: a control node used for the Kubernetes (k3s) control plane, and a worker node with urunc installed.

Because urunc was manually set to the `k3s_issue` branch on my worker node, I used `git cherry-pick` to take the new commit from the `compat_unikraft_0.17.0` branch and apply it on top of the `k3s_issue` branch locally. I then ran `make && sudo make install` in the urunc git folder and restarted my terminal afterwards.
I tried the steps you mentioned on the worker node, but I noticed that nerdctl was not installed there. I installed the latest rootless minimal version from GitHub (https://github.com/containerd/nerdctl/releases/tag/v1.7.7).

The nerdctl executable is a file in my local directory, so I tried executing it with:

```shell
sudo ./nerdctl run --rm -ti --snapshotter devmapper --runtime io.containerd.urunc.v2 docker.io/gntouts/unikraft-http-c:demo unikernel
```
When I ran the command, with or without sudo, and with either my own custom image or the one from docker.io, it gave me this error:

```
FATA[0002] failed to stat snapshot sha256:cc1d40d83d3052e37dfe547a7693b5e4b00a8f2b41419b8702000ef2af119c06: snapshotter not loaded: devmapper: invalid argument
```
I realize that there might be something wrong with my environment, but I do not know what it might be, and I do not want to break this setup where the nginx unikernel is running just fine.
I then tried replacing the nginx unikernel image with the `docker.io/gntouts/unikraft-http-c:demo` image on the control node and running it. The result was this error, found with `kubectl describe`:

```
Warning  Failed  4s  kubelet  Error: failed to generate container "72564ae0516b6ca7fb5b94302a4975fec30010235b292f8d469e5eec23bed827" spec: failed to generate spec: no command specified
```
I then tried adding a command (`command: [ '/unikernel/http-c.qemu' ]`) to the YAML and submitted it again. It still failed, but gave no error reason, only an error code in the status.
Thanks for taking the time to test our approach. I looked into it a bit more. I was able to use `urunc` with `nerdctl` in a single-node k3s cluster to run the Docker image I built in the previous reply. I was also able to configure k3s to use `urunc` and spawn a k3s deployment of that image.

Below you can find the commands I used in a fresh (and updated) Ubuntu 22.04 VM to achieve that.
```shell
# Clean k3s installation
curl -sfL https://get.k3s.io | sh -

# Install nerdctl
wget https://github.com/containerd/nerdctl/releases/download/v2.0.0-rc.2/nerdctl-2.0.0-rc.2-linux-amd64.tar.gz
tar xzvf nerdctl-2.0.0-rc.2-linux-amd64.tar.gz
rm ./containerd*
rm nerdctl-2.0.0-rc.2-linux-amd64.tar.gz
sudo ./nerdctl -a /run/k3s/containerd/containerd.sock -n k8s.io ps

# Install CNI plugins
CNI_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/containernetworking/plugins/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/containernetworking/plugins/releases/download/v$CNI_VERSION/cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz
sudo rm -f cni-plugins-linux-$(dpkg --print-architecture)-v$CNI_VERSION.tgz

# Install Go
wget -q https://go.dev/dl/go1.23.1.linux-$(dpkg --print-architecture).tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.23.1.linux-$(dpkg --print-architecture).tar.gz
sudo tee -a /etc/profile > /dev/null << 'EOT'
export PATH=$PATH:/usr/local/go/bin
EOT
rm -f go1.23.1.linux-$(dpkg --print-architecture).tar.gz

# Install urunc
git clone git@github.com:nubificus/urunc.git
cd urunc
git cherry-pick ceffafcb94fe3d7a19b70efa36aefe98f78d19ac
git cherry-pick 5fa23eafbbe5d32d4e183e09dbcaa9a82d5f3ea5
git cherry-pick bda8e18ec30613cdb0ce91439fd146bbf02ace03
make && sudo make install

# Install qemu-system-x86
sudo apt-get install -y qemu-system-x86

# Run the unikraft image without devmapper
sudo ./nerdctl -a /run/k3s/containerd/containerd.sock run --rm -ti --runtime io.containerd.urunc.v2 docker.io/gntouts/unikraft-http-c:demo unikernel

# From a different shell
# You can find the IP from the QEMU output
gntouts@ax5:~$ curl 10.4.0.3:8080
Hello, World!
gntouts@ax5:~$ sudo ./bin/nerdctl -a /run/k3s/containerd/containerd.sock ps
CONTAINER ID    IMAGE                                     COMMAND        CREATED           STATUS    PORTS    NAMES
5530092cff5c    docker.io/gntouts/unikraft-http-c:demo    "unikernel"    28 seconds ago    Up                 unikraft-http-c-55300
gntouts@ax5:~$ ps -ef | grep qemu
root  7350  7335  1 14:33 pts/2  00:00:00 /usr/bin/qemu-system-x86_64 -cpu host -m 254 -enable-kvm -nographic -vga none --sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -kernel /run/k3s/containerd/io.containerd.runtime.v2.task/default/5530092cff5cb0fbbf5926b1a01c9a7aa731a9d02de035a2691427076c8373c4/rootfs/unikernel/http-c.qemu -net nic,model=virtio -net tap,script=no,ifname=tap0_urunc -append netdev.ip=10.4.0.3/24:10.4.0.1:8.8.8.8 --
```
To get a working k3s urunc installation for Unikraft & QEMU:
Install devmapper:
```shell
sudo mkdir -p /usr/local/bin/scripts
sudo tee /usr/local/bin/scripts/dm_create.sh > /dev/null <<EOT
#!/bin/bash
DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool
mkdir -p /var/lib/rancher/k3s/agent/containerd/
mkdir -p \${DATA_DIR}

# Create data file
touch "\${DATA_DIR}/data"
truncate -s 100G "\${DATA_DIR}/data"

# Create metadata file
touch "\${DATA_DIR}/meta"
truncate -s 10G "\${DATA_DIR}/meta"

# Allocate loop devices
DATA_DEV=\$(losetup --find --show "\${DATA_DIR}/data")
META_DEV=\$(losetup --find --show "\${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="\$(blockdev --getsize64 -q \${DATA_DEV})"
LENGTH_IN_SECTORS=\$(bc <<<"\${DATA_SIZE}/\${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "\${POOL_NAME}" \
    --table "0 \${LENGTH_IN_SECTORS} thin-pool \${META_DEV} \${DATA_DEV} \${DATA_BLOCK_SIZE} \${LOW_WATER_MARK}"
EOT
sudo chmod 755 /usr/local/bin/scripts/dm_create.sh
```
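For reference, the thin-pool table that `dm_create.sh` builds can be worked out ahead of time. A Python sketch of the same arithmetic (the `/dev/loopN` paths are placeholders; `losetup` assigns the real ones):

```python
# Same arithmetic as dm_create.sh: a 100G data file (truncate's G suffix
# is 1024-based, so 100 GiB) divided into 512-byte sectors.
SECTOR_SIZE = 512
DATA_SIZE = 100 * 1024**3      # bytes, as blockdev --getsize64 would report
DATA_BLOCK_SIZE = 128          # sectors per thin-pool block (64 KiB)
LOW_WATER_MARK = 32768         # free-block threshold that triggers a dm event

length_in_sectors = DATA_SIZE // SECTOR_SIZE

# Placeholder loop devices stand in for META_DEV and DATA_DEV.
table = (f"0 {length_in_sectors} thin-pool /dev/loop1 /dev/loop0 "
         f"{DATA_BLOCK_SIZE} {LOW_WATER_MARK}")
print(table)  # 0 209715200 thin-pool /dev/loop1 /dev/loop0 128 32768
```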
```shell
sudo tee /usr/local/bin/scripts/dm_reload.sh > /dev/null <<EOT
#!/bin/bash
set -ex
DATA_DIR=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

# Allocate loop devices
DATA_DEV=\$(losetup --find --show "\${DATA_DIR}/data")
META_DEV=\$(losetup --find --show "\${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="\$(blockdev --getsize64 -q \${DATA_DEV})"
LENGTH_IN_SECTORS=\$(bc <<<"\${DATA_SIZE}/\${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
dmsetup create "\${POOL_NAME}" \
    --table "0 \${LENGTH_IN_SECTORS} thin-pool \${META_DEV} \${DATA_DEV} \${DATA_BLOCK_SIZE} \${LOW_WATER_MARK}"
systemctl restart containerd.service
EOT
sudo chmod 755 /usr/local/bin/scripts/dm_reload.sh
```
```shell
sudo mkdir -p /usr/local/lib/systemd/system/
sudo tee /usr/local/lib/systemd/system/dm_reload.service > /dev/null <<EOT
[Unit]
Description=Devmapper reload script
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/scripts/dm_reload.sh
User=root

[Install]
WantedBy=multi-user.target
EOT
sudo chmod 644 /usr/local/lib/systemd/system/dm_reload.service
sudo chown root:root /usr/local/lib/systemd/system/dm_reload.service
sudo systemctl daemon-reload
sudo systemctl enable dm_reload.service
```
Update containerd config with devmapper and urunc:
```shell
sudo cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
sudo tee -a /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl > /dev/null <<EOT
[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.devmapper"
  base_image_size = "10GB"
  discard_blocks = true
  fs_type = "ext2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.urunc]
  runtime_type = "io.containerd.urunc.v2"
  container_annotations = ["com.urunc.unikernel.*"]
  pod_annotations = ["com.urunc.unikernel.*"]
  snapshotter = "devmapper"
EOT
sudo systemctl restart k3s.service
```
Add urunc runtime class:
```shell
tee ./urunc-rc.yaml >/dev/null <<EOT
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: urunc
handler: urunc
EOT
sudo k3s kubectl apply -f urunc-rc.yaml
sudo k3s kubectl get runtimeclass
```
```shell
gntouts@ax5:~/bin$ sudo k3s kubectl get runtimeclass
NAME                  HANDLER               AGE
crun                  crun                  38m
lunatic               lunatic               38m
nvidia                nvidia                38m
nvidia-experimental   nvidia-experimental   38m
slight                slight                38m
spin                  spin                  38m
urunc                 urunc                 52s
wasmedge              wasmedge              38m
wasmer                wasmer                38m
wasmtime              wasmtime              38m
wws                   wws                   38m
```
Now, we are ready to deploy our image:
```shell
tee ./test_unikraft.yaml >/dev/null <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qemu-unikraft-test-helloworld-c-deployment
  labels:
    app: qemu-unikraft-test-helloworld-c
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qemu-unikraft-test-helloworld-c
  template:
    metadata:
      labels:
        app: qemu-unikraft-test-helloworld-c
    spec:
      runtimeClassName: urunc
      containers:
      - name: qemu-unikraft-test-helloworld-c
        image: docker.io/gntouts/unikraft-http-c:demo
        command: [ '/unikernel/http-c.qemu' ]
        ports:
        - containerPort: 8080
          protocol: TCP
EOT
sudo k3s kubectl apply -f test_unikraft.yaml
```
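Two fields in the manifest above do the heavy lifting: `runtimeClassName` must name the RuntimeClass created earlier, and the selector's `matchLabels` must equal the pod template labels or the Deployment never creates pods. An illustrative Python check of those invariants with the same values:

```python
# Reconstruction of the relevant parts of the Deployment spec above.
labels = {"app": "qemu-unikraft-test-helloworld-c"}
spec = {
    "replicas": 1,
    "selector": {"matchLabels": dict(labels)},
    "template": {
        "metadata": {"labels": dict(labels)},
        "spec": {
            "runtimeClassName": "urunc",  # must match the RuntimeClass name
            "containers": [{
                "name": "qemu-unikraft-test-helloworld-c",
                "image": "docker.io/gntouts/unikraft-http-c:demo",
                "command": ["/unikernel/http-c.qemu"],
                "ports": [{"containerPort": 8080, "protocol": "TCP"}],
            }],
        },
    },
}

# The selector must match the pod template labels.
assert spec["selector"]["matchLabels"] == spec["template"]["metadata"]["labels"]
assert spec["template"]["spec"]["runtimeClassName"] == "urunc"
print("deployment spec invariants hold")
```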
To test the spawned unikernel:
```shell
gntouts@ax5:~$ sudo kubectl get pods -o wide
NAME                                                         READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
qemu-unikraft-test-helloworld-c-deployment-54f7c969c-xl894   1/1     Running   0          3s    10.42.0.53   ax5    <none>           <none>
gntouts@ax5:~$ curl 10.42.0.53:8080
Hello, World!
```
Let me know if it worked out!
Description
I tried to build my own Unikraft unikernel, a simple hello-world application, and then bring it to k3s configured with urunc. I could not, because the manifest files were meant to run on qemu/x86_64. Even when I forced it by picking the sha256 of the qemu/x86_64 platform, not all of the required digests exist for the image to be pulled correctly. Because of this, I tried to pull an official Unikraft unikernel image ("unikraft.org/helloworld") into k3s, with the same results as above.
The motivation behind posting this issue here is that there is an nginx unikernel, made with Unikraft, which works.
Below is the file I used, taken from https://github.com/nubificus/urunc/issues/50 , with the results concatenated below:
I managed to circumvent the OS/platform issue by appending the SHA256 hash of the qemu/x86_64 manifest to the image name, but I could not do anything about the missing files:
From what I understand, it seems that Unikraft did not push all the required files/digests for Kubernetes to pull the image properly.
System info
Steps to reproduce
From a previous issue, https://github.com/nubificus/urunc/issues/50, I noticed that there is an nginx unikernel that works, made with Unikraft, but it has different manifests and filesystems compared to the official Unikraft ones. What did you do to make the nginx unikernel? I am unable to make anything similar to it.
I also asked about this issue in the Unikraft community, but I thought I would ask here too, because of that nginx unikernel.