kata-containers / kata-containers

Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. https://katacontainers.io/
Apache License 2.0

Sample workload does not work with FailedCreatePodSandBox error #9540

Open zosocanuck opened 2 months ago

zosocanuck commented 2 months ago

Description of problem

Unable to deploy a sample workload with Kata Containers stable-3.2 and k3s on a CentOS 8 VM.

Expected result

The pod should run.

Actual result

 Type     Reason                  Age                   From     Message
  ----     ------                  ----                  ----     -------
  Warning  FailedCreatePodSandBox  4m3s (x453 over 31h)  kubelet  Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Further information

Show kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `3.3.0 (commit 6dd038fd585c38bfe26de19f108cce688bc725ec)` at `2024-04-22.16:09:01.861746186-0700`.

---

Runtime

Runtime is `/usr/local/bin/kata-runtime`.

# `kata-env`

/usr/local/bin/kata-runtime kata-env

```toml
/usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-x86_64 does not exist
```
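The `kata-env` failure above says the hypervisor `path` in the default config points at `/usr/bin/qemu-system-x86_64`, which does not exist on this host. A minimal sketch of checking the configured path against the filesystem (the one-line TOML fragment is illustrative, not the full `configuration-qemu.toml`):

```shell
# Extract the hypervisor "path" entry from a Kata QEMU config fragment
# and check whether that binary actually exists on the node.
cfg='path = "/usr/bin/qemu-system-x86_64"'   # illustrative fragment
hv=$(printf '%s\n' "$cfg" | sed -n 's/^path *= *"\(.*\)"/\1/p')
echo "configured hypervisor: $hv"
if [ -x "$hv" ]; then echo "hypervisor present"; else echo "hypervisor missing"; fi
```

On the affected node, `grep -n 'path =' /usr/share/defaults/kata-containers/configuration-qemu.toml` should show the same entry the collect script complained about.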

---

Runtime config files

# Runtime config files

## Runtime default config files

```
/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml
```

## Runtime config file contents

Config file `/etc/kata-containers/configuration.toml` not found

cat "/usr/share/defaults/kata-containers/configuration.toml"
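Since `/etc/kata-containers/configuration.toml` is not found, the runtime falls back to the defaults file shown above. One hedged way to fix the missing-QEMU error is to create the `/etc` override with a corrected hypervisor path. The sketch below runs against a temp dir for illustration; on the node you would drop the `$root` prefix, and the replacement path `/opt/kata/bin/qemu-system-x86_64` is an assumption (where the kata-static release tarball unpacks QEMU) that you should verify first:

```shell
# Sketch: derive the /etc override from the shipped default, pointing
# the hypervisor path at a binary that exists.
root=$(mktemp -d)
mkdir -p "$root/etc/kata-containers"
# Stand-in for /usr/share/defaults/kata-containers/configuration-qemu.toml:
printf 'path = "/usr/bin/qemu-system-x86_64"\n' > "$root/default.toml"
# /opt/kata/bin/... is an assumed location -- check it on your node.
sed 's|/usr/bin/qemu-system-x86_64|/opt/kata/bin/qemu-system-x86_64|' \
    "$root/default.toml" > "$root/etc/kata-containers/configuration.toml"
cat "$root/etc/kata-containers/configuration.toml"
# → path = "/opt/kata/bin/qemu-system-x86_64"
```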

---

Containerd shim v2

Containerd shim v2 is `/usr/local/bin/containerd-shim-kata-v2`.

containerd-shim-kata-v2 --version

```
Kata Containers containerd shim (Golang): id: "io.containerd.kata.v2", version: 3.3.0, commit: 6dd038fd585c38bfe26de19f108cce688bc725ec
```

---

KSM throttler

# KSM throttler

## version

## systemd service

Image details

# Image details

No image

---

Initrd details

# Initrd details

No initrd

---

Logfiles

# Logfiles

## Runtime logs

Runtime logs

No recent runtime problems found in system journal.

## Throttler logs
Throttler logs

No recent throttler problems found in system journal.

## Kata Containerd Shim v2 logs
Kata Containerd Shim v2

Recent problems found in system journal:

```
time="2024-04-22T14:29:50.122790195-07:00" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=437648 sandbox=0de4c56fe3f6b0a91656940ee6ae010b904a1be687cba61ec8e9514bca3d41e9 source=cgroups
time="2024-04-22T14:29:50.175402526-07:00" level=error msg="qemu-system-x86_64: -chardev socket,id=char-0d0e9f8c767a8e93,path=/run/vc/vm/0de4c56fe3f6b0a91656940ee6ae010b904a1be687cba61ec8e9514bca3d41e9/vhost-fs.sock: Failed to connect to '/run/vc/vm/0de4c56fe3f6b0a91656940ee6ae010b904a1be687cba61ec8e9514bca3d41e9/vhost-fs.sock': Connection refused" name=containerd-shim-v2 pid=437648 qemuPid=437658 sandbox=0de4c56fe3f6b0a91656940ee6ae010b904a1be687cba61ec8e9514bca3d41e9 source=virtcontainers/hypervisor subsystem=qemu

[… the same warning/error pair repeats roughly every four minutes (14:34 through 15:45) for 18 further sandboxes; entries elided …]

time="2024-04-22T15:49:57.131580642-07:00" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=453436 sandbox=eaeec31ce88247dc07a4a76116c33ed31e6d233f8da25ed98d456231935843cd source=cgroups
time="2024-04-22T15:49:57.245039879-07:00" level=error msg="qemu-system-x86_64: -device vhost-user-fs-pci,chardev=char-498044a36753df46,tag=kataShared,queue-size=1024: Failed to write msg. Wrote -1 instead of 12." name=containerd-shim-v2 pid=453436 qemuPid=453445 sandbox=eaeec31ce88247dc07a4a76116c33ed31e6d233f8da25ed98d456231935843cd source=virtcontainers/hypervisor subsystem=qemu
time="2024-04-22T15:49:57.24533387-07:00" level=error msg="qemu-system-x86_64: -device vhost-user-fs-pci,chardev=char-498044a36753df46,tag=kataShared,queue-size=1024: vhost_backend_init failed: Protocol error" name=containerd-shim-v2 pid=453436 qemuPid=453445 sandbox=eaeec31ce88247dc07a4a76116c33ed31e6d233f8da25ed98d456231935843cd source=virtcontainers/hypervisor subsystem=qemu
time="2024-04-22T15:49:57.245321226-07:00" level=error msg="Failed to negotiate QMP Capabilities" error="exitting QMP loop, command cancelled" name=containerd-shim-v2 pid=453436 sandbox=eaeec31ce88247dc07a4a76116c33ed31e6d233f8da25ed98d456231935843cd source=virtcontainers/hypervisor subsystem=qemu

[… the same vhost-fs.sock "Connection refused" pair repeats at 15:54, 15:58, and 16:02; entries elided …]

time="2024-04-22T16:06:52.133504521-07:00" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=455300 sandbox=bfafaba0b6482d6e2a38269715a55183a9ecaa8c549c43432bc51058b72d3359 source=cgroups
time="2024-04-22T16:06:52.186742836-07:00" level=error msg="qemu-system-x86_64: -chardev socket,id=char-4a5c76e4591248c0,path=/run/vc/vm/bfafaba0b6482d6e2a38269715a55183a9ecaa8c549c43432bc51058b72d3359/vhost-fs.sock: Failed to connect to '/run/vc/vm/bfafaba0b6482d6e2a38269715a55183a9ecaa8c549c43432bc51058b72d3359/vhost-fs.sock': Connection refused" name=containerd-shim-v2 pid=455300 qemuPid=455310 sandbox=bfafaba0b6482d6e2a38269715a55183a9ecaa8c549c43432bc51058b72d3359 source=virtcontainers/hypervisor subsystem=qemu
```
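The repeated `Connection refused` on `vhost-fs.sock` means QEMU could not reach the virtiofs daemon's vhost-user socket, which typically indicates virtiofsd never started — plausibly because its binary path in the configuration is broken the same way QEMU's is. A small sketch for pulling the failing socket path out of a shim log line so it can be inspected (the log line below is abbreviated from the journal entries above):

```shell
# Extract the vhost-user socket path from a shim error line, then (on
# the affected node) check whether virtiofsd ever created it.
line='level=error msg="qemu-system-x86_64: -chardev socket,id=char-0d0e9f8c767a8e93,path=/run/vc/vm/0de4c56fe3f6b0a91656940ee6ae010b904a1be687cba61ec8e9514bca3d41e9/vhost-fs.sock: Failed to connect"'
sock=$(printf '%s\n' "$line" | grep -o 'path=[^:]*vhost-fs\.sock' | cut -d= -f2)
echo "socket: $sock"
# On the node itself you would then run (not executed here):
#   ls -l "$sock"        # does the socket exist at all?
#   pgrep -a virtiofsd   # is the daemon running?
```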

---

Container manager details

# Container manager details

Kubernetes

## Kubernetes

kubectl version

```
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

kubectl config view

```
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```
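The empty `kubectl config view` and the `localhost:8080` refusal only mean the collect script ran without a kubeconfig; on k3s the admin kubeconfig normally lives at `/etc/rancher/k3s/k3s.yaml` (or use `k3s kubectl`). A sketch, assuming the standard k3s location:

```shell
# Point kubectl at the k3s-managed kubeconfig before re-running checks.
# /etc/rancher/k3s/k3s.yaml is the conventional k3s path -- verify it
# exists on your node first.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "using kubeconfig: $KUBECONFIG"
# kubectl get nodes   # (not executed here)
```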

systemctl show kubelet

```
Id=kubelet.service
Names=kubelet.service
Description=kubelet.service
LoadState=not-found
ActiveState=inactive
SubState=dead
LoadError=org.freedesktop.systemd1.NoSuchUnit "Unit kubelet.service not found."
[… remaining default unit properties elided — the kubelet.service unit does not exist on this k3s host, so the other values are unset defaults …]
```

Podman

## Podman

podman --version

```
podman version 3.3.1
```

podman system info

```
host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupManager: systemd
  cgroupVersion: v1
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  kernel: 4.18.0-348.7.1.el8_5.x86_64
  memFree: 679497728
  memTotal: 16600567808
  ociRuntime:
    name: runc
    package: runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64
    path: /usr/bin/runc
  os: linux
  rootless: false
  selinuxEnabled: true
  uptime: 105h 7m 51.97s (Approximately 4.38 days)
[… slirp4netns, registries, and store details elided …]
version:
  APIVersion: 3.3.1
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.3.1
```

cat /etc/containers/registries.conf

```
# For more information on this configuration file, see containers-registries.conf(5).
# NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES
# [… stock comment block explaining short-name risks and the [[registry]] /
# [[registry.mirror]] example syntax elided …]

unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "docker.io"]

short-name-mode = "permissive"
```

cat /etc/containers/storage.conf

```
# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs. Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned
# to containers configured to create automatically a user namespace. Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids. Note multiple UIDs will be
# squashed down to the default uid in the container. These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set a maximum inodes of the container image.
# inodes = ""

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
# "": No value specified.
#    All files/directories, get set with the permissions identified within the
#    image.
# "private": it is equivalent to 0700.
#    All files/directories get set with 0700 permissions. The owner has rwx
#    access to the files. No other users on the system can access the files.
#    This setting could be used with networked based homedirs.
# "shared": it is equivalent to 0755.
#    The owner has rwx access to the files and everyone else can read, access
#    and execute them. This setting is useful for sharing containers storage
#    with other users. For instance have a storage owned by root but shared
#    to rootless users as an additional store.
#    NOTE: All files within the image are made readable and executable by any
#    user on the system. Even /etc/shadow within your image is now readable by
#    any user.
#
#    OCTAL: Users can experiment with other OCTAL Permissions.
#
# Note: The force_mask Flag is an experimental feature, it could change in the
# future. When "force_mask" is set the original permission mask is stored in
# the "user.containers.override_stat" xattr and the "mount_program" option must
# be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
# extended attribute permissions to processes within containers rather than the
# "force_mask" permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"
```

cat /etc/containers/policy.json

``` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } ```

cat /usr/share/containers/containers.conf

cat /usr/share/containers/mounts.conf

```
/usr/share/rhel/secrets:/run/secrets
```

cat /usr/share/containers/seccomp.json

``` { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "archMap": [ { "architecture": "SCMP_ARCH_X86_64", "subArchitectures": [ "SCMP_ARCH_X86", "SCMP_ARCH_X32" ] }, { "architecture": "SCMP_ARCH_AARCH64", "subArchitectures": [ "SCMP_ARCH_ARM" ] }, { "architecture": "SCMP_ARCH_MIPS64", "subArchitectures": [ "SCMP_ARCH_MIPS", "SCMP_ARCH_MIPS64N32" ] }, { "architecture": "SCMP_ARCH_MIPS64N32", "subArchitectures": [ "SCMP_ARCH_MIPS", "SCMP_ARCH_MIPS64" ] }, { "architecture": "SCMP_ARCH_MIPSEL64", "subArchitectures": [ "SCMP_ARCH_MIPSEL", "SCMP_ARCH_MIPSEL64N32" ] }, { "architecture": "SCMP_ARCH_MIPSEL64N32", "subArchitectures": [ "SCMP_ARCH_MIPSEL", "SCMP_ARCH_MIPSEL64" ] }, { "architecture": "SCMP_ARCH_S390X", "subArchitectures": [ "SCMP_ARCH_S390" ] } ], "syscalls": [ { "names": [ "bdflush", "io_pgetevents", "kexec_file_load", "kexec_load", "migrate_pages", "move_pages", "nfsservctl", "nice", "oldfstat", "oldlstat", "oldolduname", "oldstat", "olduname", "pciconfig_iobase", "pciconfig_read", "pciconfig_write", "sgetmask", "ssetmask", "swapcontext", "swapoff", "swapon", "sysfs", "uselib", "userfaultfd", "ustat", "vm86", "vm86old", "vmsplice" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": {}, "errnoRet": 1 }, { "names": [ "_llseek", "_newselect", "accept", "accept4", "access", "adjtimex", "alarm", "bind", "brk", "capget", "capset", "chdir", "chmod", "chown", "chown32", "clock_adjtime", "clock_adjtime64", "clock_getres", "clock_getres_time64", "clock_gettime", "clock_gettime64", "clock_nanosleep", "clock_nanosleep_time64", "clone", "clone3", "close", "close_range", "connect", "copy_file_range", "creat", "dup", "dup2", "dup3", "epoll_create", "epoll_create1", "epoll_ctl", "epoll_ctl_old", "epoll_pwait", "epoll_pwait2", "epoll_wait", "epoll_wait_old", "eventfd", "eventfd2", "execve", "execveat", "exit", "exit_group", "faccessat", "faccessat2", "fadvise64", "fadvise64_64", "fallocate", "fanotify_mark", "fchdir", "fchmod", 
"fchmodat", "fchown", "fchown32", "fchownat", "fcntl", "fcntl64", "fdatasync", "fgetxattr", "flistxattr", "flock", "fork", "fremovexattr", "fsconfig", "fsetxattr", "fsmount", "fsopen", "fspick", "fstat", "fstat64", "fstatat64", "fstatfs", "fstatfs64", "fsync", "ftruncate", "ftruncate64", "futex", "futex_time64", "futimesat", "get_robust_list", "get_thread_area", "getcpu", "getcwd", "getdents", "getdents64", "getegid", "getegid32", "geteuid", "geteuid32", "getgid", "getgid32", "getgroups", "getgroups32", "getitimer", "get_mempolicy", "getpeername", "getpgid", "getpgrp", "getpid", "getppid", "getpriority", "getrandom", "getresgid", "getresgid32", "getresuid", "getresuid32", "getrlimit", "getrusage", "getsid", "getsockname", "getsockopt", "gettid", "gettimeofday", "getuid", "getuid32", "getxattr", "inotify_add_watch", "inotify_init", "inotify_init1", "inotify_rm_watch", "io_cancel", "io_destroy", "io_getevents", "io_setup", "io_submit", "ioctl", "ioprio_get", "ioprio_set", "ipc", "keyctl", "kill", "lchown", "lchown32", "lgetxattr", "link", "linkat", "listen", "listxattr", "llistxattr", "lremovexattr", "lseek", "lsetxattr", "lstat", "lstat64", "madvise", "mbind", "memfd_create", "mincore", "mkdir", "mkdirat", "mknod", "mknodat", "mlock", "mlock2", "mlockall", "mmap", "mmap2", "mount", "move_mount", "mprotect", "mq_getsetattr", "mq_notify", "mq_open", "mq_timedreceive", "mq_timedreceive_time64", "mq_timedsend", "mq_timedsend_time64", "mq_unlink", "mremap", "msgctl", "msgget", "msgrcv", "msgsnd", "msync", "munlock", "munlockall", "munmap", "name_to_handle_at", "nanosleep", "newfstatat", "open", "openat", "openat2", "open_tree", "pause", "pidfd_getfd", "pidfd_open", "pidfd_send_signal", "pipe", "pipe2", "pivot_root", "pkey_alloc", "pkey_free", "pkey_mprotect", "poll", "ppoll", "ppoll_time64", "prctl", "pread64", "preadv", "preadv2", "prlimit64", "pselect6", "pselect6_time64", "pwrite64", "pwritev", "pwritev2", "read", "readahead", "readdir", "readlink", "readlinkat", 
"readv", "reboot", "recv", "recvfrom", "recvmmsg", "recvmmsg_time64", "recvmsg", "remap_file_pages", "removexattr", "rename", "renameat", "renameat2", "restart_syscall", "rmdir", "rseq", "rt_sigaction", "rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait", "rt_sigtimedwait_time64", "rt_tgsigqueueinfo", "sched_get_priority_max", "sched_get_priority_min", "sched_getaffinity", "sched_getattr", "sched_getparam", "sched_getscheduler", "sched_rr_get_interval", "sched_rr_get_interval_time64", "sched_setaffinity", "sched_setattr", "sched_setparam", "sched_setscheduler", "sched_yield", "seccomp", "select", "semctl", "semget", "semop", "semtimedop", "semtimedop_time64", "send", "sendfile", "sendfile64", "sendmmsg", "sendmsg", "sendto", "setns", "set_mempolicy", "set_robust_list", "set_thread_area", "set_tid_address", "setfsgid", "setfsgid32", "setfsuid", "setfsuid32", "setgid", "setgid32", "setgroups", "setgroups32", "setitimer", "setpgid", "setpriority", "setregid", "setregid32", "setresgid", "setresgid32", "setresuid", "setresuid32", "setreuid", "setreuid32", "setrlimit", "setsid", "setsockopt", "setuid", "setuid32", "setxattr", "shmat", "shmctl", "shmdt", "shmget", "shutdown", "sigaltstack", "signalfd", "signalfd4", "sigreturn", "socket", "socketcall", "socketpair", "splice", "stat", "stat64", "statfs", "statfs64", "statx", "symlink", "symlinkat", "sync", "sync_file_range", "syncfs", "sysinfo", "syslog", "tee", "tgkill", "time", "timer_create", "timer_delete", "timer_getoverrun", "timer_gettime", "timer_gettime64", "timer_settime", "timer_settime64", "timerfd_create", "timerfd_gettime", "timerfd_gettime64", "timerfd_settime", "timerfd_settime64", "times", "tkill", "truncate", "truncate64", "ugetrlimit", "umask", "umount", "umount2", "uname", "unlink", "unlinkat", "unshare", "utime", "utimensat", "utimensat_time64", "utimes", "vfork", "wait4", "waitid", "waitpid", "write", "writev" ], "action": "SCMP_ACT_ALLOW", "args": 
[], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "personality" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 0, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "personality" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 8, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "personality" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 131072, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "personality" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 131080, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "personality" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 4294967295, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": {} }, { "names": [ "sync_file_range2" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "arches": [ "ppc64le" ] }, "excludes": {} }, { "names": [ "arm_fadvise64_64", "arm_sync_file_range", "sync_file_range2", "breakpoint", "cacheflush", "set_tls" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "arches": [ "arm", "arm64" ] }, "excludes": {} }, { "names": [ "arch_prctl" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "arches": [ "amd64", "x32" ] }, "excludes": {} }, { "names": [ "modify_ldt" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "arches": [ "amd64", "x32", "x86" ] }, "excludes": {} }, { "names": [ "s390_pci_mmio_read", "s390_pci_mmio_write", "s390_runtime_instr" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "arches": [ "s390", "s390x" ] }, "excludes": {} }, { "names": [ "open_by_handle_at" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ 
"CAP_DAC_READ_SEARCH" ] }, "excludes": {} }, { "names": [ "open_by_handle_at" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_DAC_READ_SEARCH" ] }, "errnoRet": 1 }, { "names": [ "bpf", "fanotify_init", "lookup_dcookie", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_ADMIN" ] }, "excludes": {} }, { "names": [ "bpf", "fanotify_init", "lookup_dcookie", "perf_event_open", "quotactl", "setdomainname", "sethostname", "setns" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_ADMIN" ] }, "errnoRet": 1 }, { "names": [ "chroot" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_CHROOT" ] }, "excludes": {} }, { "names": [ "chroot" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_CHROOT" ] }, "errnoRet": 1 }, { "names": [ "delete_module", "init_module", "finit_module", "query_module" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_MODULE" ] }, "excludes": {} }, { "names": [ "delete_module", "init_module", "finit_module", "query_module" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_MODULE" ] }, "errnoRet": 1 }, { "names": [ "acct" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_PACCT" ] }, "excludes": {} }, { "names": [ "acct" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_PACCT" ] }, "errnoRet": 1 }, { "names": [ "kcmp", "process_madvise", "process_vm_readv", "process_vm_writev", "ptrace" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_PTRACE" ] }, "excludes": {} }, { "names": [ "kcmp", "process_madvise", 
"process_vm_readv", "process_vm_writev", "ptrace" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_PTRACE" ] }, "errnoRet": 1 }, { "names": [ "iopl", "ioperm" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_RAWIO" ] }, "excludes": {} }, { "names": [ "iopl", "ioperm" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_RAWIO" ] }, "errnoRet": 1 }, { "names": [ "settimeofday", "stime", "clock_settime", "clock_settime64" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_TIME" ] }, "excludes": {} }, { "names": [ "settimeofday", "stime", "clock_settime", "clock_settime64" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_TIME" ] }, "errnoRet": 1 }, { "names": [ "vhangup" ], "action": "SCMP_ACT_ALLOW", "args": [], "comment": "", "includes": { "caps": [ "CAP_SYS_TTY_CONFIG" ] }, "excludes": {} }, { "names": [ "vhangup" ], "action": "SCMP_ACT_ERRNO", "args": [], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_SYS_TTY_CONFIG" ] }, "errnoRet": 1 }, { "names": [ "socket" ], "action": "SCMP_ACT_ERRNO", "args": [ { "index": 0, "value": 16, "valueTwo": 0, "op": "SCMP_CMP_EQ" }, { "index": 2, "value": 9, "valueTwo": 0, "op": "SCMP_CMP_EQ" } ], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_AUDIT_WRITE" ] }, "errnoRet": 22 }, { "names": [ "socket" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 2, "value": 9, "valueTwo": 0, "op": "SCMP_CMP_NE" } ], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_AUDIT_WRITE" ] } }, { "names": [ "socket" ], "action": "SCMP_ACT_ALLOW", "args": [ { "index": 0, "value": 16, "valueTwo": 0, "op": "SCMP_CMP_NE" } ], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_AUDIT_WRITE" ] } }, { "names": [ "socket" ], "action": "SCMP_ACT_ALLOW", "args": [ { 
"index": 2, "value": 9, "valueTwo": 0, "op": "SCMP_CMP_NE" } ], "comment": "", "includes": {}, "excludes": { "caps": [ "CAP_AUDIT_WRITE" ] } }, { "names": [ "socket" ], "action": "SCMP_ACT_ALLOW", "args": null, "comment": "", "includes": { "caps": [ "CAP_AUDIT_WRITE" ] }, "excludes": {} } ] } ```

---

Packages

# Packages

No `dpkg`
Have `rpm`

rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-runtime|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"

```
ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
qemu-kvm-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-block-curl-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-block-ssh-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-block-gluster-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-img-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-block-rbd-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
libvirt-daemon-driver-qemu-7.6.0-6.el8.x86_64
qemu-kvm-core-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-guest-agent-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-common-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
qemu-kvm-block-iscsi-4.2.0-59.module_el8.5.0+1063+c9b9feff.1.x86_64
```

---

Kata Monitor

Kata Monitor is `kata-monitor`.

kata-monitor --version

```
kata-monitor
 Version:    0.3.0
 Go version: go1.22.2
 Git commit: 6dd038fd585c38bfe26de19f108cce688bc725ec
 OS/Arch:    linux/amd64
```

---

zosocanuck commented 2 months ago

This is the workload I attempted to run:

$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
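
For anyone reproducing this without fetching the manifest, it boils down to a workload pinned to the Kata runtime class. A minimal sketch (the pod name and image here are illustrative, assuming the `kata-qemu` RuntimeClass that kata-deploy creates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata            # illustrative name
spec:
  runtimeClassName: kata-qemu # RuntimeClass installed by kata-deploy
  containers:
    - name: nginx
      image: nginx
```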

beraldoleal commented 1 month ago

Hi @zosocanuck, based on your logs it seems that you are missing some packages:

```
/usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-x86_64 does not exist
```
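
Since the collected data shows the default configuration pointing at a hypervisor binary that is absent, a quick way to narrow this down is to check the candidate QEMU paths directly. This is only a sketch; both paths are assumptions (`/usr/bin/qemu-system-x86_64` is what the config references, and `/opt/kata/bin` is where kata-deploy installs its bundled static QEMU):

```shell
# Report which of the expected QEMU binaries exist on this host.
# Note: CentOS 8's qemu-kvm package does not ship
# /usr/bin/qemu-system-x86_64, so a distro install alone will not
# satisfy the default Kata configuration.
checked=0
for p in /usr/bin/qemu-system-x86_64 /opt/kata/bin/qemu-system-x86_64; do
    checked=$((checked + 1))
    if [ -x "$p" ]; then
        echo "found:   $p"
    else
        echo "missing: $p"
    fi
done
```

If both report missing, either point the hypervisor `path` in the Kata configuration at a QEMU binary that actually exists, or reinstall via kata-deploy so the bundled binaries land under `/opt/kata`.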