kata-containers / kata-containers

Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. https://katacontainers.io/
Apache License 2.0

runtime: Files are not synced between host and guest VMs #9986

Closed: squarti closed this issue 1 month ago

squarti commented 3 months ago

Description of problem

When using the remote hypervisor, the regex used to decide which mount volumes should be watched does not match if the kubelet root dir is configured to a path other than /var/lib/kubelet.

Regex line:

```go
var configVolRegexString = "^/var/lib/kubelet/pods/[a-fA-F0-9\\-]{36}/volumes/kubernetes\\.io~(configmap|secret|projected|downward-api)"
```

Files are copied at start-up, but the volume directories are not watched afterwards because this check fails.
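To illustrate the mismatch, here is a minimal sketch (assuming the regex above; the pod UID is taken from the sample log further down, and the secret volume name is made up):

```go
package main

import (
	"fmt"
	"regexp"
)

// Regex from the description above: the kubelet root dir is hardcoded to /var/lib/kubelet.
var configVolRegex = regexp.MustCompile(
	"^/var/lib/kubelet/pods/[a-fA-F0-9\\-]{36}/volumes/kubernetes\\.io~(configmap|secret|projected|downward-api)")

func main() {
	defaultRoot := "/var/lib/kubelet/pods/1bc53fb2-13f7-4c6a-a3da-92be1c061182/volumes/kubernetes.io~secret/my-secret"
	customRoot := "/var/data/kubelet/pods/1bc53fb2-13f7-4c6a-a3da-92be1c061182/volumes/kubernetes.io~secret/my-secret"

	fmt.Println(configVolRegex.MatchString(defaultRoot)) // true:  the volume directory is watched
	fmt.Println(configVolRegex.MatchString(customRoot))  // false: the volume directory is never watched
}
```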

Expected result

Secret/ConfigMap/Downward API files are copied to the remote guest VM as new versions are created.

Actual result

Only the initial versions of the Secret/ConfigMap/Downward API files are copied to the guest VM.

Further information

Kubelet Service

# cat /etc/systemd/system/multi-user.target.wants/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network.target auditd.service

[Service]
ExecStartPre=/sbin/swapoff -a
ExecStartPre=/bin/systemctl stop -f haproxy.service
ExecStartPre=-/usr/local/sbin/create-localproxy-netns.sh
ExecStart=/usr/local/bin/kubelet \
          --config=/etc/kubernetes/kubelet-config.yaml \
          --root-dir=/var/data/kubelet \
                    --cloud-provider=external \
          --v=2 \
          --kubeconfig=/etc/kubernetes/kubelet-kubeconfig \
          --hostname-override=10.240.0.91 \
           \
           \
           \
          --version=v1.29.5+IKS \
           \
          --runtime-cgroups=/podruntime/runtime
Restart=always
RestartSec=5
TimeoutStartSec=15
SyslogIdentifier=kubelet.service

[Install]
WantedBy=multi-user.target
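Because the node runs kubelet with `--root-dir=/var/data/kubelet`, one possible direction (a sketch only, not the project's actual fix) would be to derive the watch filter from the configured kubelet root dir instead of hardcoding /var/lib/kubelet; `buildConfigVolRegex` below is a hypothetical helper:

```go
package main

import (
	"fmt"
	"regexp"
)

// buildConfigVolRegex is a hypothetical helper: it builds the watch filter from
// the kubelet root dir (the value passed to --root-dir) rather than assuming
// /var/lib/kubelet.
func buildConfigVolRegex(kubeletRootDir string) *regexp.Regexp {
	pattern := "^" + regexp.QuoteMeta(kubeletRootDir) +
		"/pods/[a-fA-F0-9\\-]{36}/volumes/kubernetes\\.io~(configmap|secret|projected|downward-api)"
	return regexp.MustCompile(pattern)
}

func main() {
	re := buildConfigVolRegex("/var/data/kubelet")
	fmt.Println(re.MatchString(
		"/var/data/kubelet/pods/1bc53fb2-13f7-4c6a-a3da-92be1c061182/volumes/kubernetes.io~configmap/my-config"))
	// true: with the root dir taken into account, the volume would be watched
}
```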

Sample log:

Jul 09 16:00:44 kube-cpm615rd0e9jv43tm4fg-cdcloudapia-workers-000002a9 containerd[163694]: time="2024-07-09T16:00:44.946352332Z" level=info msg="ShareFile: Copying file from src (/var/data/kubelet/pods/1bc53fb2-13f7-4c6a-a3da-92be1c061182/etc-hosts) to dest (/run/kata-containers/shared/containers/2536345954ee641684bc962969c19e30462ee006f0b0c46778708335a950cb1f-d7b6f483ac09e1fd-hosts)" name=containerd-shim-v2 pid=166746 sandbox=db826f2ba31b00fa1885d8de99cabc01e46b688695acc3d5bc784f89980d4fcb source=virtcontainers subsystem=fs_share

Kata Containers survey

Please consider taking the survey to help us help you: https://openinfrafoundation.formstack.com/forms/kata_containers_user_survey

squarti commented 3 months ago
kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `3.6.0 (commit 6a4919eeb9bfd86c3a4d74ce02b31c1f9eb85aef)` at `2024-07-09.17:04:38.629930880+0000`.

---

Runtime

Runtime is `/opt/kata/bin/kata-runtime`.

# `kata-env`

/opt/kata/bin/kata-runtime kata-env

```toml
[Kernel]
Path = "/opt/kata/share/kata-containers/vmlinux-6.1.62-132"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none"

[Meta]
Version = "1.0.27"

[Image]
Path = "/opt/kata/share/kata-containers/kata-ubuntu-latest.image"

[Initrd]
Path = ""

[Hypervisor]
MachineType = "q35"
Version = "QEMU emulator version 7.2.0 (kata-static)\nCopyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers"
Path = "/opt/kata/bin/qemu-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
SharedFS = "virtio-fs"
VirtioFSDaemon = "/opt/kata/libexec/virtiofsd"
SocketPath = ""
Msize9p = 8192
MemorySlots = 10
HotPlugVFIO = "no-port"
ColdPlugVFIO = "no-port"
Debug = false

[Hypervisor.SecurityInfo]
Rootless = false
DisableSeccomp = false
GuestHookPath = ""
EnableAnnotations = ["enable_iommu", "virtio_fs_extra_args", "kernel_params"]
ConfidentialGuest = false

[Runtime]
Path = "/opt/kata/bin/kata-runtime"
GuestSeLinuxLabel = ""
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false

[Runtime.Config]
Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"

[Runtime.Version]
OCI = "1.1.0+dev"

[Runtime.Version.Version]
Semver = "3.5.0"
Commit = "3939ec9bed380d21ddfead85e8dabb7011c4c923"
Major = 3
Minor = 5
Patch = 0

[Host]
Kernel = "5.4.0-182-generic"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = true

[Host.Distro]
Name = "Ubuntu"
Version = "20.04"

[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel Xeon Processor (Cascadelake)"
CPUs = 4

[Host.Memory]
Total = 32723660
Free = 16014780
Available = 29551124

[Agent]
Debug = false
Trace = false
```

---

# Runtime config files

## Runtime default config files

```
/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
```

## Runtime config file contents

Config file `/etc/kata-containers/configuration.toml` not found

cat "/opt/kata/share/defaults/kata-containers/configuration.toml"

```toml # Copyright (c) 2017-2019 Intel Corporation # Copyright (c) 2021 Adobe Inc. # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinux.container" image = "/opt/kata/share/kata-containers/kata-containers.img" # initrd = "/opt/kata/share/kata-containers/kata-containers-initrd.img" machine_type = "q35" # rootfs filesystem type: # - ext4 (default) # - xfs # - erofs rootfs_type="ext4" # Enable confidential guest support. # Toggling that setting may trigger different hardware features, ranging # from memory encryption to both memory and CPU-state encryption and integrity. # The Kata Containers runtime dynamically detects the available feature set and # aims at enabling the largest possible one, returning an error if none is # available, or none is supported by the hypervisor. # # Known limitations: # * Does not work by design: # - CPU Hotplug # - Memory Hotplug # - NVDIMM devices # # Default false # confidential_guest = true # Choose AMD SEV-SNP confidential guests # In case of using confidential guests on AMD hardware that supports both SEV # and SEV-SNP, the following enables SEV-SNP guests. SEV guests are default. # Default false # sev_snp_guest = true # Enable running QEMU VMM as a non-root user. # By default QEMU VMM run as root. When this is set to true, QEMU VMM process runs as # a non-root random user. See documentation for the limitations of this mode. # rootless = true # List of valid annotation names for the hypervisor # Each member of the list is a regular expression, which is the base name # of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path" enable_annotations = ["enable_iommu", "virtio_fs_extra_args", "kernel_params"] # List of valid annotations values for the hypervisor # Each member of the list is a path pattern as described by glob(3). # The default if not set is empty (all annotations rejected.) # Your distribution recommends: ["/opt/kata/bin/qemu-system-x86_64"] valid_hypervisor_paths = ["/opt/kata/bin/qemu-system-x86_64"] # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = " " # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Path to the firmware volume. # firmware TDVF or OVMF can be split into FIRMWARE_VARS.fd (UEFI variables # as configuration) and FIRMWARE_CODE.fd (UEFI program image). UEFI variables # can be customized per each user while UEFI code is kept same. firmware_volume = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. 
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Qemu seccomp sandbox feature # comma-separated list of seccomp sandbox features to control the syscall access. # For example, `seccompsandbox= "on,obsolete=deny,spawn=deny,resourcecontrol=deny"` # Note: "elevateprivileges=deny" doesn't work with daemonize option, so it's removed from the seccomp sandbox # Another note: enabling this feature may reduce performance, you may enable # /proc/sys/net/core/bpf_jit_enable to reduce the impact. see https://man7.org/linux/man-pages/man8/bpfc.8.html #seccompsandbox="on,obsolete=deny,spawn=deny,resourcecontrol=deny" # CPU features # comma-separated list of cpu features to pass to the cpu # For example, `cpu_features = "pmu=off,vmx=off" cpu_features="pmu=off" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. # NOTICE: on arm platform with gicv2 interrupt controller, set it to 8. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # Default maximum memory in MiB per SB / VM # unspecified or == 0 --> will be set to the actual amount of physical RAM # > 0 <= amount of physical RAM --> will be set to the specified number # > amount of physical RAM --> will be set to the actual amount of physical RAM default_maxmemory = 0 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. 
# If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Specifies virtio-mem will be enabled or not. # Please note that this option should be used with the command # "echo 1 > /proc/sys/vm/overcommit_memory". # Default false #enable_virtio_mem = true # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # virtio-fs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-fs (default) # - virtio-9p # - virtio-fs-nydus # - none shared_fs = "virtio-fs" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/libexec/virtiofsd" # List of valid annotations values for the virtiofs daemon # The default if not set is empty (all annotations rejected.) # Your distribution recommends: ["/opt/kata/libexec/virtiofsd"] valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/virtiofsd"] # Default size of DAX cache in MiB virtio_fs_cache_size = 0 # Default size of virtqueues virtio_fs_queue_size = 1024 # Extra args for virtiofsd daemon # # Format example: # ["--arg1=xxx", "--arg2=yyy"] # Examples: # Set virtiofsd log level to debug : ["--log-level=debug"] # # see `virtiofsd -h` for possible options. virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"] # Cache mode: # # - never # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "auto" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # aio is the I/O mechanism used by qemu # Options: # # - threads # Pthread based disk I/O. # # - native # Native Linux I/O. # # - io_uring # Linux io_uring API. This provides the fastest I/O operations on Linux, requires kernel>5.1 and # qemu >=5.0. block_device_aio = "io_uring" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. 
# enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable vhost-user storage device, default false # Enabling this will result in some Linux reserved block type # major range 240-254 being chosen to represent vhost-user devices. enable_vhost_user_store = false # The base directory specifically used for vhost-user devices. # Its sub-path "block" is used for block devices; "block/sockets" is # where we expect vhost-user sockets to live; "block/devices" is where # simulated block device nodes for vhost-user devices to live. vhost_user_store_path = "/var/run/kata-containers/vhost-user" # Enable vIOMMU, default false # Enabling this will result in the VM having a vIOMMU device # This will also add the following options to the kernel's # command line: intel_iommu=on,iommu=pt #enable_iommu = true # Enable IOMMU_PLATFORM, default false # Enabling this will result in the VM device having iommu_platform=on set #enable_iommu_platform = true # List of valid annotations values for the vhost user store path # The default if not set is empty (all annotations rejected.) # Your distribution recommends: ["/var/run/kata-containers/vhost-user"] valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"] # The timeout for reconnecting on non-server spdk sockets when the remote end goes away. # qemu will delay this many seconds and then attempt to reconnect. # Zero disables reconnecting, and the default is zero. vhost_user_reconnect_timeout_sec = 0 # Enable file based guest memory support. The default is an empty string which # will disable this feature. In the case of virtio-fs, this is enabled # automatically and '/dev/shm' is used as the backing folder. # This option will be ignored if VM templating is enabled. #file_mem_backend = "" # List of valid annotations values for the file_mem_backend annotation # The default if not set is empty (all annotations rejected.) # Your distribution recommends: [""] valid_file_mem_backends = [""] # -pflash can add image file to VM. The arguments of it should be in format # of ["/path/to/flash0.img", "/path/to/flash1.img"] pflashes = [] # This option changes the default hypervisor and kernel parameters # to enable debug output where available. # # Default false #enable_debug = true # This option allows to add an extra HMP or QMP socket when `enable_debug = true` # # WARNING: Anyone with access to the extra socket can take full control of # Qemu. This is for debugging purpose only and must *NEVER* be used in # production. # # Valid values are : # - "hmp" # - "qmp" # - "qmp-pretty" (same as "qmp" with pretty json formatting) # # If set to the empty string "", no extra monitor socket is added. This is # the default. #extra_monitor_socket = hmp # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. 
# #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If false and nvdimm is supported, use nvdimm device to plug guest image. # Otherwise virtio-block device is used. # # nvdimm is not supported when `confidential_guest = true`. # # Default is false #disable_image_nvdimm = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. # Default false #hotplug_vfio_on_root_bus = true # Enable hot-plugging of VFIO devices to a bridge-port, # root-port or switch-port. # The default setting is "no-port" #hot_plug_vfio = "root-port" # In a confidential compute environment hot-plugging can compromise # security. # Enable cold-plugging of VFIO devices to a bridge-port, # root-port or switch-port. # The default setting is "no-port", which means disabled. #cold_plug_vfio = "root-port" # Before hot plugging a PCIe device, you need to add a pcie_root_port device. # Use this parameter when using some large PCI bar devices, such as Nvidia GPU # The value means the number of pcie_root_port # This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35" # Default 0 #pcie_root_port = 2 # If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off # security (vhost-net runs ring0) for network I/O performance. #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # List of valid annotations values for entropy_source # The default if not set is empty (all annotations rejected.) # Your distribution recommends: ["/dev/urandom","/dev/random",""] valid_entropy_sources = ["/dev/urandom","/dev/random",""] # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered while scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" # # Use rx Rate Limiter to control network I/O inbound bandwidth(size in bits/sec for SB/VM). # In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) to discipline traffic. # Default 0-sized value means unlimited rate. 
#rx_rate_limiter_max_rate = 0 # Use tx Rate Limiter to control network I/O outbound bandwidth(size in bits/sec for SB/VM). # In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) and ifb(Intermediate Functional Block) # to discipline traffic. # Default 0-sized value means unlimited rate. #tx_rate_limiter_max_rate = 0 # Set where to save the guest memory dump file. # If set, when GUEST_PANICKED event occurred, # guest memeory will be dumped to host filesystem under guest_memory_dump_path, # This directory will be created automatically if it does not exist. # # The dumped file(also called vmcore) can be processed with crash or gdb. # # WARNING: # Dump guest’s memory can take very long depending on the amount of guest memory # and use much disk space. #guest_memory_dump_path="/var/crash/kata" # If enable paging. # Basically, if you want to use "gdb" rather than "crash", # or need the guest-virtual addresses in the ELF vmcore, # then you should enable paging. # # See: https://www.qemu.org/docs/master/qemu-qmp-ref.html#Dump-guest-memory for details #guest_memory_dump_paging=false # Enable swap in the guest. Default false. # When enable_guest_swap is enabled, insert a raw file to the guest as the swap device # if the swappiness of a container (set by annotation "io.katacontainers.container.resource.swappiness") # is bigger than 0. # The size of the swap device should be # swap_in_bytes (set by annotation "io.katacontainers.container.resource.swap_in_bytes") - memory_limit_in_bytes. # If swap_in_bytes is not set, the size should be memory_limit_in_bytes. # If swap_in_bytes and memory_limit_in_bytes is not set, the size should # be default_memory. #enable_guest_swap = true # use legacy serial for guest console if available and implemented for architecture. Default false #use_legacy_serial = true # disable applying SELinux on the VMM process (default false) disable_selinux=false # disable applying SELinux on the container process # If set to false, the type `container_t` is applied to the container process by default. # Note: To enable guest SELinux, the guest rootfs must be CentOS that is created and built # with `SELINUX=yes`. # (default: true) disable_guest_selinux=true [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. 
If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the agent will generate OpenTelemetry trace spans. # # Notes: # # - If the runtime also has tracing enabled, the agent spans will be # associated with the appropriate runtime parent span. # - If enabled, the runtime will wait for the container to shutdown, # increasing the container shutdown time slightly. # # (default: disabled) #enable_tracing = true # Comma separated list of kernel modules and their parameters. # These modules will be loaded in the guest kernel using modprobe(8). # The following example can be used to load two kernel modules with parameters # - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"] # The first word is considered as the module name and the rest as its parameters. # Container will not be started when: # * A kernel module is specified and the modprobe command is not installed in the guest # or it fails loading the module. # * The module is not available in the guest or it doesn't met the guest kernel # requirements, like architecture and version. # kernel_modules=[] # Enable debug console. # If enabled, user can connect guest OS running inside hypervisor # through "kata-runtime exec " command #debug_console_enabled = true # Agent connection dialing timeout value in seconds # (default: 45) dial_timeout = 45 [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # vCPUs pinning settings # if enabled, each vCPU thread will be scheduled to a fixed CPU # qualified condition: num(vCPU threads) == num(CPUs in sandbox's CPUSet) # enable_vcpus_pinning = false # Apply a custom SELinux security policy to the container process inside the VM. # This is used when you want to apply a type other than the default `container_t`, # so general users should not uncomment and apply it. # (format: "user:role:type") # Note: You cannot specify MCS policy with the label because the sensitivity levels and # categories are determined automatically by high-level container runtimes such as containerd. #guest_selinux_label="system_u:system_r:container_t" # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # Set the full url to the Jaeger HTTP Thrift collector. 
# The default if not set will be "http://localhost:14268/api/traces" #jaeger_endpoint = "" # Sets the username to be used if basic auth is required for Jaeger. #jaeger_user = "" # Sets the password to be used if basic auth is required for Jaeger. #jaeger_password = "" # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # (default: false) #disable_new_netns = true # if enabled, the runtime will add all the kata processes inside one dedicated cgroup. # The container cgroups in the host are not created, just one single cgroup per sandbox. # The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox. # The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation. # The sandbox cgroup is constrained if there is no container type annotation. # See: https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/virtcontainers#ContainerType sandbox_cgroup_only=false # If enabled, the runtime will attempt to determine appropriate sandbox size (memory, CPU) before booting the virtual machine. In # this case, the runtime will not dynamically update the amount of memory and CPU in the virtual machine. This is generally helpful # when a hardware architecture or hypervisor solutions is utilized which does not support CPU and/or memory hotplug. # Compatibility for determining appropriate sandbox (VM) size: # - When running with pods, sandbox sizing information will only be available if using Kubernetes >= 1.23 and containerd >= 1.6. CRI-O # does not yet support sandbox sizing annotations. # - When running single containers using a tool like ctr, container sizing information will be available. static_sandbox_resource_mgmt=false # If specified, sandbox_bind_mounts identifieds host paths to be mounted (ro) into the sandboxes shared path. # This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory. # If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts` # These will not be exposed to the container workloads, and are only provided for potential guest services. sandbox_bind_mounts=[] # VFIO Mode # Determines how VFIO devices should be be presented to the container. # Options: # # - vfio # Matches behaviour of OCI runtimes (e.g. runc) as much as # possible. VFIO devices will appear in the container as VFIO # character devices under /dev/vfio. The exact names may differ # from the host (they need to match the VM's IOMMU group numbers # rather than the host's) # # - guest-kernel # This is a Kata-specific behaviour that's useful in certain cases. # The VFIO device is managed by whatever driver in the VM kernel # claims it. This means it will appear as one or more device nodes # or network interfaces depending on the nature of the device. # Using this mode requires specially built workloads that know how # to locate the relevant device interfaces within the VM. # vfio_mode="guest-kernel" # If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. 
Instead, emptyDir mounts will # be created on the host and shared via virtio-fs. This is potentially slower, but allows sharing of files from host to guest. disable_guest_empty_dir=false # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # they may break compatibility, and are prepared for a big version bump. # Supported experimental features: # (default: []) experimental=[] # If enabled, user can run pprof tools with shim v2 process through kata-monitor. # (default: false) # enable_pprof = true # Indicates the CreateContainer request timeout needed for the workload(s) # It using guest_pull this includes the time to pull the image inside the guest # Defaults to 60 second(s) # Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config # (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout. # In essence, the timeout used for guest pull=runtime-request-timeout

```

Config file `/usr/share/defaults/kata-containers/configuration.toml` not found

---

Containerd shim v2

Containerd shim v2 is `/opt/kata/bin/containerd-shim-kata-v2`.

containerd-shim-kata-v2 --version

```
Kata Containers containerd shim (Golang): id: "io.containerd.kata.v2", version: 3.5.0, commit: 3939ec9bed380d21ddfead85e8dabb7011c4c923
```

---

# KSM throttler

## version

## systemd service

# Image details

```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/kata-containers/tools/osbuilder"
  version: "unknown"
rootfs-creation-time: "2024-06-18T13:18:34.201399902+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "ubuntu"
  version: "focal"
  packages:
    default:
      - "chrony"
      - "dbus"
      - "init"
      - "iptables"
      - "libseccomp2"
    extra:
agent:
  url: "https://github.com/kata-containers/kata-containers"
  name: "kata-agent"
  version: "3.6.0"
  agent-is-init-daemon: "no"
```

---

# Initrd details

No initrd

---

# Logfiles

## Runtime logs

No recent runtime problems found in system journal.

## Throttler logs

No recent throttler problems found in system journal.

## Kata Containerd Shim v2 logs

Recent problems found in system journal: ``` time="2024-06-28T12:31:13.329690748Z" level=error msg="rollback failed nydusContainerCleanup()" container=da54cf0053a071e703e1688016781a9a6b9ab529e2be5c4370d8b61791ff2c4f error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:37:19.311097096Z" level=error msg="createContainer failed" error="context deadline exceeded" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=kata_agent time="2024-06-28T12:37:19.311292877Z" level=error msg="rollback failed nydusContainerCleanup" error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=fs_share time="2024-06-28T12:37:19.311384339Z" level=warning msg="Could not remove container share dir" error="no such file or directory" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d share-dir=/run/kata-containers/shared/sandboxes/a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d/mounts/0f64f456267b88bf83b94cdda01fd3e966d3ebc171fa6ff892659e529eee8a3e source=virtcontainers subsystem=fs_share time="2024-06-28T12:37:19.311447691Z" level=error msg="container create failed" container=0f64f456267b88bf83b94cdda01fd3e966d3ebc171fa6ff892659e529eee8a3e error="context deadline exceeded" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:37:19.311504126Z" level=error msg="rollback failed nydusContainerCleanup()" container=0f64f456267b88bf83b94cdda01fd3e966d3ebc171fa6ff892659e529eee8a3e error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:37:19.314621248Z" level=error msg="ttrpc: received message on inactive stream" stream=1247 time="2024-06-28T12:43:20.374134065Z" level=error msg="createContainer failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=kata_agent time="2024-06-28T12:43:20.37430722Z" level=error msg="rollback failed nydusContainerCleanup" error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=fs_share time="2024-06-28T12:43:20.374402901Z" level=warning msg="Could not remove container share dir" error="no such file or directory" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d 
share-dir=/run/kata-containers/shared/sandboxes/a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d/mounts/8ae4d1812c864d67508780326aba8ef73981a42e3138433cae046fcc169131de source=virtcontainers subsystem=fs_share time="2024-06-28T12:43:20.374473118Z" level=error msg="container create failed" container=8ae4d1812c864d67508780326aba8ef73981a42e3138433cae046fcc169131de error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:43:20.374516908Z" level=error msg="rollback failed nydusContainerCleanup()" container=8ae4d1812c864d67508780326aba8ef73981a42e3138433cae046fcc169131de error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:49:34.341971658Z" level=error msg="createContainer failed" error="rpc error: code = DeadlineExceeded desc = timeout" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=kata_agent time="2024-06-28T12:49:34.342147751Z" level=error msg="rollback failed nydusContainerCleanup" error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=fs_share time="2024-06-28T12:49:34.342223115Z" level=warning msg="Could not remove container share dir" error="no such file or directory" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d share-dir=/run/kata-containers/shared/sandboxes/a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d/mounts/8b7f8f63286e3a1bb8071b63feabf58407b1f09b7ac147689c430e3e0848b338 source=virtcontainers subsystem=fs_share time="2024-06-28T12:49:34.342269687Z" level=error msg="container create failed" container=8b7f8f63286e3a1bb8071b63feabf58407b1f09b7ac147689c430e3e0848b338 error="rpc error: code = DeadlineExceeded desc = timeout" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:49:34.342317706Z" level=error msg="rollback failed nydusContainerCleanup()" container=8b7f8f63286e3a1bb8071b63feabf58407b1f09b7ac147689c430e3e0848b338 error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:55:39.302498977Z" level=error msg="createContainer failed" error="rpc error: code = DeadlineExceeded desc = timeout" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=kata_agent time="2024-06-28T12:55:39.302665985Z" level=error msg="rollback failed nydusContainerCleanup" error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 
sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=fs_share time="2024-06-28T12:55:39.302758066Z" level=warning msg="Could not remove container share dir" error="no such file or directory" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d share-dir=/run/kata-containers/shared/sandboxes/a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d/mounts/a5e74eb379eefa50f21392644d3887326639be3ccea846a8e544d4f45a3b0ef2 source=virtcontainers subsystem=fs_share time="2024-06-28T12:55:39.302833381Z" level=error msg="container create failed" container=a5e74eb379eefa50f21392644d3887326639be3ccea846a8e544d4f45a3b0ef2 error="rpc error: code = DeadlineExceeded desc = timeout" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:55:39.30288067Z" level=error msg="rollback failed nydusContainerCleanup()" container=a5e74eb379eefa50f21392644d3887326639be3ccea846a8e544d4f45a3b0ef2 error="nydusd only supports the QEMU/CLH hypervisor currently (see https://github.com/kata-containers/kata-containers/issues/3654)" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=container time="2024-06-28T12:55:40.290249577Z" level=info msg="watchSandbox gets an error or stop signal" error="" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=containerd-kata-shim-v2 time="2024-06-28T12:55:40.304141999Z" level=info msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = \"\"" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=containerd-kata-shim-v2 time="2024-06-28T12:55:41.304502498Z" level=info msg="agent has shutdown, return from watching of OOM events" error="ttrpc: closed" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=containerd-kata-shim-v2 time="2024-06-28T12:55:41.453958137Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x56409fa43cc0 [0xc000370700 0xc0002b70f0 0xc0002b7100 0xc0002b7110 0xc0002b7120 0xc0002b7130 0xc0002b7140 0xc0002b7150 0xc0002b7160 0xc00011a648 0xc000370720 0xc0002b7170 0xc0002b7190 0xc0002b4cc0] {0 0} }) kubepods-besteffort-pod9684d400_b701_4265_9bc6_86cbb6bfccd8.slice:cri-containerd:a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc0002ec6e0) %!s(*int64=0xc0002ec6e8) rwm} {%!s(bool=true) c %!s(*int64=0xc0002ec6f0) %!s(*int64=0xc0002ec6f8) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4248) %!s(*int64=0xc0002b4250) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4278) %!s(*int64=0xc0002b4280) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4518) %!s(*int64=0xc0002b4520) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4548) %!s(*int64=0xc0002b4550) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4758) %!s(*int64=0xc0002b4760) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4788) %!s(*int64=0xc0002b4790) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b47b8) %!s(*int64=0xc0002b47c0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b47e8) %!s(*int64=0xc0002b47f0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4a58) %!s(*int64=0xc0002b4a60) rwm} {%!s(bool=true) c 
%!s(*int64=0xc0002b4a88) %!s(*int64=0xc0002b4a90) rwm} {%!s(bool=true) c %!s(*int64=0xc0002b4ba8) %!s(*int64=0xc0002b4bb0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002ec818) %!s(*int64=0xc0002ec820) m} {%!s(bool=true) b %!s(*int64=0xc0002ec818) %!s(*int64=0xc0002ec820) m} {%!s(bool=true) c %!s(*int64=0xc0002ec828) %!s(*int64=0xc0002ec820) rwm} {%!s(bool=true) c %!s(*int64=0xc0002ec830) %!s(*int64=0xc0002ec838) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=185684 sandbox=a8f129fd598d3d7aea89de7b74de6792abab36316ff90d45bf8a33fb0da4393d source=virtcontainers subsystem=sandbox time="2024-06-28T17:40:08.190210878Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=katautils time="2024-06-28T17:40:08.190520557Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=cgroups time="2024-06-28T18:01:50.937764135Z" level=info msg="watchSandbox gets an error or stop signal" error="" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=containerd-kata-shim-v2 time="2024-06-28T18:01:50.950730014Z" level=warning msg="Agent did not stop sandbox" error="rpc error: code = Internal desc = Device or resource busy (os error 16)" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf sandboxid=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=virtcontainers subsystem=sandbox time="2024-06-28T18:01:52.288265669Z" level=info msg="agent has shutdown, return from watching of OOM events" error="ttrpc: closed" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=containerd-kata-shim-v2 time="2024-06-28T18:01:52.290512676Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x563d9d481cc0 [0xc00007f3e0 0xc00019fc60 0xc00019fc70 0xc00019fc80 0xc00019fc90 0xc00019fca0 0xc00019fcb0 0xc00019fcc0 0xc00019fcd0 0xc000012b40 0xc00007f400 0xc00019fce0 0xc00019fd00 0xc0000e9980] {0 0} }) kubepods-besteffort-podc9269890_19b6_4c25_8691_d0a5f33a38d7.slice:cri-containerd:432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc000213500) %!s(*int64=0xc000213508) rwm} {%!s(bool=true) c %!s(*int64=0xc000213510) %!s(*int64=0xc000213518) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8c08) %!s(*int64=0xc0000e8c10) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8c38) %!s(*int64=0xc0000e8c40) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8de8) %!s(*int64=0xc0000e8df0) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8e18) %!s(*int64=0xc0000e8e20) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8e48) %!s(*int64=0xc0000e8e50) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e8ff8) %!s(*int64=0xc0000e9000) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e9028) %!s(*int64=0xc0000e9030) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e9058) %!s(*int64=0xc0000e9060) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e92f8) %!s(*int64=0xc0000e9300) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e9328) %!s(*int64=0xc0000e9330) rwm} {%!s(bool=true) c %!s(*int64=0xc0000e9418) %!s(*int64=0xc0000e9420) rwm} {%!s(bool=true) c 
%!s(*int64=0xc000213638) %!s(*int64=0xc000213640) m} {%!s(bool=true) b %!s(*int64=0xc000213638) %!s(*int64=0xc000213640) m} {%!s(bool=true) c %!s(*int64=0xc000213648) %!s(*int64=0xc000213640) rwm} {%!s(bool=true) c %!s(*int64=0xc000213650) %!s(*int64=0xc000213658) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=66978 sandbox=432ae0083df3f408fd55feb3ebc3636ef9bf0387c89eccaa949313a1422270bf source=virtcontainers subsystem=sandbox time="2024-07-09T15:11:27.668913407Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=143874 sandbox=99b900151f809aa0767422c145601d2e7028221a853665a9ed76f7dfaf82def9 source=katautils time="2024-07-09T15:11:27.669431462Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=143874 sandbox=99b900151f809aa0767422c145601d2e7028221a853665a9ed76f7dfaf82def9 source=cgroups time="2024-07-09T15:21:29.089876894Z" level=info msg="Removing network after failure in createSandbox" name=containerd-shim-v2 pid=143874 sandbox=99b900151f809aa0767422c145601d2e7028221a853665a9ed76f7dfaf82def9 source=virtcontainers time="2024-07-09T15:21:29.092558256Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x5642d8d5a900 [0xc000510a20 0xc0001af4d0 0xc0001af4e0 0xc0001af4f0 0xc0001af500 0xc0001af510 0xc0001af520 0xc0001af530 0xc0001af540 0xc0004ba4c8 0xc000510a40 0xc0001af550 0xc0001af570 0xc000297920] {0 0} }) kubepods-besteffort-pod5fe76028_ab1c_4a9b_87cc_07332b8329ae.slice:cri-containerd:99b900151f809aa0767422c145601d2e7028221a853665a9ed76f7dfaf82def9 %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc000201f78) %!s(*int64=0xc000201fa0) rwm} {%!s(bool=true) c %!s(*int64=0xc000201fa8) %!s(*int64=0xc000201fb0) rwm} {%!s(bool=true) c %!s(*int64=0xc000297598) %!s(*int64=0xc0002975a0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002975c8) %!s(*int64=0xc0002975d0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002975f8) %!s(*int64=0xc000297600) rwm} {%!s(bool=true) c %!s(*int64=0xc000297628) %!s(*int64=0xc000297630) rwm} {%!s(bool=true) c %!s(*int64=0xc000297658) %!s(*int64=0xc000297660) rwm} {%!s(bool=true) c %!s(*int64=0xc000297688) %!s(*int64=0xc000297690) rwm} {%!s(bool=true) c %!s(*int64=0xc0002976b8) %!s(*int64=0xc0002976c0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002976e8) %!s(*int64=0xc0002976f0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002977d8) %!s(*int64=0xc0002977e0) rwm} {%!s(bool=true) c %!s(*int64=0xc000297808) %!s(*int64=0xc000297810) rwm} {%!s(bool=true) c %!s(*int64=0xc000297838) %!s(*int64=0xc000297840) rwm} {%!s(bool=true) c %!s(*int64=0xc0003a8f18) %!s(*int64=0xc0003a8f20) m} {%!s(bool=true) b %!s(*int64=0xc0003a8f18) %!s(*int64=0xc0003a8f20) m} {%!s(bool=true) c %!s(*int64=0xc0003a8f28) %!s(*int64=0xc0003a8f20) rwm} {%!s(bool=true) c %!s(*int64=0xc0003a8f30) %!s(*int64=0xc0003a8f38) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=143874 sandbox=99b900151f809aa0767422c145601d2e7028221a853665a9ed76f7dfaf82def9 source=virtcontainers subsystem=sandbox time="2024-07-09T15:21:30.808506705Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=148033 sandbox=80a57b58e516348878cb3aad3c23545f6da1674c3e3f570a707a537b43a07922 source=katautils 
time="2024-07-09T15:21:30.808835025Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=148033 sandbox=80a57b58e516348878cb3aad3c23545f6da1674c3e3f570a707a537b43a07922 source=cgroups time="2024-07-09T15:31:31.9155531Z" level=info msg="Removing network after failure in createSandbox" name=containerd-shim-v2 pid=148033 sandbox=80a57b58e516348878cb3aad3c23545f6da1674c3e3f570a707a537b43a07922 source=virtcontainers time="2024-07-09T15:31:31.91787064Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x555de2df2900 [0xc0003c77e0 0xc0002d1f70 0xc0002d1f80 0xc0002d1f90 0xc0002d1fa0 0xc0002d1fb0 0xc0002d1fc0 0xc0002d1fd0 0xc0002d1fe0 0xc000013518 0xc0003c7800 0xc0002d1ff0 0xc000248060 0xc00023fa10] {0 0} }) kubepods-besteffort-pod5fe76028_ab1c_4a9b_87cc_07332b8329ae.slice:cri-containerd:80a57b58e516348878cb3aad3c23545f6da1674c3e3f570a707a537b43a07922 %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc00025ee68) %!s(*int64=0xc00025ee90) rwm} {%!s(bool=true) c %!s(*int64=0xc00025ee98) %!s(*int64=0xc00025eea0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f688) %!s(*int64=0xc00023f690) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f6b8) %!s(*int64=0xc00023f6c0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f6e8) %!s(*int64=0xc00023f6f0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f718) %!s(*int64=0xc00023f720) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f748) %!s(*int64=0xc00023f750) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f778) %!s(*int64=0xc00023f780) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f7a8) %!s(*int64=0xc00023f7b0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f7d8) %!s(*int64=0xc00023f7e0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f8c8) %!s(*int64=0xc00023f8d0) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f8f8) %!s(*int64=0xc00023f900) rwm} {%!s(bool=true) c %!s(*int64=0xc00023f928) %!s(*int64=0xc00023f930) rwm} {%!s(bool=true) c %!s(*int64=0xc00025efc8) %!s(*int64=0xc00025efd0) m} {%!s(bool=true) b %!s(*int64=0xc00025efc8) %!s(*int64=0xc00025efd0) m} {%!s(bool=true) c %!s(*int64=0xc00025efd8) %!s(*int64=0xc00025efd0) rwm} {%!s(bool=true) c %!s(*int64=0xc00025efe0) %!s(*int64=0xc00025efe8) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=148033 sandbox=80a57b58e516348878cb3aad3c23545f6da1674c3e3f570a707a537b43a07922 source=virtcontainers subsystem=sandbox time="2024-07-09T15:31:33.805171372Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=152214 sandbox=3951e92d77299aeca9968ab783d7b94154d81f4a4e7bc40b6b3caeec40b46109 source=katautils time="2024-07-09T15:31:33.805644691Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=152214 sandbox=3951e92d77299aeca9968ab783d7b94154d81f4a4e7bc40b6b3caeec40b46109 source=cgroups time="2024-07-09T15:41:34.195611075Z" level=info msg="Removing network after failure in createSandbox" name=containerd-shim-v2 pid=152214 sandbox=3951e92d77299aeca9968ab783d7b94154d81f4a4e7bc40b6b3caeec40b46109 source=virtcontainers time="2024-07-09T15:41:34.198189674Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x564dde431900 [0xc0003c5040 0xc0002481d0 0xc0002481e0 0xc0002481f0 0xc000248200 0xc000248210 0xc000248220 0xc000248240 0xc000248270 0xc0000a42d0 0xc0003c5060 0xc000248360 0xc000248380 0xc00042d3b0] {0 0} }) 
kubepods-besteffort-pod5fe76028_ab1c_4a9b_87cc_07332b8329ae.slice:cri-containerd:3951e92d77299aeca9968ab783d7b94154d81f4a4e7bc40b6b3caeec40b46109 %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc0002f0478) %!s(*int64=0xc0002f04a0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002f04a8) %!s(*int64=0xc0002f04b0) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d028) %!s(*int64=0xc00042d030) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d058) %!s(*int64=0xc00042d060) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d088) %!s(*int64=0xc00042d090) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d0b8) %!s(*int64=0xc00042d0c0) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d0e8) %!s(*int64=0xc00042d0f0) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d118) %!s(*int64=0xc00042d120) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d148) %!s(*int64=0xc00042d150) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d178) %!s(*int64=0xc00042d180) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d268) %!s(*int64=0xc00042d270) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d298) %!s(*int64=0xc00042d2a0) rwm} {%!s(bool=true) c %!s(*int64=0xc00042d2c8) %!s(*int64=0xc00042d2d0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002f05d8) %!s(*int64=0xc0002f05e0) m} {%!s(bool=true) b %!s(*int64=0xc0002f05d8) %!s(*int64=0xc0002f05e0) m} {%!s(bool=true) c %!s(*int64=0xc0002f05e8) %!s(*int64=0xc0002f05e0) rwm} {%!s(bool=true) c %!s(*int64=0xc0002f05f0) %!s(*int64=0xc0002f05f8) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=152214 sandbox=3951e92d77299aeca9968ab783d7b94154d81f4a4e7bc40b6b3caeec40b46109 source=virtcontainers subsystem=sandbox time="2024-07-09T15:44:52.209084757Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=159211 sandbox=05741e884ef6c7f8a1573e78e22bd797c0ec78a84346349822b1a5dead4bca1a source=katautils time="2024-07-09T15:44:52.209507013Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=159211 sandbox=05741e884ef6c7f8a1573e78e22bd797c0ec78a84346349822b1a5dead4bca1a source=cgroups time="2024-07-09T15:54:53.491504589Z" level=info msg="Removing network after failure in createSandbox" name=containerd-shim-v2 pid=159211 sandbox=05741e884ef6c7f8a1573e78e22bd797c0ec78a84346349822b1a5dead4bca1a source=virtcontainers time="2024-07-09T15:54:53.49385197Z" level=error msg="failed to cleanup the &{%!s(*cgroups.cgroup=&{0x55e33a178900 [0xc000080a80 0xc00043ef30 0xc00043ef40 0xc00043ef70 0xc00043ef80 0xc00043ef90 0xc00043efa0 0xc00043efc0 0xc00043efd0 0xc0005482a0 0xc000080aa0 0xc00043efe0 0xc00043f000 0xc000353e30] {0 0} }) kubepods-besteffort-pod2a2cf442_7d91_4c49_afdd_dbe34933afd4.slice:cri-containerd:05741e884ef6c7f8a1573e78e22bd797c0ec78a84346349822b1a5dead4bca1a %!s(*specs.LinuxCPU=&{ }) [{%!s(bool=false) %!s(*int64=) %!s(*int64=) rwm} {%!s(bool=true) c %!s(*int64=0xc0002d5b38) %!s(*int64=0xc0002d5b60) rwm} {%!s(bool=true) c %!s(*int64=0xc0002d5b68) %!s(*int64=0xc0002d5b70) rwm} {%!s(bool=true) c %!s(*int64=0xc000353aa8) %!s(*int64=0xc000353ab0) rwm} {%!s(bool=true) c %!s(*int64=0xc000353ad8) %!s(*int64=0xc000353ae0) rwm} {%!s(bool=true) c %!s(*int64=0xc000353b08) %!s(*int64=0xc000353b10) rwm} {%!s(bool=true) c %!s(*int64=0xc000353b38) %!s(*int64=0xc000353b40) rwm} {%!s(bool=true) c %!s(*int64=0xc000353b68) %!s(*int64=0xc000353b70) rwm} {%!s(bool=true) c %!s(*int64=0xc000353b98) %!s(*int64=0xc000353ba0) rwm} 
{%!s(bool=true) c %!s(*int64=0xc000353bc8) %!s(*int64=0xc000353bd0) rwm} {%!s(bool=true) c %!s(*int64=0xc000353bf8) %!s(*int64=0xc000353c00) rwm} {%!s(bool=true) c %!s(*int64=0xc000353ce8) %!s(*int64=0xc000353cf0) rwm} {%!s(bool=true) c %!s(*int64=0xc000353d18) %!s(*int64=0xc000353d20) rwm} {%!s(bool=true) c %!s(*int64=0xc000353d48) %!s(*int64=0xc000353d50) rwm} {%!s(bool=true) c %!s(*int64=0xc0002d5d18) %!s(*int64=0xc0002d5d20) m} {%!s(bool=true) b %!s(*int64=0xc0002d5d18) %!s(*int64=0xc0002d5d20) m} {%!s(bool=true) c %!s(*int64=0xc0002d5d28) %!s(*int64=0xc0002d5d20) rwm} {%!s(bool=true) c %!s(*int64=0xc0002d5d30) %!s(*int64=0xc0002d5d38) rwm}] {%!s(int32=0) %!s(uint32=0)}} resource controllers" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=159211 sandbox=05741e884ef6c7f8a1573e78e22bd797c0ec78a84346349822b1a5dead4bca1a source=virtcontainers subsystem=sandbox time="2024-07-09T16:00:11.519863007Z" level=warning msg="Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting" name=containerd-shim-v2 pid=166746 sandbox=db826f2ba31b00fa1885d8de99cabc01e46b688695acc3d5bc784f89980d4fcb source=katautils time="2024-07-09T16:00:11.520274764Z" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=166746 sandbox=db826f2ba31b00fa1885d8de99cabc01e46b688695acc3d5bc784f89980d4fcb source=cgroups ```

---


# Container manager details


## Kubernetes

kubectl version

```
Client Version: v1.29.5+IKS
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

kubectl config view

```yaml
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```

systemctl show kubelet

``` Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=15s TimeoutStopUSec=1min 30s TimeoutAbortUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=1563 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success ReloadResult=success CleanResult=success UID=[not set] GID=[not set] NRestarts=1 OOMPolicy=stop ExecMainStartTimestamp=Tue 2024-06-25 15:18:32 UTC ExecMainStartTimestampMonotonic=49582381 ExecMainExitTimestampMonotonic=0 ExecMainPID=1563 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/swapoff ; argv[]=/sbin/swapoff -a ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartPre={ path=/bin/systemctl ; argv[]=/bin/systemctl stop -f haproxy.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartPre={ path=/usr/local/sbin/create-localproxy-netns.sh ; argv[]=/usr/local/sbin/create-localproxy-netns.sh ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartPreEx={ path=/sbin/swapoff ; argv[]=/sbin/swapoff -a ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartPreEx={ path=/bin/systemctl ; argv[]=/bin/systemctl stop -f haproxy.service ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartPreEx={ path=/usr/local/sbin/create-localproxy-netns.sh ; argv[]=/usr/local/sbin/create-localproxy-netns.sh ; flags=ignore-failure ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/usr/local/bin/kubelet ; argv[]=/usr/local/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml --root-dir=/var/data/kubelet --cloud-provider=external --v=2 --kubeconfig=/etc/kubernetes/kubelet-kubeconfig --hostname-override=10.240.0.91 --version=v1.29.5+IKS --runtime-cgroups=/podruntime/runtime ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStartEx={ path=/usr/local/bin/kubelet ; argv[]=/usr/local/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml --root-dir=/var/data/kubelet --cloud-provider=external --v=2 --kubeconfig=/etc/kubernetes/kubelet-kubeconfig --hostname-override=10.240.0.91 --version=v1.29.5+IKS --runtime-cgroups=/podruntime/runtime ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/kubelet.service MemoryCurrent=33116160 CPUUsageNSec=[not set] EffectiveCPUs= EffectiveMemoryNodes= TasksCurrent=0 IPIngressBytes=[no data] IPIngressPackets=[no data] IPEgressBytes=[no data] IPEgressPackets=[no data] IOReadBytes=18446744073709551615 IOReadOperations=18446744073709551615 IOWriteBytes=18446744073709551615 IOWriteOperations=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity CPUQuotaPeriodUSec=infinity AllowedCPUs= AllowedMemoryNodes= IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes DefaultMemoryLow=0 DefaultMemoryMin=0 MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=38306 IPAccounting=no UMask=0022 LimitCPU=infinity 
LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=0 LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=524288 LimitNOFILESoft=1024 LimitAS=infinity LimitASSoft=infinity LimitNPROC=127689 LimitNPROCSoft=127689 LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=127689 LimitSIGPENDINGSoft=127689 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 CPUAffinity= CPUAffinityFromNUMA=no NUMAPolicy=n/a NUMAMask= TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogIdentifier=kubelet.service SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectKernelLogs=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 TimeoutCleanUSec=infinity MemoryDenyWriteExecute=no RestrictRealtime=no RestrictSUIDSGID=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private ProtectHostname=no KillMode=control-group KillSignal=15 RestartKillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=kubelet.service Names=kubelet.service Requires=sysinit.target system.slice WantedBy=multi-user.target Conflicts=shutdown.target Before=shutdown.target multi-user.target After=basic.target auditd.service network.target system.slice sysinit.target systemd-journald.socket Documentation=https://github.com/kubernetes/kubernetes Description=Kubernetes Kubelet LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/kubelet.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Tue 2024-06-25 15:21:23 UTC StateChangeTimestampMonotonic=220149568 InactiveExitTimestamp=Tue 2024-06-25 15:18:32 UTC InactiveExitTimestampMonotonic=49559119 ActiveEnterTimestamp=Tue 2024-06-25 15:18:32 UTC ActiveEnterTimestampMonotonic=49582468 ActiveExitTimestamp=Tue 2024-06-25 15:18:28 UTC ActiveExitTimestampMonotonic=44520533 InactiveEnterTimestamp=Tue 2024-06-25 15:18:32 UTC InactiveEnterTimestampMonotonic=49557160 CanStart=yes CanStop=yes 
CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Tue 2024-06-25 15:18:32 UTC ConditionTimestampMonotonic=49557225 AssertTimestamp=Tue 2024-06-25 15:18:32 UTC AssertTimestampMonotonic=49557226 Transient=no Perpetual=no StartLimitIntervalUSec=0 StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=d7d13303b0e14ba39b6b1515cd914582 CollectMode=inactive ```


## containerd

containerd --version

``` containerd github.com/containerd/containerd v1.7.17 3a4de459a68952ffb703bbe7f2290861a75b6b67 ```

systemctl show containerd

``` Type=notify Restart=always NotifyAccess=main RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s TimeoutAbortUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=163694 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success ReloadResult=success CleanResult=success UID=[not set] GID=[not set] NRestarts=0 OOMPolicy=continue ExecMainStartTimestamp=Tue 2024-07-09 15:55:41 UTC ExecMainStartTimestampMonotonic=1211878217661 ExecMainExitTimestampMonotonic=0 ExecMainPID=163694 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Tue 2024-07-09 15:55:41 UTC] ; stop_time=[Tue 2024-07-09 15:55:41 UTC] ; pid=163693 ; code=exited ; status=0 } ExecStartPreEx={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; flags=ignore-failure ; start_time=[Tue 2024-07-09 15:55:41 UTC] ; stop_time=[Tue 2024-07-09 15:55:41 UTC] ; pid=163693 ; code=exited ; status=0 } ExecStart={ path=/usr/local/bin/containerd ; argv[]=/usr/local/bin/containerd ; ignore_errors=no ; start_time=[Tue 2024-07-09 15:55:41 UTC] ; stop_time=[n/a] ; pid=163694 ; code=(null) ; status=0/0 } ExecStartEx={ path=/usr/local/bin/containerd ; argv[]=/usr/local/bin/containerd ; flags= ; start_time=[Tue 2024-07-09 15:55:41 UTC] ; stop_time=[n/a] ; pid=163694 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=40165376 CPUUsageNSec=[not set] EffectiveCPUs= EffectiveMemoryNodes= TasksCurrent=0 IPIngressBytes=[no data] IPIngressPackets=[no data] IPEgressBytes=[no data] IPEgressPackets=[no data] IOReadBytes=18446744073709551615 IOReadOperations=18446744073709551615 IOWriteBytes=18446744073709551615 IOWriteOperations=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct cpuset io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity CPUQuotaPeriodUSec=infinity AllowedCPUs= AllowedMemoryNodes= IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes DefaultMemoryLow=0 DefaultMemoryMin=0 MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no Environment=TMPDIR=/var/data/tmp UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=infinity LimitNOFILESoft=infinity LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=127689 LimitSIGPENDINGSoft=127689 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 CPUAffinity= CPUAffinityFromNUMA=no NUMAPolicy=n/a NUMAMask= TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null 
StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectKernelLogs=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 TimeoutCleanUSec=infinity MemoryDenyWriteExecute=no RestrictRealtime=no RestrictSUIDSGID=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private ProtectHostname=no KillMode=process KillSignal=15 RestartKillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=nydus-snapshotter.service sysinit.target system.slice RequiredBy=pull-dependencies.service WantedBy=multi-user.target Conflicts=shutdown.target Before=pull-dependencies.service shutdown.target multi-user.target After=local-fs.target basic.target nydus-snapshotter.service sysinit.target network.target systemd-journald.socket system.slice Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/etc/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Tue 2024-07-09 15:55:42 UTC StateChangeTimestampMonotonic=1211879326349 InactiveExitTimestamp=Tue 2024-07-09 15:55:41 UTC InactiveExitTimestampMonotonic=1211878211061 ActiveEnterTimestamp=Tue 2024-07-09 15:55:42 UTC ActiveEnterTimestampMonotonic=1211879326349 ActiveExitTimestamp=Tue 2024-07-09 15:55:41 UTC ActiveExitTimestampMonotonic=1211878180305 InactiveEnterTimestamp=Tue 2024-07-09 15:55:41 UTC InactiveEnterTimestampMonotonic=1211878207095 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Tue 2024-07-09 15:55:41 UTC ConditionTimestampMonotonic=1211878208314 AssertTimestamp=Tue 2024-07-09 15:55:41 UTC AssertTimestampMonotonic=1211878208314 Transient=no Perpetual=no StartLimitIntervalUSec=0 StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=6012518dc9654479ae79a698678da5ec CollectMode=inactive ```

cat /etc/containerd/config.toml

```toml imports = ["/etc/containerd/config.toml.d/nydus-snapshotter.toml", "/etc/containerd/config.toml.d/nydus-snapshotter.toml"] version = 2 root = "/var/data/cripersistentstorage" state = "/run/containerd" oom_score = 0 [grpc] address = "/run/containerd/containerd.sock" uid = 0 gid = 0 max_recv_message_size = 16777216 max_send_message_size = 16777216 [debug] address = "" uid = 0 gid = 0 level = "debug" [metrics] address = "10.240.0.91:10210" grpc_histogram = false [cgroup] path = "/podruntime/runtime" [plugins] [plugins."io.containerd.monitor.v1.cgroups"] no_prometheus = false [plugins."io.containerd.grpc.v1.cri"] disable_tcp_service = true stream_server_address = "127.0.0.1" stream_server_port = "0" stream_idle_timeout = "15m" image_pull_progress_timeout = "5m" enable_selinux = false selinux_category_range = 1024 sandbox_image = "us.icr.io/armada-master/pause-multiarch:3.9" stats_collect_period = 10 systemd_cgroup = false enable_tls_streaming = false tolerate_missing_hugetlb_controller = true ignore_image_defined_volumes = false [plugins."io.containerd.grpc.v1.cri".containerd] snapshotter = "overlayfs" default_runtime_name = "runc" no_pivot = false disable_snapshot_annotations = false discard_unpacked_layers = false [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] runtime_type = "io.containerd.runc.v2" pod_annotations = [] container_annotations = [] privileged_without_host_devices = false base_runtime_spec = "" [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] NoPivotRoot = false NoNewKeyring = false ShimCgroup = "" IoUid = 0 IoGid = 0 BinaryName = "" Root = "" CriuPath = "" SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.untrusted] runtime_type = "io.containerd.runc.v2" pod_annotations = [] container_annotations = [] privileged_without_host_devices = false [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-remote] runtime_type = "io.containerd.kata-remote.v2" privileged_without_host_devices = true pod_annotations = ["io.katacontainers.*"] snapshotter = "nydus" [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-remote.options] ConfigPath = "/opt/kata/share/defaults/kata-containers//configuration-remote.toml" [plugins."io.containerd.grpc.v1.cri".cni] bin_dir = "/opt/cni/bin" conf_dir = "/etc/cni/net.d" max_conf_num = 1 conf_template = "" [plugins."io.containerd.grpc.v1.cri".registry] config_path = "/etc/containerd/certs.d" [plugins."io.containerd.service.v1.diff-service"] default = ["walking"] [plugins."io.containerd.gc.v1.scheduler"] pause_threshold = 0.02 deletion_threshold = 0 mutation_threshold = 100 schedule_delay = "0s" startup_delay = "100ms" ```

---


# Packages

Have `dpkg`

dpkg -l|egrep "(cc-oci-runtime|cc-runtime|runv|kata-runtime|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"

``` ```

No `rpm`

---

# Kata Monitor

Kata Monitor binary: `kata-monitor`.

kata-monitor --version

```
kata-monitor
Version: 0.3.0
Go version: go1.22.2
Git commit: 6a4919eeb9bfd86c3a4d74ce02b31c1f9eb85aef
OS/Arch: linux/amd64
```

---