# Meta details
Running `kata-collect-data.sh` version `2.3.2 (commit 1af292c9e693e9bc8e8324a9eb860dad45306fb5)` at `2022-02-23.10:29:21.560061614+0800`.
---
Runtime is `/usr/bin/kata-runtime`.
# `kata-env`
```toml
[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-5.10.25-88-nvidia-gpu"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none"
[Meta]
Version = "1.0.26"
[Image]
Path = "/opt/kata/share/kata-containers/kata-clearlinux-latest.image"
[Initrd]
Path = ""
[Hypervisor]
MachineType = "q35"
Version = "QEMU emulator version 6.1.0 (kata-static)\nCopyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers"
Path = "/opt/kata/bin/qemu-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
SharedFS = "virtio-9p"
VirtioFSDaemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
SocketPath = ""
Msize9p = 8192
MemorySlots = 10
PCIeRootPort = 0
HotplugVFIOOnRootBus = false
Debug = false
[Runtime]
Path = "/opt/kata/bin/kata-runtime"
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false
[Runtime.Config]
Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
[Runtime.Version]
OCI = "1.0.2-dev"
[Runtime.Version.Version]
Semver = "2.3.2"
Commit = "1af292c9e693e9bc8e8324a9eb860dad45306fb5"
Major = 2
Minor = 3
Patch = 2
[Netmon]
Path = "/opt/kata/libexec/kata-containers/kata-netmon"
Debug = false
Enable = false
[Netmon.Version]
Semver = "<
# Runtime config files
## Runtime default config files
```
/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
```
## Runtime config file contents
Config file `/etc/kata-containers/configuration.toml` not found
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
#kernel = "/opt/kata/share/kata-containers/vmlinux.container"
kernel = "/usr/share/kata-containers/vmlinuz-nvidia-gpu.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "q35"
# Enable confidential guest support.
# Toggling that setting may trigger different hardware features, ranging
# from memory encryption to both memory and CPU-state encryption and integrity.
# The Kata Containers runtime dynamically detects the available feature set and
# aims at enabling the largest possible one.
# Default false
# confidential_guest = true
# Enable running QEMU VMM as a non-root user.
# By default the QEMU VMM runs as root. When this is set to true, the QEMU VMM
# process runs as a non-root, randomly chosen user. See the documentation for the
# limitations of this mode.
# rootless = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for "io.katacontainers.config.hypervisor.path"
enable_annotations = []
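# For illustration, a minimal sketch (not set in this report): to allow
# overriding only the default memory and vCPU count via annotations, assuming
# the standard io.katacontainers.config.hypervisor.* base names:
# enable_annotations = ["default_memory", "default_vcpus"]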
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/bin/qemu-system-x86_64"]
valid_hypervisor_paths = ["/opt/kata/bin/qemu-system-x86_64"]
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"`
cpu_features="pmu=off,kvm=off"
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
# NOTICE: on ARM platforms with a GICv2 interrupt controller, set it to 8.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only PCI bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Specifies whether virtio-mem will be enabled.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor;
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
shared_fs = "virtio-9p"
#shared_fs = "virtio-fs"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
# List of valid annotations values for the virtiofs daemon
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/libexec/kata-qemu/virtiofsd"]
valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/kata-qemu/virtiofsd"]
# Default size of DAX cache in MiB
virtio_fs_cache_size = 0
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = ["--thread-pool-size=1"]
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "auto"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies whether cache-related options will be set for block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in a Linux reserved block device major number
# (range 240-254) being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's
# command line: intel_iommu=on,iommu=pt
#enable_iommu = true
# Enable IOMMU_PLATFORM, default false
# Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true
# List of valid annotations values for the vhost user store path
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# List of valid annotations values for the file_mem_backend annotation
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: [""]
valid_file_mem_backends = [""]
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# -pflash can add image files to the VM. Its arguments should be in the format
# ["/path/to/flash0.img", "/path/to/flash1.img"]
pflashes = []
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge.
# Default false
#hotplug_vfio_on_root_bus = true
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using large PCI BAR devices, such as an NVIDIA GPU.
# The value is the number of pcie_root_port devices to add.
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 1
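# For illustration, a sketch of the combination typically needed for
# large-BAR GPU passthrough on q35 (the values below are assumptions, not
# taken from this report):
# hotplug_vfio_on_root_bus = true
# pcie_root_port = 2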
# If the vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# List of valid annotations values for entropy_source
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/dev/urandom","/dev/random",""]
valid_entropy_sources = ["/dev/urandom","/dev/random",""]
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
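# For illustration, a hypothetical guest rootfs layout under the default
# guest_hook_path (the file names are assumptions):
# /usr/share/oci/hooks/prestart/10-network-setup.sh
# /usr/share/oci/hooks/poststop/90-cleanup.sh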
#
# Use rx Rate Limiter to control network I/O inbound bandwidth (size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) to discipline traffic.
# Default 0-sized value means unlimited rate.
#rx_rate_limiter_max_rate = 0
# Use tx Rate Limiter to control network I/O outbound bandwidth (size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) and ifb(Intermediate Functional Block)
# to discipline traffic.
# Default 0-sized value means unlimited rate.
#tx_rate_limiter_max_rate = 0
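# For illustration, a sketch capping inbound traffic at roughly 100 Mbit/s
# (100 Mbit/s = 100000000 bits/sec, matching the bits/sec unit above):
# rx_rate_limiter_max_rate = 100000000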
# Set where to save the guest memory dump file.
# If set, when a GUEST_PANICKED event occurs,
# guest memory will be dumped to the host filesystem under guest_memory_dump_path.
# This directory will be created automatically if it does not exist.
#
# The dumped file (also called vmcore) can be processed with crash or gdb.
#
# WARNING:
# Dumping the guest's memory can take a long time, depending on the amount of
# guest memory, and can use a lot of disk space.
#guest_memory_dump_path="/var/crash/kata"
# Whether to enable paging.
# Basically, if you want to use "gdb" rather than "crash",
# or need the guest-virtual addresses in the ELF vmcore,
# then you should enable paging.
#
# See: https://www.qemu.org/docs/master/qemu-qmp-ref.html#Dump-guest-memory for details
#guest_memory_dump_paging=false
# Enable swap in the guest. Default false.
# When enable_guest_swap is enabled, insert a raw file to the guest as the swap device
# if the swappiness of a container (set by annotation "io.katacontainers.container.resource.swappiness")
# is bigger than 0.
# The size of the swap device should be
# swap_in_bytes (set by annotation "io.katacontainers.container.resource.swap_in_bytes") - memory_limit_in_bytes.
# If swap_in_bytes is not set, the size should be memory_limit_in_bytes.
# If swap_in_bytes and memory_limit_in_bytes are not set, the size should
# be default_memory.
#enable_guest_swap = true
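# Worked example with assumed values: if memory_limit_in_bytes is 2 GiB and
# the swap_in_bytes annotation is 3 GiB, the inserted swap device would be
# 3 GiB - 2 GiB = 1 GiB; with no swap_in_bytes annotation it would be 2 GiB.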
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through a Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert a VM to gRPC format and transport it when it gets
# a request from a client.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
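# For illustration, a sketch enabling a cache of 3 pre-created VMs (this
# assumes a VMCache server is already listening on the endpoint below):
# vm_cache_number = 3
# vm_cache_endpoint = "/var/run/kata-containers/cache.sock"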
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the agent will generate OpenTelemetry trace spans.
#
# Notes:
#
# - If the runtime also has tracing enabled, the agent spans will be
# associated with the appropriate runtime parent span.
# - If enabled, the runtime will wait for the container to shut down,
# increasing the container shutdown time slightly.
#
# (default: disabled)
#enable_tracing = true
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered the module name and the rest as its parameters.
# The container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, such as architecture and version.
#
kernel_modules=[]
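# For illustration only (the module names are assumptions, not taken from
# this report), a GPU guest might preload the NVIDIA driver modules:
# kernel_modules = ["nvidia", "nvidia_uvm"]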
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the hypervisor
# through "kata-runtime exec
```
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
#kernel = "/usr/share/kata-containers/vmlinuz.container"
kernel = "/usr/share/kata-containers/vmlinuz-nvidia-gpu.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for "io.katacontainers.config.hypervisor.path"
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: [".*"]
enable_annotations = [".*"]
# List of valid annotation values for the hypervisor path
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/usr/bin/qemu-vanilla-system-x86_64"]
valid_hypervisor_paths = ["/usr/bin/qemu-vanilla-system-x86_64"]
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"`
cpu_features="pmu=off"
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
# NOTICE: on ARM platforms with a GICv2 interrupt controller, set it to 8.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only PCI bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Specifies whether virtio-mem will be enabled.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor;
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-9p (default)
# - virtio-fs
shared_fs = "virtio-9p"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"
# List of valid annotation values for the virtiofs daemon path
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/usr/bin/virtiofsd"]
valid_virtio_fs_daemon_paths = ["/usr/bin/virtiofsd"]
# Default size of DAX cache in MiB
virtio_fs_cache_size = 0
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "auto"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies whether cache-related options will be set for block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in a Linux reserved block device major number
# (range 240-254) being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's
# command line: intel_iommu=on,iommu=pt
#enable_iommu = true
# Enable IOMMU_PLATFORM, default false
# Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true
# List of valid annotation values for the vhost user store path
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# List of valid annotation values for the file_mem_backend path
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: [""]
valid_file_mem_backends = [""]
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, vsocks are used to communicate directly
# with the agent and no proxy is started; otherwise, unix sockets are used
# and a proxy is started to communicate with the agent.
# Default false
#use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
hotplug_vfio_on_root_bus = true
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using large PCI BAR devices, such as an NVIDIA GPU.
# The value is the number of pcie_root_port devices to add.
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
pcie_root_port = 1
# If the vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through a Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert a VM to gRPC format and transport it when it gets
# a request from a client.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered the module name and the rest as its parameters.
# The container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, such as architecture and version.
#
kernel_modules=[]
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used with a customized network. Only creates a tap device; no veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
# If enabled, containers are allowed to join the pid namespace of the agent
# when the env variable KATA_AGENT_PIDNS is set for a container.
# Use this with caution and only when required, as this option allows the container
# to access the agent process. It is recommended to enable this option
# only in debug scenarios and with containers with lowered privileges.
#enable_agent_pidns = true
```
Containerd shim v2 is `/usr/bin/containerd-shim-kata-v2`.
```
Kata Containers containerd shim: id: "io.containerd.kata.v2", version: 2.3.2, commit: 1af292c9e693e9bc8e8324a9eb860dad45306fb5
```
# KSM throttler
## version
```
kata-ksm-throttler version 1.12.1-028b8fd
```
# Image details
```yaml
---
osbuilder:
url: "https://github.com/kata-containers/kata-containers/tools/osbuilder"
version: "unknown"
rootfs-creation-time: "2022-02-03T08:06:02.041217937+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
name: "Clear"
version: "35790"
packages:
default:
- "chrony"
- "iptables-bin"
- "kmod-bin"
- "libseccomp"
- "libudev0-shim"
- "systemd"
- "util-linux-bin"
extra:
agent:
url: "https://github.com/kata-containers/kata-containers"
name: "kata-agent"
version: "2.3.2"
agent-is-init-daemon: "no"
```
---
# Initrd details
No initrd
---
# Logfiles
## Runtime logs
Recent runtime problems found in system journal:
```
time="2022-02-18T14:00:03.370867402+08:00" level=error msg="hook error" arch=amd64 command=delete container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 error="exit status 1: stdout: , stderr: time=\"2022-02-18T14:00:03+08:00\" level=fatal msg=\"state dir must be set\"\n" hook-type=post-stop name=kata-runtime pid=3759 sandbox=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 source=katautils subsystem=hook
time="2022-02-18T14:00:03.370997439+08:00" level=error msg="exit status 1: stdout: , stderr: time=\"2022-02-18T14:00:03+08:00\" level=fatal msg=\"state dir must be set\"\n" arch=amd64 command=delete container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 name=kata-runtime pid=3759 sandbox=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 source=runtime
time="2022-02-18T14:00:03.473140607+08:00" level=error msg="open /run/vc/sbs/4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863: no such file or directory" arch=amd64 command=delete container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 name=kata-runtime pid=3866 source=runtime
time="2022-02-18T14:05:14.311474791+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 error="Proxy is not running" name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 sandboxid=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.311592726+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49/qmp.sock): dial unix /run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:14.311627343+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 error="dial unix /run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49/qmp.sock: connect: no such file or directory" name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.311671885+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 dir=/run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 error="lstat /run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49: no such file or directory" name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.311835541+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 error="Proxy is not running" name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 sandboxid=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.312016189+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766/qmp.sock): dial unix /run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:14.312080173+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 error="dial unix /run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766/qmp.sock: connect: no such file or directory" name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.312157215+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 dir=/run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 error="lstat /run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766: no such file or directory" name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.315346009+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 error="Proxy is not running" name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 sandboxid=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.315456512+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02/qmp.sock): dial unix /run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:14.315494833+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 error="dial unix /run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02/qmp.sock: connect: no such file or directory" name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.315538151+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 dir=/run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 error="lstat /run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02: no such file or directory" name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.318359301+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e error="Proxy is not running" name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e sandboxid=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.318456405+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e/qmp.sock): dial unix /run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:14.318509233+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e error="dial unix /run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e/qmp.sock: connect: no such file or directory" name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.318553734+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e dir=/run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e error="lstat /run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e: no such file or directory" name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.592456061+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 error="open /run/vc/vm/247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766/pid: no such file or directory" name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.592797432+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 name=kata-runtime pid=27498 sandbox=247ce9e309411c2570ad2de9ef8dfe7949a851fc53c71508d543e2e28abc3766 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.664024782+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 error="open /run/vc/vm/1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02/pid: no such file or directory" name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.664267697+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 error="open /run/vc/vm/c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49/pid: no such file or directory" name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.664344833+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e error="open /run/vc/vm/f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e/pid: no such file or directory" name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:14.664399239+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 name=kata-runtime pid=27497 sandbox=1b78923b6f246f0debab93cd9324228893997701b7bc240c2d1fc78579a1af02 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.664629927+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 name=kata-runtime pid=27496 sandbox=c74d82b818348100606cc1f94e6658c778b32c76e50dac649807874737bcdd49 source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:14.66475581+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e name=kata-runtime pid=27499 sandbox=f7b14dd56cbb4342a6bb28621f3ecb030dc8093ef860c22db788e6d6a91cc70e source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:36.68616841+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="Proxy is not running" name=kata-runtime pid=28965 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef sandboxid=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:36.68637189+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock): dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=28965 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:36.686442391+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" name=kata-runtime pid=28965 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:36.686525078+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef dir=/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="lstat /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef: no such file or directory" name=kata-runtime pid=28965 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:36.686718697+08:00" level=error msg="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=28965 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=runtime
time="2022-02-18T14:05:55.270018698+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="Proxy is not running" name=kata-runtime pid=30383 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef sandboxid=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=sandbox
time="2022-02-18T14:05:55.270362576+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock): dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=30383 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qmp
time="2022-02-18T14:05:55.270411959+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" name=kata-runtime pid=30383 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:55.270462889+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef dir=/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="lstat /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef: no such file or directory" name=kata-runtime pid=30383 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:05:55.270557107+08:00" level=error msg="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=30383 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=runtime
time="2022-02-18T14:07:09.459114578+08:00" level=error msg="open /run/vc/sbs/4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863: no such file or directory" arch=amd64 command=delete container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 name=kata-runtime pid=35418 source=runtime
time="2022-02-18T14:07:39.553078023+08:00" level=error msg="open /run/vc/sbs/4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863: no such file or directory" arch=amd64 command=kill container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 name=kata-runtime pid=37754 source=runtime
time="2022-02-18T14:07:53.732429418+08:00" level=error msg="open /run/vc/sbs/4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863: no such file or directory" arch=amd64 command=kill container=4853f0473e78e32366c774bbcad436172ff8000bf0aa86c77e5459fb7a285863 name=kata-runtime pid=38619 source=runtime
time="2022-02-18T14:08:22.092217658+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="Proxy is not running" name=kata-runtime pid=40743 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef sandboxid=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=sandbox
time="2022-02-18T14:08:22.092407487+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock): dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=40743 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qmp
time="2022-02-18T14:08:22.092461781+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" name=kata-runtime pid=40743 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:08:22.092530735+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef dir=/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="lstat /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef: no such file or directory" name=kata-runtime pid=40743 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:08:22.092826829+08:00" level=error msg="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=40743 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=runtime
time="2022-02-18T14:08:41.55359234+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="Proxy is not running" name=kata-runtime pid=42002 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef sandboxid=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=sandbox
time="2022-02-18T14:08:41.553711218+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock): dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=42002 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qmp
time="2022-02-18T14:08:41.553745221+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" name=kata-runtime pid=42002 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:08:41.553808853+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef dir=/run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef error="lstat /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef: no such file or directory" name=kata-runtime pid=42002 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=virtcontainers subsystem=qemu
time="2022-02-18T14:08:41.553905354+08:00" level=error msg="dial unix /run/vc/vm/1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef/qmp.sock: connect: no such file or directory" arch=amd64 command=delete container=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef name=kata-runtime pid=42002 sandbox=1597208cc210370f2d6d0fd3ed2ae73c79b89d771056bbe615d700c4e3bbc1ef source=runtime
```
No recent throttler problems found in system journal.
Recent problems found in system journal:
```
time="2022-02-23T10:09:17.207345001+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=72153 rootfs-dir=/run/kata-containers/shared/sandboxes/bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53/mounts/ccb0288159eb0a219e93aa33136931299d2bf3bfc8645d7e11c4038d7540ef02/rootfs sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=mount
time="2022-02-23T10:14:22.271561148+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=72153 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=container
time="2022-02-23T10:14:23.824732387+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=72153 rootfs-dir=/run/kata-containers/shared/sandboxes/bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53/mounts/76aaf3fad6c9f3ad9807e618e9a20d1fc01340c36f6cc447fd6219510ce17539/rootfs sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=mount
time="2022-02-23T10:16:31.760040914+08:00" level=warning msg="failed to get OOM event from sandbox" error="ttrpc: closed" name=containerd-shim-v2 pid=72153 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=containerd-kata-shim-v2
time="2022-02-23T10:16:31.760071969+08:00" level=error msg="Wait for process failed" container=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 error="ttrpc: closed" name=containerd-shim-v2 pid=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=containerd-kata-shim-v2
time="2022-02-23T10:16:31.836742616+08:00" level=warning msg="Agent did not stop sandbox" error="ttrpc: closed" name=containerd-shim-v2 pid=72153 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 sandboxid=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=sandbox
time="2022-02-23T10:16:31.836845004+08:00" level=error msg="Fail to execute qmp QUIT" error="exitting QMP loop, command cancelled" name=containerd-shim-v2 pid=72153 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=qemu
time="2022-02-23T10:16:31.837849307+08:00" level=error msg="failed to Cleanup cgroups" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=72153 sandbox=bfdc8cf8373a9fe3c17951a786d1fb7628d6326bb089cff46bb63dacc9029a53 source=virtcontainers subsystem=sandbox
time="2022-02-23T10:16:33.640374583+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=73921 sandbox=fc2d55dc9208d6e57b5c72ad2574fca94fb90eaf89bf71f88f84765a0a695a05 source=virtcontainers subsystem=container
time="2022-02-23T10:16:35.204624925+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=73921 rootfs-dir=/run/kata-containers/shared/sandboxes/fc2d55dc9208d6e57b5c72ad2574fca94fb90eaf89bf71f88f84765a0a695a05/mounts/8afe09ec8316c7c1439ea399296f2d4de986fc9f478ed02f3cecc2089515328d/rootfs sandbox=fc2d55dc9208d6e57b5c72ad2574fca94fb90eaf89bf71f88f84765a0a695a05 source=virtcontainers subsystem=mount
time="2022-02-23T10:16:36.033239554+08:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=73921 sandbox=fc2d55dc9208d6e57b5c72ad2574fca94fb90eaf89bf71f88f84765a0a695a05 source=containerd-kata-shim-v2
time="2022-02-23T10:22:05.7846467+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=container
time="2022-02-23T10:22:07.331374125+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=98685 rootfs-dir=/run/kata-containers/shared/sandboxes/6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5/mounts/716bf291dd92462ae1ad915364908c1f5b3123431bc267463123279c001c9046/rootfs sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=mount
time="2022-02-23T10:22:08.798432712+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=container
time="2022-02-23T10:22:10.347567409+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=98685 rootfs-dir=/run/kata-containers/shared/sandboxes/6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5/mounts/7d0b35bce7107c045725b367c0518ff47b8a60a2892cc0beda6e54528ae654e8/rootfs sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=mount
time="2022-02-23T10:22:24.261006172+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=container
time="2022-02-23T10:22:25.796773465+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=98685 rootfs-dir=/run/kata-containers/shared/sandboxes/6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5/mounts/b3026bd24b079b0d72bdf217685c0c598b1b1e9901a62f69427d15d3c88479a4/rootfs sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=mount
time="2022-02-23T10:22:51.382564769+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=container
time="2022-02-23T10:22:52.932471163+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=98685 rootfs-dir=/run/kata-containers/shared/sandboxes/6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5/mounts/482b7c7a049d1e03a928ae1063f129ad5feb6ccbc692044b806543612cbcecdb/rootfs sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=mount
time="2022-02-23T10:23:37.257091621+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=container
time="2022-02-23T10:23:38.790015133+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=98685 rootfs-dir=/run/kata-containers/shared/sandboxes/6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5/mounts/ad533c145adb91b3a24451fda9de9d29f33cd0e4a500119d8e65a833386b8adf/rootfs sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=mount
time="2022-02-23T10:24:53.724091836+08:00" level=warning msg="failed to get OOM event from sandbox" error="ttrpc: closed" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=containerd-kata-shim-v2
time="2022-02-23T10:24:53.724083838+08:00" level=error msg="Wait for process failed" container=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 error="ttrpc: closed" name=containerd-shim-v2 pid=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=containerd-kata-shim-v2
time="2022-02-23T10:24:53.852635093+08:00" level=warning msg="Agent did not stop sandbox" error="Dead agent" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 sandboxid=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=sandbox
time="2022-02-23T10:24:53.852741862+08:00" level=error msg="Fail to execute qmp QUIT" error="exitting QMP loop, command cancelled" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=qemu
time="2022-02-23T10:24:53.853709769+08:00" level=error msg="failed to Cleanup cgroups" error="cgroups: cgroup deleted" name=containerd-shim-v2 pid=98685 sandbox=6d42f20995c48475af30cc2aa05e0deef9594796dc57eebfeae25f1b27b26ef5 source=virtcontainers subsystem=sandbox
time="2022-02-23T10:25:13.697489555+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:25:15.249164695+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/552c8c369531bc07f604b2297f32863d353e587e2b8568e2e631a1c3ed7a7742/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:25:16.482106207+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:25:18.025986552+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/6dc9078c8df2008bbb913df02e20f31e10f8fcfee9d708303fc70113b2f465c6/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:25:34.557006325+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:25:36.131748846+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/cf6602be16bfbd93148f436ae182f4347efc8fa0d9a370236b3a65b3e379df73/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:26:00.336898355+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:26:01.929307615+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/2adefa1d2d8b40f1823cf80b8ba0f74f61badc164e72f11116bc4ca3d6d8d1f9/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
time="2022-02-23T10:26:02.120393997+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:26:03.780315814+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:26:03.814651881+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/b9f8ee45af62fb1aea546686f1a849a054e36fbd90b50315a8c86ed1b7387083/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:26:05.329647077+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/4919215930542b202ccc73880f562cdae3de5cb737e139ca5d8e39b3e21d2ff8/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
time="2022-02-23T10:26:18.252626852+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:26:19.815526942+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/a60eb5d2e3df812e366701fb4573cb8841cb283107b86056e04f97500b4dfbab/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
time="2022-02-23T10:26:42.260328783+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:26:43.813221321+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/2e6184494b790d91ab0c502ba957877b4f30a6699b3a1c2e2e7b7a49a63f3d4c/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
time="2022-02-23T10:26:46.333407997+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:26:47.880058409+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/c50c0244f6e282ed7ed93f7442a25a96e1b7083acf1b7ff5b6b5969f0747aa6b/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:27:27.26082225+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:27:28.838648436+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/efd88f31d79b1494bdc5e2ae5c12b6dcd8ba2c792ba51748873354274b4f3f6e/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
time="2022-02-23T10:28:11.2823546+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=112958 sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=container
time="2022-02-23T10:28:12.86744634+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=112958 rootfs-dir=/run/kata-containers/shared/sandboxes/b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613/mounts/4c7f2f0cfcf860a7a74b4e72ebc5a8e901aac4d2a650d1cbc1d64efff21a9dd0/rootfs sandbox=b299220b4916a20b8ba3715f91b18ac9614eaa29f394244d68c64c6d4dbc3613 source=virtcontainers subsystem=mount
time="2022-02-23T10:28:51.282126564+08:00" level=error msg="container create failed" error="QMP command failed: The device is not writable: Permission denied" name=containerd-shim-v2 pid=116472 sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=container
time="2022-02-23T10:28:52.875392316+08:00" level=warning error="no such file or directory" name=containerd-shim-v2 pid=116472 rootfs-dir=/run/kata-containers/shared/sandboxes/8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a/mounts/88b3fe6f2ed2058b62eb7f70631821c748a0211e22472c7ee8340b40a1fa8212/rootfs sandbox=8f71e46f708e9f41058f633b03809a49b34940d3290532da321f0cd51c607c6a source=virtcontainers subsystem=mount
```
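The dominant failure in the journal above is `QMP command failed: The device is not writable: Permission denied`, raised repeatedly while the runtime tries to plug the passed-through device during container create. As a quick host-side check, a small probe can confirm whether the device node is actually writable for the user the hypervisor runs as. This is a diagnostic sketch only, not part of `kata-collect-data.sh`, and the `/dev/vfio/<group>` argument is a placeholder:

```go
// checkdev: report whether a device node is writable for the current user.
// Diagnostic sketch only; run it as the same user QEMU runs as.
package main

import (
	"fmt"
	"os"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkdev /dev/vfio/<group>")
		os.Exit(2)
	}
	path := os.Args[1]

	st, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mode, owner and group of the device node.
	if sys, ok := st.Sys().(*syscall.Stat_t); ok {
		fmt.Printf("%s: mode=%v uid=%d gid=%d\n", path, st.Mode(), sys.Uid, sys.Gid)
	}

	// access(2) with W_OK answers "is this writable for the *current*
	// uid/gid?", which is the question behind "The device is not writable".
	if err := unix.Access(path, unix.W_OK); err != nil {
		fmt.Printf("not writable: %v\n", err)
	} else {
		fmt.Println("writable")
	}
}
```

Running it once as root and once as the hypervisor's user helps distinguish a mode/ownership problem on the node itself from a restriction imposed on the sandbox (for example a device cgroup filter).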
# Container manager details
## Docker
```
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:45:52 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:23 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.5.9
  GitCommit:        1407cab509ff0d96baa4f0eb6ff9980270e6e620
 nvidia:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd-dirty
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
```
Client:
 Debug Mode: false

Server:
 Containers: 181
  Running: 0
  Paused: 0
  Stopped: 181
 Images: 392
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: kata-clh kata-fc kata-qemu kata-qemu-virtiofs kata-runtime nvidia runc
 Default Runtime: nvidia
 Init Binary: docker-init
 containerd version: 1407cab509ff0d96baa4f0eb6ff9980270e6e620
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd-dirty
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.0-13-amd64
 Operating System: Debian GNU/Linux 9 (stretch)
 OSType: linux
 Architecture: x86_64
 CPUs: 56
 Total Memory: 376.6GiB
 Name: ai-2080ti-27
 ID: GEJY:THC5:4ODB:FCYA:4ESH:YYFK:PDJK:WH4L:7DX4:R5EM:K3LJ:HRIU
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http://yyy.xxx.com:xxxx
 HTTPS Proxy: http://yyy.xxx.com:xxxx
 No Proxy: hub.xxx.com,hub.xxx.com,ai-1080ti-06.xxx.com,127.0.0.1,localhost
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
```
Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=infinity
TimeoutStopUSec=infinity
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2022-02-17 10:42:19 CST
WatchdogTimestampMonotonic=1988554560139
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=167198
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=4294967295
GID=4294967295
ExecMainStartTimestamp=Thu 2022-02-17 10:42:17 CST
ExecMainStartTimestampMonotonic=1988553272998
ExecMainExitTimestampMonotonic=0
ExecMainPID=167198
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=110043136
CPUUsageNSec=10144765576282
TasksCurrent=198
Delegate=yes
CPUAccounting=no
CPUWeight=18446744073709551615
StartupCPUWeight=18446744073709551615
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=18446744073709551615
StartupIOWeight=18446744073709551615
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLow=0
MemoryHigh=18446744073709551615
MemoryMax=18446744073709551615
MemorySwapMax=18446744073709551615
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=yes
TasksMax=18446744073709551615
Environment=HTTP_PROXY=http://yyy.xxx.com:xxxx HTTPS_PROXY=http://yyy.xxx.com:xxxx NO_PROXY=hub.xxx.com,hub.xxx.com,ai-1080ti-06.xxx.com,127.0.0.1,localhost
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=18446744073709551615
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=18446744073709551615
LimitNOFILESoft=18446744073709551615
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=18446744073709551615
LimitNPROCSoft=18446744073709551615
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=1542336
LimitSIGPENDINGSoft=1542336
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
DynamicUser=no
RemoveIPC=no
MountFlags=0
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespace=2114060288
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket sysinit.target system.slice
Wants=network-online.target
BindsTo=containerd.service
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=network-online.target sysinit.target systemd-journald.socket containerd.service system.slice firewalld.service docker.socket basic.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
DropInPaths=/etc/systemd/system/docker.service.d/kata-containers.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2022-02-17 10:42:19 CST
StateChangeTimestampMonotonic=1988554560141
InactiveExitTimestamp=Thu 2022-02-17 10:42:17 CST
InactiveExitTimestampMonotonic=1988553273054
ActiveEnterTimestamp=Thu 2022-02-17 10:42:19 CST
ActiveEnterTimestampMonotonic=1988554560141
ActiveExitTimestamp=Thu 2022-02-17 10:42:17 CST
ActiveExitTimestampMonotonic=1988553182362
InactiveEnterTimestamp=Thu 2022-02-17 10:42:17 CST
InactiveEnterTimestampMonotonic=1988553203370
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2022-02-17 10:42:17 CST
ConditionTimestampMonotonic=1988553271811
AssertTimestamp=Thu 2022-02-17 10:42:17 CST
AssertTimestampMonotonic=1988553271811
Transient=no
Perpetual=no
StartLimitIntervalSec=60000000
StartLimitBurst=3
StartLimitAction=none
InvocationID=405f2c6606b441119b3e90e373916414
```
## Kubernetes
```
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.90.9.94:6443
  name: k8s-xxx-test
contexts:
- context:
    cluster: k8s-xxx-test
    user: kubernetes-admin
  name: kubernetes-admin@k8s-xxx-test
current-context: kubernetes-admin@k8s-xxx-test
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
```
```
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2022-02-17 10:43:26 CST
WatchdogTimestampMonotonic=1988621434309
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1106
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=4294967295
GID=4294967295
ExecMainStartTimestamp=Thu 2022-02-17 10:43:26 CST
ExecMainStartTimestampMonotonic=1988621434268
ExecMainExitTimestampMonotonic=0
ExecMainPID=1106
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/cpu/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/cpuacct/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/devices/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/blkio/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /sys/fs/cgroup/systemd/system.slice/kubelet.service ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=218877952
CPUUsageNSec=202802436982953
TasksCurrent=145
Delegate=no
CPUAccounting=no
CPUWeight=18446744073709551615
StartupCPUWeight=18446744073709551615
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=18446744073709551615
StartupIOWeight=18446744073709551615
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLow=0
MemoryHigh=18446744073709551615
MemoryMax=18446744073709551615
MemorySwapMax=18446744073709551615
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=yes
TasksMax=25804
Environment=KUBELET_EXTRA_ARGS=--container-runtime=remote\x20--runtime-request-timeout=15m\x20--container-runtime-endpoint=unix:///run/containerd/containerd.sock KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf\x20--kubeconfig=/etc/kubernetes/kubelet.conf KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFile=/etc/default/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=18446744073709551615
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=655350
LimitNOFILESoft=655350
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=1542336
LimitNPROCSoft=1542336
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=1542336
LimitSIGPENDINGSoft=1542336
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
DynamicUser=no
RemoveIPC=no
MountFlags=0
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespace=2114060288
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=sysinit.target system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=sysinit.target systemd-journald.socket system.slice basic.target
Documentation=https://kubernetes.io/docs/home/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/0-containerd.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2022-02-17 10:43:26 CST
StateChangeTimestampMonotonic=1988621434312
InactiveExitTimestamp=Thu 2022-02-17 10:43:25 CST
InactiveExitTimestampMonotonic=1988621365915
ActiveEnterTimestamp=Thu 2022-02-17 10:43:26 CST
ActiveEnterTimestampMonotonic=1988621434312
ActiveExitTimestamp=Thu 2022-02-17 10:43:25 CST
ActiveExitTimestampMonotonic=1988621349574
InactiveEnterTimestamp=Thu 2022-02-17 10:43:25 CST
InactiveEnterTimestampMonotonic=1988621360456
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2022-02-17 10:43:25 CST
ConditionTimestampMonotonic=1988621362918
AssertTimestamp=Thu 2022-02-17 10:43:25 CST
AssertTimestampMonotonic=1988621362919
Transient=no
Perpetual=no
StartLimitIntervalSec=0
StartLimitBurst=5
StartLimitAction=none
InvocationID=863deb9538eb4c0caa5a489d931e5f11
```
## containerd
```
containerd github.com/containerd/containerd v1.5.9 1407cab509ff0d96baa4f0eb6ff9980270e6e620
```
```
Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2022-02-17 10:42:17 CST
WatchdogTimestampMonotonic=1988553270104
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=167197
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=4294967295
GID=4294967295
ExecMainStartTimestamp=Thu 2022-02-17 10:42:17 CST
ExecMainStartTimestampMonotonic=1988553270071
ExecMainExitTimestampMonotonic=0
ExecMainPID=167197
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=2356613120
CPUUsageNSec=101006220486598
TasksCurrent=444
Delegate=yes
CPUAccounting=no
CPUWeight=18446744073709551615
StartupCPUWeight=18446744073709551615
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=18446744073709551615
StartupIOWeight=18446744073709551615
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLow=0
MemoryHigh=18446744073709551615
MemoryMax=18446744073709551615
MemorySwapMax=18446744073709551615
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=yes
TasksMax=18446744073709551615
Environment=HTTP_PROXY=http://yyy.xxx.com:xxxx HTTPS_PROXY=http://yyy.xxx.com:xxxx NO_PROXY=hub.xxx.com,hub.xxx.com,ai-1080ti-06.xxx.com,127.0.0.1,localhost,10.96.0.1
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=18446744073709551615
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=18446744073709551615
LimitNPROCSoft=18446744073709551615
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=1542336
LimitSIGPENDINGSoft=1542336
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
DynamicUser=no
RemoveIPC=no
MountFlags=0
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespace=2114060288
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=sysinit.target system.slice
WantedBy=multi-user.target
BoundBy=docker.service
Conflicts=shutdown.target
Before=shutdown.target multi-user.target docker.service
After=system.slice sysinit.target basic.target systemd-journald.socket network.target
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2022-02-17 10:42:17 CST
StateChangeTimestampMonotonic=1988553270105
InactiveExitTimestamp=Thu 2022-02-17 10:42:17 CST
InactiveExitTimestampMonotonic=1988553220446
ActiveEnterTimestamp=Thu 2022-02-17 10:42:17 CST
ActiveEnterTimestampMonotonic=1988553270105
ActiveExitTimestamp=Thu 2022-02-17 10:42:17 CST
ActiveExitTimestampMonotonic=1988553205645
InactiveEnterTimestamp=Thu 2022-02-17 10:42:17 CST
InactiveEnterTimestampMonotonic=1988553217568
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2022-02-17 10:42:17 CST
ConditionTimestampMonotonic=1988553218697
AssertTimestamp=Thu 2022-02-17 10:42:17 CST
AssertTimestampMonotonic=1988553218697
Transient=no
Perpetual=no
StartLimitIntervalSec=10000000
StartLimitBurst=5
StartLimitAction=none
InvocationID=43ef429cb31541ae9c4b451068770515
```
```toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "k8s.gcr.io/pause:3.5"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = "io.containerd.kata.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
```
# Packages
Have `dpkg`
```
ii kata-containers-image 1.12.1-3 amd64 Kata containers image
ii kata-ksm-throttler 1.12.1-3 amd64
ii kata-linux-container 5.4.60.89-3 amd64 linux kernel optimised for container-like workloads.
ii kata-runtime 1.12.1-3 amd64
ii qemu-vanilla 5.0.0+git.fdd76fecdd-3 amd64 linux kernel optimised for container-like workloads.
```
```
```
# Kata Monitor
Kata Monitor `kata-monitor`.
```
kata-monitor
Version: 0.2.0
Go version: go1.17.3
Git commit: 1af292c9e693e9bc8e8324a9eb860dad45306fb5
OS/Arch: linux/amd64
```
Commands used to collect the data in the sections above:

- Runtime: `/usr/bin/kata-runtime kata-env`
- Runtime config files: `cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`, `cat "/usr/share/defaults/kata-containers/configuration.toml"`
- Containerd shim v2: `containerd-shim-kata-v2 --version`
- KSM throttler: `/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`
- Image details, Initrd details, Logfiles (Runtime logs, Throttler logs, Kata Containerd Shim v2): see the corresponding sections above
- Container manager details:
  - Docker: `docker version`, `docker info`, `systemctl show docker`
  - Kubernetes: `kubectl version`, `kubectl config view`, `systemctl show kubelet`
  - containerd: `containerd --version`, `systemctl show containerd`, `cat /etc/containerd/config.toml`
- Packages: `dpkg -l|egrep "(cc-oci-runtime|cc-runtime|runv|kata-runtime|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`, `rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-runtime|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`
- Kata Monitor: `kata-monitor --version`
When I use the kubevirt-gpu-device-plugin to pass an NVIDIA GPU through to a Kata container, it reports a similar error. But I have checked that the device plugin code sets the device file to read-write (`rw`) mode (see the sketch after the links below):
https://github.com/NVIDIA/kubevirt-gpu-device-plugin/blob/531e81bb28738507315249ba5b27847ddadceeed/pkg/device_plugin/generic_device_plugin.go#L258
https://github.com/NVIDIA/kubevirt-gpu-device-plugin/blob/531e81bb28738507315249ba5b27847ddadceeed/pkg/device_plugin/generic_device_plugin.go#L263
Originally posted by @fighterhit in https://github.com/kata-containers/tests/issues/3002#issuecomment-1047790024
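For context, the two lines linked above build the device-plugin `Allocate` response for the VFIO device. A minimal sketch of that pattern, assuming the standard Kubernetes device-plugin `v1beta1` API; the package and function names and the exact paths here are illustrative, not the plugin's actual code:

```go
// Illustrative sketch of a device-plugin Allocate response for a VFIO
// device, assuming the Kubernetes device-plugin v1beta1 API.
package gpuplugin

import (
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// allocateVFIO exposes one VFIO group node plus the VFIO control node to
// the container. "rw" asks the runtime to create the nodes read-write,
// which is what the linked plugin code requests.
func allocateVFIO(group string) *pluginapi.ContainerAllocateResponse {
	path := "/dev/vfio/" + group // hypothetical; the plugin derives this from the IOMMU group
	return &pluginapi.ContainerAllocateResponse{
		Devices: []*pluginapi.DeviceSpec{
			{
				HostPath:      path,
				ContainerPath: path,
				Permissions:   "rw",
			},
			{
				HostPath:      "/dev/vfio/vfio",
				ContainerPath: "/dev/vfio/vfio",
				Permissions:   "rw",
			},
		},
	}
}
```

Note that even when the DeviceSpec asks for `rw`, the QMP error above appears to be raised when QEMU itself cannot open the device for writing on the host, so the effective permissions of the host `/dev/vfio/*` nodes and any device restrictions applied to the sandbox still seem worth checking.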