kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

kata container using vsock boots up much slower than using kata-proxy #2929

Closed. lining2020x closed this issue 4 years ago.

lining2020x commented 4 years ago

Description of problem

Using vsock makes containers boot up much slower than using kata-proxy

Container/VM boot-up is more than 2 seconds slower with vsock than with kata-proxy (roughly 3.5 s versus 1.2 s in the runs below).

Host info: CentOS 8 on bare metal

[root@375 bclinux-source]# uname -a
Linux 375 4.18.0-147.0.4.el8.x86_64 #1 SMP Thu Sep 3 11:17:22 CST 2020 x86_64 x86_64 x86_64 GNU/Linux
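
Before toggling the option it is worth confirming the host can do vsock at all; the collect-data output below reports `SupportVSocks = true`, which matches what a quick manual check shows. A minimal sanity check, assuming the standard kata 1.x CLI:

```bash
# vhost_vsock must be loaded and its device node present for use_vsock to work.
lsmod | grep vhost_vsock      # host-side vsock kernel module
ls -l /dev/vhost-vsock        # device node QEMU opens for vhost-vsock
kata-runtime kata-check       # kata's own host capability check
```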

Using kata-proxy

# kata-proxy is the default in the config
[root@375 ~]# grep  use_vsock /etc/kata-containers/configuration.toml
#use_vsock = true
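
For reference, this is the knob being toggled, as documented in the attached `configuration.toml`:

```toml
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
```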

[root@375 ~]# time ctr run --rm --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-971ff46e7a5e4d0eb8055bbc6ba1479c 5.4.32-6.1.container #1 SMP Thu Jan 1 00:00:00 UTC 1970 x86_64 Linux

real    0m1.242s
user    0m0.023s
sys     0m0.015s

Using vsock

# switch to vsock (uncomment use_vsock)
[root@375 ~]# sed -i 's/#\(use_vsock = true\)/\1/g' /etc/kata-containers/configuration.toml
[root@375 ~]# time ctr run --rm --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-3ee74b7d6f3042fba72647c58c350695 5.4.32-6.1.container #1 SMP Thu Jan 1 00:00:00 UTC 1970 x86_64 Linux

real    0m3.659s
user    0m0.024s
sys     0m0.014s
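
The `sed` one-liners above just flip the comment marker on that single line; wrapped up as a tiny (hypothetical) helper for repeated switching:

```bash
# Hypothetical helper: switch the agent transport in the kata config.
toggle_vsock() {
    local conf=/etc/kata-containers/configuration.toml
    case "$1" in
        on)  sed -i 's/#\(use_vsock = true\)/\1/g' "$conf" ;;   # uncomment
        off) sed -i 's/^\(use_vsock = true\)/#\1/g' "$conf" ;;  # comment out
    esac
    grep use_vsock "$conf"    # show the resulting state
}
```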

Let's do it once again to make sure.

Switch back to kata-proxy

[root@375 ~]# sed -i 's/\(use_vsock = true\)/#\1/g' /etc/kata-containers/configuration.toml
[root@375 ~]# grep  use_vsock /etc/kata-containers/configuration.toml
#use_vsock = true
[root@375 ~]# time ctr run --rm --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-b064233f00cc40ceb75f6a6593437c91 5.4.32-6.1.container #1 SMP Thu Jan 1 00:00:00 UTC 1970 x86_64 Linux

real    0m1.173s
user    0m0.023s
sys     0m0.015s

Switch back to vsock

[root@375 ~]# sed -i 's/#\(use_vsock = true\)/\1/g' /etc/kata-containers/configuration.toml
[root@375 ~]# time ctr run --rm --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-2eed0206bace4471b6a1cc0c5742d64e 5.4.32-6.1.container #1 SMP Thu Jan 1 00:00:00 UTC 1970 x86_64 Linux

real    0m3.502s
user    0m0.017s
sys     0m0.020s
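
Summarizing the four runs above (`real` wall-clock time):

| Transport  | Run 1   | Run 2   |
|------------|---------|---------|
| kata-proxy | 1.242 s | 1.173 s |
| vsock      | 3.659 s | 3.502 s |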

I am not sure whether this issue should be classified as a bug or an enhancement.
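
To rule out noise from single runs, the comparison can also be scripted; this is just a sketch reusing the image and runtime names from above:

```bash
# Time 10 back-to-back boots for whichever transport is currently configured;
# run once with use_vsock commented out and once with it uncommented.
for i in $(seq 1 10); do
    /usr/bin/time -f "%e s" ctr run --rm --runtime io.containerd.kata.v2 \
        docker.io/library/alpine:latest "hello-kata-$i" uname -a > /dev/null
done
```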

Show kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `1.11.2 (commit 1e5268328a34f0ec60a1a5654ce755a6aec89f29)` at `2020-09-03.16:16:53.223222933+0800`.

---

Runtime is `/usr/bin/kata-runtime`.

# `kata-env`

Output of "`/usr/bin/kata-runtime kata-env`":

```toml
[Meta]
Version = "1.0.24"

[Runtime]
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false
Path = "/usr/bin/kata-runtime"

[Runtime.Version]
OCI = "1.0.1-dev"

[Runtime.Version.Version]
Semver = "1.11.2"
Major = 1
Minor = 11
Patch = 2
Commit = "1e5268328a34f0ec60a1a5654ce755a6aec89f29"

[Runtime.Config]
Path = "/etc/kata-containers/configuration.toml"

[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 4.1.1\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers"
Path = "/usr/bin/qemu-vanilla-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
SharedFS = "virtio-9p"
VirtioFSDaemon = "/usr/bin/virtiofsd"
Msize9p = 8192
MemorySlots = 10
PCIeRootPort = 0
HotplugVFIOOnRootBus = false
Debug = false
UseVSock = false

[Image]
Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.11.2_agent_abb7149e49.img"

[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-5.4.32.74-6.1.container"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none"

[Initrd]
Path = ""

[Proxy]
Type = "kataProxy"
Path = "/usr/libexec/kata-containers/kata-proxy"
Debug = false

[Proxy.Version]
Semver = "1.11.2-94f00aa"
Major = 1
Minor = 11
Patch = 2
Commit = "<>"

[Shim]
Type = "kataShim"
Path = "/usr/libexec/kata-containers/kata-shim"
Debug = false

[Shim.Version]
Semver = "1.11.2-5ccc2cd"
Major = 1
Minor = 11
Patch = 2
Commit = "<>"

[Agent]
Type = "kata"
Debug = false
Trace = false
TraceMode = ""
TraceType = ""

[Host]
Kernel = "4.18.0-147.0.4.el8.x86_64"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = true

[Host.Distro]
Name = "CentOS Linux"
Version = "8"

[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz"

[Netmon]
Path = "/usr/libexec/kata-containers/kata-netmon"
Debug = false
Enable = false

[Netmon.Version]
Semver = "1.11.2"
Major = 1
Minor = 11
Patch = 2
Commit = "<>"
```

---

# Runtime config files

## Runtime default config files

```
/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml
```

## Runtime config file contents

Output of "`cat "/etc/kata-containers/configuration.toml"`":

```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-vanilla-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0

# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Extra args for virtiofsd daemon
#
# Format example:
#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false

# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices to live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0                 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
#  * A kernel module is specified and the modprobe command is not installed in the guest
#    or it fails loading the module.
#  * The module is not available in the guest or it doesn't met the guest kernel
#    requirements, like architecture and version.
#
kernel_modules=[]

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used when customize network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```

Output of "`cat "/usr/share/defaults/kata-containers/configuration.toml"`":

```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-vanilla-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0

# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Extra args for virtiofsd daemon
#
# Format example:
#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false

# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices to live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0                 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
#  * A kernel module is specified and the modprobe command is not installed in the guest
#    or it fails loading the module.
#  * The module is not available in the guest or it doesn't met the guest kernel
#    requirements, like architecture and version.
#
kernel_modules=[]

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used when customize network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```

---

# KSM throttler

## version

Output of "`/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`":

```
kata-ksm-throttler version 1.11.2-73ed014
```

Output of "`/usr/lib/systemd/system/kata-ksm-throttler.service --version`":

```
/usr/bin/kata-collect-data.sh: line 178: /usr/lib/systemd/system/kata-ksm-throttler.service: Permission denied
```

## systemd service

# Image details

```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2020-07-03T04:31:56.042236365+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "33460"
  packages:
    default:
      - "chrony"
      - "iptables-bin"
      - "kmod-bin"
      - "libudev0-shim"
      - "systemd"
      - "util-linux-bin"
    extra:
agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.11.2-abb7149e49ea3b6bbb23526e8562d6aa9c181e35"
  agent-is-init-daemon: "no"
```

---

# Initrd details

No initrd

---

# Logfiles

## Runtime logs

Recent runtime problems found in system journal:

```
time="2020-09-03T14:52:08.889608221+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f error="open /run/vc/vm/ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f/pid: no such file or directory" name=kata-runtime pid=10724 sandbox=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:08.889875351+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f error="open /run/vc/vm/ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f/pid: no such file or directory" name=kata-runtime pid=10724 sandbox=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:08.890167576+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f/config.json: no such file or directory" arch=amd64 command=delete container=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f name=kata-runtime pid=10724 sandbox=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f source=virtcontainers
time="2020-09-03T14:52:08.896212226+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f error="open /run/vc/vm/ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f/pid: no such file or directory" name=kata-runtime pid=10724 sandbox=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:08.896450934+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f name=kata-runtime pid=10724 sandbox=ac9216c990239efce94f07dc46009c91663db1bf10a80d7bec13e53228bf7c6f source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:17.799591253+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/qmp.sock: use of closed network connection" arch=amd64 command=create container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10779 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:20.750895721+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10779 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:20.751692532+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/qmp.sock: use of closed network connection" arch=amd64 command=create container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10779 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:20.792521569+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=start container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10836 source=virtcontainers
time="2020-09-03T14:52:20.80838421+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=start container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10836 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers
time="2020-09-03T14:52:20.932777038+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 source=virtcontainers
time="2020-09-03T14:52:21.049594639+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 error="no such file or directory" name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 share-dir=/run/kata-containers/shared/sandboxes/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=container
time="2020-09-03T14:52:21.050518471+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers
time="2020-09-03T14:52:21.053856328+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers
time="2020-09-03T14:52:21.110816245+08:00" level=info msg="sanner return error: " arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:21.124613859+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 error="open /run/vc/vm/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/pid: no such file or directory" name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:21.124880471+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 error="open /run/vc/vm/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/pid: no such file or directory" name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:21.125158599+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/config.json: no such file or directory" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers
time="2020-09-03T14:52:21.131179992+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 error="open /run/vc/vm/6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61/pid: no such file or directory" name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:21.131400159+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 name=kata-runtime pid=10855 sandbox=6ecf98989d5ac083318da7aa455b0d0fa924ea8b70cd85c6b7dece00920d7f61 source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:31.373694246+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/qmp.sock: use of closed network connection" arch=amd64 command=create container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10907 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:34.316132391+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10907 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:34.316782575+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/qmp.sock: use of closed network connection" arch=amd64 command=create container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10907 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:34.356303272+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=start container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10971 source=virtcontainers
time="2020-09-03T14:52:34.373399789+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=start container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10971 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers
time="2020-09-03T14:52:34.509481393+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 source=virtcontainers
time="2020-09-03T14:52:34.622574023+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 error="no such file or directory" name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 share-dir=/run/kata-containers/shared/sandboxes/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=container
time="2020-09-03T14:52:34.623619236+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers
time="2020-09-03T14:52:34.626170001+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers
time="2020-09-03T14:52:34.667741538+08:00" level=info msg="sanner return error: " arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:34.691570051+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 error="open /run/vc/vm/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/pid: no such file or directory" name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:34.691825447+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 error="open /run/vc/vm/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/pid: no such file or directory" name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:34.692120506+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/config.json: no such file or directory" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers
time="2020-09-03T14:52:34.698063832+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 error="open /run/vc/vm/68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51/pid: no such file or directory" name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:34.698853495+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 name=kata-runtime pid=10990 sandbox=68136ab6bec4ee9673ddd261d304dff4c846cb26112e7a5934c9498a12978a51 source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:44.731244148+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/qmp.sock: use of closed network connection" arch=amd64 command=create container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11040 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:47.672942314+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11040 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=sandbox
time="2020-09-03T14:52:47.673728827+08:00" level=info msg="sanner return error: read unix @->/run/vc/vm/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/qmp.sock: use of closed network connection" arch=amd64 command=create container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11040 source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:47.713753465+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=start container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11100 source=virtcontainers
time="2020-09-03T14:52:47.729467165+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=start container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11100 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers
time="2020-09-03T14:52:47.857911208+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 source=virtcontainers
time="2020-09-03T14:52:47.98558459+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e error="no such file or directory" name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e share-dir=/run/kata-containers/shared/sandboxes/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=container
time="2020-09-03T14:52:47.986173841+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers
time="2020-09-03T14:52:47.989480527+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers
time="2020-09-03T14:52:48.072820796+08:00" level=info msg="sanner return error: " arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=qmp
time="2020-09-03T14:52:48.080563619+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e error="open /run/vc/vm/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/pid: no such file or directory" name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:48.080783362+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e error="open /run/vc/vm/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/pid: no such file or directory" name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:48.081087679+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/config.json: no such file or directory" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers
time="2020-09-03T14:52:48.086146676+08:00" level=error msg="Could not read qemu pid file" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e error="open /run/vc/vm/ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e/pid: no such file or directory" name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=qemu
time="2020-09-03T14:52:48.086336862+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e name=kata-runtime pid=11119 sandbox=ec0ffbe14749b15f03dbd5e1a33fe59081cb3a59dc0a05c8ff6da598fe43791e source=virtcontainers subsystem=sandbox
```

## Proxy logs

Recent proxy problems found in system journal:

```
time="2020-09-03T10:35:05.248096331+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/51dd0f5611d0dc973e9aa1eeec0143bac38a82313f56bb1dc50515a38456bea0/proxy.sock: use of closed network connection" name=kata-proxy pid=13688 sandbox=51dd0f5611d0dc973e9aa1eeec0143bac38a82313f56bb1dc50515a38456bea0 source=proxy
time="2020-09-03T14:31:57.252525819+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/0311bc9b4caba3132b8134575c8e3ac46b4806b71ba10e77c434c646ffe4a5ea/kata.sock: use of closed network connection" name=kata-proxy pid=7535 sandbox=0311bc9b4caba3132b8134575c8e3ac46b4806b71ba10e77c434c646ffe4a5ea source=proxy
time="2020-09-03T14:45:28.121765061+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/3530deb60590e7940bfc0b9b3de7353402ac462100d47a8c54161fa2758939e3/proxy.sock: use of closed network connection" name=kata-proxy pid=8880 sandbox=3530deb60590e7940bfc0b9b3de7353402ac462100d47a8c54161fa2758939e3 source=proxy
time="2020-09-03T14:45:41.466902402+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/eabc51e48b016a169c0dd1813aff17f8462ce8f01ea4fef8b458c4f0a1e896a2/kata.sock: use of closed network connection" name=kata-proxy pid=9082 sandbox=eabc51e48b016a169c0dd1813aff17f8462ce8f01ea4fef8b458c4f0a1e896a2 source=proxy
time="2020-09-03T14:45:48.246503763+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/d33bbefa6a9de6972b4d2e4d5a9a5d594b1700bd5cb923ae8addd020b9176053/kata.sock: use of closed network connection" name=kata-proxy pid=9224 sandbox=d33bbefa6a9de6972b4d2e4d5a9a5d594b1700bd5cb923ae8addd020b9176053 source=proxy
time="2020-09-03T14:50:00.413066836+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/200a3c6d1d989de8f16a5428b89368106e95ed91daa866b611b8ec64ba81493d/kata.sock: use of closed network connection" name=kata-proxy pid=10150 sandbox=200a3c6d1d989de8f16a5428b89368106e95ed91daa866b611b8ec64ba81493d source=proxy
time="2020-09-03T14:50:08.725862325+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f91f8461c3028d78a4bbc8c79777c6d10a10b2624df1ea90964c9b7ae682db25/kata.sock: use of closed network connection" name=kata-proxy pid=10291 sandbox=f91f8461c3028d78a4bbc8c79777c6d10a10b2624df1ea90964c9b7ae682db25 source=proxy
```

## Shim logs

No recent shim problems found in system journal.

## Throttler logs

No recent throttler problems found in system journal.
--- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 19.03.11 API version: 1.40 Go version: go1.13.4 Git commit: 42e35e61f3 Built: Thu Aug 20 08:33:54 2020 OS/Arch: linux/amd64 Experimental: false Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Output of "`docker info`": ``` Client: Debug Mode: false Server: ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? errors pretty printing info ``` Output of "`systemctl show docker`": ``` Type=notify Restart=always NotifyAccess=main RestartUSec=2s TimeoutStartUSec=infinity TimeoutStopUSec=infinity RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Thu 2020-09-03 14:33:57 CST ExecMainStartTimestampMonotonic=607720092 ExecMainExitTimestamp=Thu 2020-09-03 14:54:32 CST ExecMainExitTimestampMonotonic=1842854723 ExecMainPID=8351 ExecMainCode=1 ExecMainStatus=0 ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=infinity LimitNOFILESoft=infinity LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=16777216 LimitMEMLOCKSoft=16777216 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=514435 LimitSIGPENDINGSoft=514435 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid 
cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=docker.service Names=docker.service Requires=system.slice sysinit.target docker.socket Wants=network-online.target BindsTo=containerd.service WantedBy=multi-user.target ConsistsOf=docker.socket Conflicts=shutdown.target Before=shutdown.target multi-user.target After=systemd-journald.socket docker.socket containerd.service system.slice network-online.target firewalld.service basic.target sysinit.target TriggeredBy=docker.socket Documentation=https://docs.docker.com Description=Docker Application Container Engine LoadState=loaded ActiveState=inactive SubState=dead FragmentPath=/usr/lib/systemd/system/docker.service UnitFileState=enabled UnitFilePreset=disabled StateChangeTimestamp=Thu 2020-09-03 14:54:32 CST StateChangeTimestampMonotonic=1842854769 InactiveExitTimestamp=Thu 2020-09-03 14:33:57 CST InactiveExitTimestampMonotonic=607720123 ActiveEnterTimestamp=Thu 2020-09-03 14:33:59 CST ActiveEnterTimestampMonotonic=609552795 ActiveExitTimestamp=Thu 2020-09-03 14:54:32 CST ActiveExitTimestampMonotonic=1842849042 InactiveEnterTimestamp=Thu 2020-09-03 14:54:32 CST InactiveEnterTimestampMonotonic=1842854769 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Thu 2020-09-03 14:33:57 CST ConditionTimestampMonotonic=607719280 AssertTimestamp=Thu 2020-09-03 14:33:57 CST AssertTimestampMonotonic=607719280 Transient=no Perpetual=no StartLimitIntervalUSec=1min StartLimitBurst=3 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=e8ea596da39648499095e51d93c0a444 CollectMode=inactive ``` No `kubectl` No `crio` Have `containerd` ## containerd Output of "`containerd --version`": ``` containerd containerd.io v1.2.13 7ad184331fa3e55e52b890ea95e65ba581ae3429 ``` Output of "`systemctl show containerd`": ``` Type=simple Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestamp=Thu 2020-09-03 14:59:37 CST WatchdogTimestampMonotonic=2148199937 PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=11505 ControlPID=0 FileDescriptorStoreMax=0 
NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Thu 2020-09-03 14:59:37 CST ExecMainStartTimestampMonotonic=2148199902 ExecMainExitTimestampMonotonic=0 ExecMainPID=11505 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=93573120 CPUUsageNSec=[not set] TasksCurrent=46 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=16777216 LimitMEMLOCKSoft=16777216 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=514435 LimitSIGPENDINGSoft=514435 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 
ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=containerd.service Names=containerd.service Requires=sysinit.target system.slice BoundBy=docker.service Conflicts=shutdown.target Before=docker.service shutdown.target After=system.slice systemd-journald.socket network.target basic.target sysinit.target Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/usr/lib/systemd/system/containerd.service UnitFileState=disabled UnitFilePreset=disabled StateChangeTimestamp=Thu 2020-09-03 14:59:37 CST StateChangeTimestampMonotonic=2148199938 InactiveExitTimestamp=Thu 2020-09-03 14:59:37 CST InactiveExitTimestampMonotonic=2148197358 ActiveEnterTimestamp=Thu 2020-09-03 14:59:37 CST ActiveEnterTimestampMonotonic=2148199938 ActiveExitTimestamp=Thu 2020-09-03 14:54:44 CST ActiveExitTimestampMonotonic=1854910206 InactiveEnterTimestamp=Thu 2020-09-03 14:54:44 CST InactiveEnterTimestampMonotonic=1854912513 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Thu 2020-09-03 14:59:37 CST ConditionTimestampMonotonic=2148196357 AssertTimestamp=Thu 2020-09-03 14:59:37 CST AssertTimestampMonotonic=2148196358 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=34f96b2db0f24e0996a7b46506d0c917 CollectMode=inactive ``` Output of "`cat /etc/containerd/config.toml`": ``` root = "/var/lib/containerd" state = "/run/containerd" oom_score = 0 [grpc] address = "/run/containerd/containerd.sock" uid = 0 gid = 0 max_recv_message_size = 16777216 max_send_message_size = 16777216 [debug] address = "" uid = 0 gid = 0 level = "" [metrics] address = "" grpc_histogram = false [cgroup] path = "" [plugins] [plugins.cgroups] no_prometheus = false [plugins.cri] stream_server_address = "127.0.0.1" stream_server_port = "0" enable_selinux = false sandbox_image = "k8s.gcr.io/pause:3.1" stats_collect_period = 10 systemd_cgroup = false enable_tls_streaming = false max_container_log_line_size = 16384 disable_proc_mount = false [plugins.cri.containerd] snapshotter = "overlayfs" no_pivot = false [plugins.cri.containerd.default_runtime] runtime_type = "io.containerd.runtime.v1.linux" runtime_engine = "" runtime_root = "" [plugins.cri.containerd.untrusted_workload_runtime] runtime_type = "" runtime_engine = "" runtime_root = "" # support kata [plugins.cri.containerd.runtimes.kata] runtime_type = "io.containerd.kata.v2" [plugins.cri.cni] bin_dir = "/opt/cni/bin" conf_dir = "/etc/cni/net.d" conf_template = "" [plugins.cri.registry] [plugins.cri.registry.mirrors] [plugins.cri.registry.mirrors."docker.io"] endpoint = ["https://nrbewqda.mirror.aliyuncs.com", "https://registry-1.docker.io"] [plugins.cri.x509_key_pair_streaming] tls_cert_file = "" tls_key_file = "" [plugins.diff-service] default = ["walking"] [plugins.linux] shim = "containerd-shim" runtime = "runc" runtime_root = "" no_shim = false shim_debug = false [plugins.opt] path = "/opt/containerd" [plugins.restart] interval = "10s" [plugins.scheduler] pause_threshold = 0.02 
deletion_threshold = 0 mutation_threshold = 100 schedule_delay = "0s" startup_delay = "100ms" ``` --- # Packages No `dpkg` Have `rpm` Output of "`rpm -qa|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`": ``` qemu-vanilla-data-4.1.1+git.99c5874a9b-6.1.x86_64 kata-shim-1.11.2-6.1.x86_64 qemu-vanilla-4.1.1+git.99c5874a9b-6.1.x86_64 kata-linux-container-5.4.32.74-6.1.x86_64 kata-proxy-1.11.2-6.1.x86_64 kata-proxy-bin-1.11.2-6.1.x86_64 kata-containers-image-1.11.2-6.1.x86_64 kata-runtime-1.11.2-6.1.x86_64 kata-ksm-throttler-1.11.2-6.1.x86_64 kata-shim-bin-1.11.2-6.1.x86_64 qemu-vanilla-bin-4.1.1+git.99c5874a9b-6.1.x86_64 ``` ---

lining2020x commented 4 years ago

Here are the test results using kata 2.0 (vsock + rust agent):

[root@375 ~]# time  ctr run --rm -t --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-46966a1be7544078b354e0ccc93e7387 5.4.32 #1 SMP Wed Jul 29 07:21:21 UTC 2020 x86_64 Linux

real    0m2.680s
user    0m0.015s
sys     0m0.019s
[root@375 ~]# time  ctr run --rm -t --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-61cc15f526634133954226545694afbe 5.4.32 #1 SMP Wed Jul 29 07:21:21 UTC 2020 x86_64 Linux

real    0m2.564s
user    0m0.019s
sys     0m0.013s
[root@375 ~]# time  ctr run --rm -t --runtime io.containerd.kata.v2 docker.io/library/alpine:latest hello-kata uname -a
Linux clr-022433d8e28347139c026d0ffd80d46a 5.4.32 #1 SMP Wed Jul 29 07:21:21 UTC 2020 x86_64 Linux

real    0m2.642s
user    0m0.020s
sys     0m0.013s
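
To make the comparison a bit more robust than single runs, a minimal sketch for timing several boots in a row (this loop is only an illustration, not part of the measurements above; it assumes GNU time is installed at /usr/bin/time and the alpine image is already pulled):

# Hedged sketch: print elapsed seconds for 5 consecutive container boots.
# Unique names avoid clashing with a previous --rm cleanup still in flight.
for i in 1 2 3 4 5; do
    /usr/bin/time -f "%e" ctr run --rm --runtime io.containerd.kata.v2 \
        docker.io/library/alpine:latest "hello-kata-$i" true
done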
merwick commented 4 years ago

This looks like the same issue that is tracked in https://github.com/kata-containers/runtime/issues/1917

While that issue is still open, I believe a kernel fix was merged that helps avoid it. At least this patch, https://github.com/kata-containers/packaging/blob/master/kernel/patches/5.4.x/0002-net-virtio_vsock-Fix-race-condition-between-bind-and.patch — commit df12eb6d6cd9 ("net: virtio_vsock: Enhance connection semantics") in Linux 5.7 — gets rid of the slowdown for me.
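
For anyone checking a 1.x install, a quick way to see which guest kernel Kata boots (and so whether it can carry this patch) is the kata-env output; a minimal sketch, assuming stock paths, with the grep pattern being an assumption:

# Show the guest kernel Kata will boot; the patch above lives in the 5.4.x
# patch set of the packaging repo, and df12eb6d6cd9 is upstream from 5.7 on.
kata-runtime kata-env | grep -A 2 '^\[Kernel\]'
# On RPM-based hosts, the installed guest kernel package can also be checked:
rpm -qa | grep kata-linux-container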

lining2020x commented 4 years ago

@merwick Yes, it's indeed the same issue.

lining2020x commented 4 years ago

@merwick Many thanks!