a) Install ACRN v1.6.1 (acrn-2020w18.4-140000p) on Ubuntu 18.04.4 as documented in this tutorial.
b) Set up Kata containers on ACRN by following this tutorial.
c) The kata-manager script installs version 1.11.0-rc0 as of this writing.
d) Run a Kata container in an ACRN VM, for example `sudo docker run -ti --runtime=kata-runtime busybox sh`, and check that networking inside the container works (see the sketch below).
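For reference, a minimal sketch of step d). The in-container commands are illustrative; any reachable address can stand in for 8.8.8.8:

```sh
# Start a busybox Kata container backed by an ACRN VM.
sudo docker run -ti --runtime=kata-runtime busybox sh

# Inside the container, check that networking is up:
ip addr show eth0   # expect an address from the Docker bridge network
ip route            # expect a default route via the docker0 gateway
ping -c 3 8.8.8.8   # basic reachability test (assumes outbound access)
```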
Expected result
The networking of the Kata container should be bridged to the host.
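A quick host-side spot check (a sketch; interface names vary by setup):

```sh
# On the host: the container's interface should be attached to the Docker bridge.
ip -br link show type bridge   # expect docker0
bridge link show               # ports currently attached to docker0
```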
Actual result
Output of `kata-collect-data.sh`:
# Meta details
Running `kata-collect-data.sh` version `1.11.0-rc0 (commit )` at `2020-06-11.09:41:50.152514120+0800`.
---
Runtime is `/usr/bin/kata-runtime`.
# `kata-env`
Output of "`/usr/bin/kata-runtime kata-env`":
```toml
[Meta]
Version = "1.0.24"
[Runtime]
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false
Path = "/usr/bin/kata-runtime"
[Runtime.Version]
OCI = "1.0.1-dev"
[Runtime.Version.Version]
Semver = "1.11.0-rc0"
Major = 1
Minor = 11
Patch = 0
Commit = ""
[Runtime.Config]
Path = "/etc/kata-containers/configuration.toml"
[Hypervisor]
MachineType = ""
Version = "DM version is: 1.6-2020w18.4.140000p_284 (daily tag:acrn-2020w18.4.140000p), build by mockbuild@2020-04-30 02:27:43"
Path = "/usr/bin/acrn-dm"
BlockDeviceDriver = "virtio-blk"
EntropySource = "/dev/urandom"
SharedFS = ""
VirtioFSDaemon = ""
Msize9p = 0
MemorySlots = 10
PCIeRootPort = 0
HotplugVFIOOnRootBus = false
Debug = false
UseVSock = false
[Image]
Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.11.0-rc0_agent_d4df5d96ba.img"
[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-5.4.32.73-47.container"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket"
[Initrd]
Path = ""
[Proxy]
Type = "kataProxy"
Path = "/usr/libexec/kata-containers/kata-proxy"
Debug = false
[Proxy.Version]
Semver = "1.11.0-rc0-a6f5534"
Major = 1
Minor = 11
Patch = 0
Commit = "a6f5534"
[Shim]
Type = "kataShim"
Path = "/usr/libexec/kata-containers/kata-shim"
Debug = false
[Shim.Version]
Semver = "1.11.0-rc0-ad49288"
Major = 1
Minor = 11
Patch = 0
Commit = "ad49288"
[Agent]
Type = "kata"
Debug = false
Trace = false
TraceMode = ""
TraceType = ""
[Host]
Kernel = "5.4.28-PKT-200203T060100Z-00002-gd7da1d772f85"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = false
[Host.Distro]
Name = "Ubuntu"
Version = "18.04"
[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz"
[Netmon]
Path = "/usr/libexec/kata-containers/kata-netmon"
Debug = false
Enable = false
[Netmon.Version]
Semver = "1.11.0-rc0"
Major = 1
Minor = 11
Patch = 0
Commit = "<>"
```
---
# Runtime config files
## Runtime default config files
```
/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml
```
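`/etc/kata-containers/configuration.toml` takes precedence over the default under `/usr/share/defaults`; the `[Runtime.Config]` path in the `kata-env` output above confirms which file is in effect. A quick check:

```sh
# Show the config file the runtime actually loaded.
sudo kata-runtime kata-env | grep -A 1 'Runtime.Config'
```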
## Runtime config file contents
Output of "`cat "/etc/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-acrn.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.acrn]
path = "/usr/bin/acrn-dm"
ctlpath = "/usr/bin/acrnctl"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want acrn to use the default firmware, leave this option empty
firmware = ""
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 1
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. ACRN only supports virtio-blk.
block_device_driver = "virtio-blk"
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# If the host doesn't support vhost_net, set this to true so that vhost fds are not created for NICs.
# Default false
#disable_vhost_net = true
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# network interfaces being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# container network interface
# Options:
#
# - bridged (Deprecated)
# Uses a linux bridge to interconnect the container interface to
# the VM. Works for most cases except macvlan and ipvlan.
# ***NOTE: This feature has been deprecated with plans to remove this
# feature in the future. Please use other network models listed below.
#
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used with a customized network. Only creates a tap device; no veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="macvtap"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have potential impacts on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# If enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```
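Note that this active ACRN config sets `internetworking_model="macvtap"`, while the stock QEMU config below defaults to `tcfilter`. To experiment with the model, a minimal sketch (back the file up first; whether a given model is actually supported by the ACRN backend is a separate question, and `none` additionally requires `disable_new_netns = true` plus `docker run --net=none` per the comments above):

```sh
# Back up, then switch the internetworking model in the active config.
sudo cp /etc/kata-containers/configuration.toml /etc/kata-containers/configuration.toml.bak
sudo sed -i 's/^internetworking_model=.*/internetworking_model="tcfilter"/' \
    /etc/kata-containers/configuration.toml

# Verify the change and sanity-check the runtime.
grep '^internetworking_model' /etc/kata-containers/configuration.toml
sudo kata-runtime kata-check
```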
Output of "`cat "/usr/share/defaults/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/usr/bin/qemu-vanilla-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Specifies whether virtio-mem will be enabled.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor;
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-9p (default)
# - virtio-fs
shared_fs = "virtio-9p"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"
# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies whether cache-related options will be set on block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypassing the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value is the number of pcie_root_port devices to add
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2
# If the vhost-net backend for virtio-net is not desired, set this to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# requests from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# network interfaces being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used with a customized network. Only creates a tap device; no veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have potential impacts on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# If enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```
---
# KSM throttler
## version
Output of "`/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`":
```
kata-ksm-throttler version 1.11.0-rc0-ae0fdd0
```
## systemd service
# Image details
```yaml
---
osbuilder:
url: "https://github.com/kata-containers/osbuilder"
version: "unknown"
rootfs-creation-time: "2020-04-20T16:06:11.186926495+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
name: "Clear"
version: "32870"
packages:
default:
- "chrony"
- "iptables-bin"
- "kmod-bin"
- "libudev0-shim"
- "systemd"
- "util-linux-bin"
extra:
agent:
url: "https://github.com/kata-containers/agent"
name: "kata-agent"
version: "1.11.0-rc0-d4df5d96ba10ced41d2d614a35ad6d535be045ba"
agent-is-init-daemon: "no"
```
---
# Initrd details
No initrd
---
# Logfiles
## Runtime logs
Recent runtime problems found in system journal:
```
time="2020-06-10T13:36:37.534725328+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T13:36:37.534931909+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.70254939+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 source=virtcontainers
time="2020-06-10T14:01:12.704269156+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.704496981+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.79450014+08:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/rootfs"
time="2020-06-10T14:01:12.795557342+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 error="no such file or directory" name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 share-dir=/run/kata-containers/shared/sandboxes/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers subsystem=container
time="2020-06-10T14:01:12.80120647+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:12.80197238+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.802217092+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.803793736+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:12.804526223+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.80469411+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.536699276+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:13.540063723+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.545239263+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.557783906+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers subsystem=sandbox
time="2020-06-10T14:01:35.214764312+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.382030613+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4052 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=sandbox
time="2020-06-10T14:01:45.422000031+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=start container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4283 source=virtcontainers
time="2020-06-10T14:01:45.423379095+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.423569863+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.425313174+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=start container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4283 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:01:45.425923857+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.426071584+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.669156255+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 source=virtcontainers
time="2020-06-10T14:14:58.670869147+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.671148966+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.763401356+08:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/rootfs"
time="2020-06-10T14:14:58.764459351+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 error="no such file or directory" name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 share-dir=/run/kata-containers/shared/sandboxes/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=container
time="2020-06-10T14:14:58.770127779+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:58.770859174+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.77102566+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.772619748+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:58.77333799+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.773510169+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.517025844+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:59.520103794+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.520400481+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.537999421+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=sandbox
time="2020-06-11T09:39:14.05319558+08:00" level=error msg="failed to open file for reading" driver=fs error="open /run/vc/uuid/uuid.json: no such file or directory" file=/run/vc/uuid/uuid.json source=virtcontainers/persist/fs subsystem=persist
time="2020-06-11T09:39:14.053329115+08:00" level=info msg="Load UUID store failed" arch=amd64 command=create container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2432 source=virtcontainers subsystem=acrn
time="2020-06-11T09:39:14.053925789+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.42284489+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2432 sandbox=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc source=virtcontainers subsystem=sandbox
time="2020-06-11T09:39:24.474670357+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc/config.json: no such file or directory" arch=amd64 command=start container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2531 source=virtcontainers
time="2020-06-11T09:39:24.476164168+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.476229109+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.477706298+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc/config.json: no such file or directory" arch=amd64 command=start container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2531 sandbox=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc source=virtcontainers
time="2020-06-11T09:39:24.478314772+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.478386175+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
```
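For reference, these entries come from the system journal; Kata 1.x components log under their own syslog identifiers (matching the `name=` fields above), so they can be pulled directly:

```sh
# Collect recent runtime and proxy messages from the journal.
sudo journalctl -t kata-runtime --since "2020-06-10" --no-pager
sudo journalctl -t kata-proxy   --since "2020-06-09" --no-pager
```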
## Proxy logs
Recent proxy problems found in system journal:
```
time="2020-06-09T17:52:38.046760855+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/227ff69b6eeb37431b7a51d550c16c5c6c45c6cc637a9a25af9df7aeab50298c/kata.sock: use of closed network connection" name=kata-proxy pid=2952 sandbox=227ff69b6eeb37431b7a51d550c16c5c6c45c6cc637a9a25af9df7aeab50298c source=proxy
time="2020-06-09T17:58:21.450558808+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/cafd67234a96b5e91be1143e826a01a199ceefeb5810fbe05f2c7db5a215ba60/proxy.sock: use of closed network connection" name=kata-proxy pid=3023 sandbox=cafd67234a96b5e91be1143e826a01a199ceefeb5810fbe05f2c7db5a215ba60 source=proxy
time="2020-06-09T18:05:49.669961829+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/09f753a4e16d06b8e0e3ef6dffbb858df82f5583609d7ed51a9bf3fc757c10f0/kata.sock: use of closed network connection" name=kata-proxy pid=2623 sandbox=09f753a4e16d06b8e0e3ef6dffbb858df82f5583609d7ed51a9bf3fc757c10f0 source=proxy
time="2020-06-09T19:18:30.425982134+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/30279f5bc7d5e788ace93837bf1e1bf08dee9abdf0fe8d4b3b3269936269b487/kata.sock: use of closed network connection" name=kata-proxy pid=2423 sandbox=30279f5bc7d5e788ace93837bf1e1bf08dee9abdf0fe8d4b3b3269936269b487 source=proxy
time="2020-06-09T19:54:42.289342993+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/24b13519456c77f747a880067a2f5f1d6a82a42cb57208ead3b900c3acbde89e/proxy.sock: use of closed network connection" name=kata-proxy pid=3765 sandbox=24b13519456c77f747a880067a2f5f1d6a82a42cb57208ead3b900c3acbde89e source=proxy
time="2020-06-09T19:59:52.909781215+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/b797f10ef242c291a0d3ada811646e124ee9c3fe9d68ba85b98704215ba52197/proxy.sock: use of closed network connection" name=kata-proxy pid=4206 sandbox=b797f10ef242c291a0d3ada811646e124ee9c3fe9d68ba85b98704215ba52197 source=proxy
time="2020-06-09T20:17:14.719499252+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a1415e6bd11052e51966e334cc195a3ebc170535a381dca08bfef5a9d13e124e/kata.sock: use of closed network connection" name=kata-proxy pid=4861 sandbox=a1415e6bd11052e51966e334cc195a3ebc170535a381dca08bfef5a9d13e124e source=proxy
time="2020-06-09T20:33:36.553575386+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/7769df7d75c99569564cd1510039306ab94541d36acba89e3c1724bba11ebb51/kata.sock: use of closed network connection" name=kata-proxy pid=2452 sandbox=7769df7d75c99569564cd1510039306ab94541d36acba89e3c1724bba11ebb51 source=proxy
time="2020-06-09T20:40:50.415250836+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a0362469f529e338731ecff8bde03f3ceb5aba2032ec1174b3619d4ab48582e7/kata.sock: use of closed network connection" name=kata-proxy pid=2445 sandbox=a0362469f529e338731ecff8bde03f3ceb5aba2032ec1174b3619d4ab48582e7 source=proxy
time="2020-06-09T21:23:19.138054453+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/f93345d4c0ded08e44794329e559629159e16d85710e0b520f02efeedfa1deb9/proxy.sock: use of closed network connection" name=kata-proxy pid=2473 sandbox=f93345d4c0ded08e44794329e559629159e16d85710e0b520f02efeedfa1deb9 source=proxy
time="2020-06-09T21:24:07.523641951+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/e7fda5c33c55741630e8f7346b31ea50f5f9fcbea98202f9acd6024234bb4669/proxy.sock: use of closed network connection" name=kata-proxy pid=3025 sandbox=e7fda5c33c55741630e8f7346b31ea50f5f9fcbea98202f9acd6024234bb4669 source=proxy
time="2020-06-09T21:25:35.826589333+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/22ad7f31b888e1254bae0e0c276a89a66387fba3ed224a64d918bc4f341c74c2/kata.sock: use of closed network connection" name=kata-proxy pid=3347 sandbox=22ad7f31b888e1254bae0e0c276a89a66387fba3ed224a64d918bc4f341c74c2 source=proxy
time="2020-06-09T21:26:19.349662821+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f97f19875e437dfd3c77ca3d7b49f460c3e01354f618475b5f7428c4f2db7fb3/kata.sock: use of closed network connection" name=kata-proxy pid=3659 sandbox=f97f19875e437dfd3c77ca3d7b49f460c3e01354f618475b5f7428c4f2db7fb3 source=proxy
time="2020-06-09T23:38:53.939196597+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/86e19c544aef276bd325e8a1fc97edad2765da09169852df89c8a2a4e673c87c/kata.sock: use of closed network connection" name=kata-proxy pid=3071 sandbox=86e19c544aef276bd325e8a1fc97edad2765da09169852df89c8a2a4e673c87c source=proxy
time="2020-06-10T14:01:12.808301245+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/kata.sock: use of closed network connection" name=kata-proxy pid=2485 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=proxy
time="2020-06-10T14:14:58.77686887+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/kata.sock: use of closed network connection" name=kata-proxy pid=4247 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=proxy
```
## Shim logs
No recent shim problems found in system journal.
## Throttler logs
No recent throttler problems found in system journal.
---
# Container manager details
Have `docker`
## Docker
Output of "`docker version`":
```
Client:
Version: 19.03.6
API version: 1.40
Go version: go1.12.17
Git commit: 369ce74a3c
Built: Fri Feb 28 23:45:43 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.6
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: 369ce74a3c
Built: Wed Feb 19 01:06:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.3-0ubuntu1~18.04.2
GitCommit:
docker-init:
Version: 0.18.0
GitCommit:
```
Output of "`docker info`":
```
Client:
Debug Mode: false
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 19.03.6
Storage Driver: devicemapper
Pool Name: docker-8:2-2236520-pool
Pool Blocksize: 65.54kB
Base Device Size: 10.74GB
Backing Filesystem: ext4
Udev Sync Supported: true
Data file: /dev/loop8
Metadata file: /dev/loop9
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 512.8MB
Data Space Total: 107.4GB
Data Space Available: 46.75GB
Metadata Space Used: 17.81MB
Metadata Space Total: 2.147GB
Metadata Space Available: 2.13GB
Thin Pool Minimum Free Space: 10.74GB
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.145 (2017-11-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: kata-runtime runc
Default Runtime: kata-runtime
Init Binary: docker-init
containerd version:
runc version: N/A
init version:
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.28-PKT-200203T060100Z-00002-gd7da1d772f85
Operating System: Ubuntu 18.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 14.73GiB
Name: nuc7i5bnh
ID: ZEVE:R64C:HPUN:GKOZ:KW2J:YLNS:GRXP:63SI:BRTG:LZUF:V2SN:6DEO
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
```
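The `Runtimes` and `Default Runtime` lines above show `kata-runtime` registered with Docker. A minimal sketch of how such a registration is typically done via `daemon.json` (the tutorial may instead use a systemd drop-in; paths as reported above):

```sh
# Register kata-runtime with Docker and make it the default runtime.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "default-runtime": "kata-runtime",
  "runtimes": {
    "kata-runtime": { "path": "/usr/bin/kata-runtime" }
  }
}
EOF
sudo systemctl restart docker
```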
Output of "`systemctl show docker`":
```
Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=infinity
TimeoutStopUSec=infinity
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-06-11 09:38:42 CST
WatchdogTimestampMonotonic=124490113
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1655
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Thu 2020-06-11 09:38:41 CST
ExecMainStartTimestampMonotonic=123319575
ExecMainExitTimestampMonotonic=0
ExecMainPID=1655
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --ip-forward=true ; ignore_errors=no ; start_time=[Thu 2020-06-11 09:38:41 CST] ; stop_time=[n/a] ; pid=1655 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=15
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=infinity
LimitNOFILESoft=infinity
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=60331
LimitSIGPENDINGSoft=60331
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket sysinit.target system.slice
Wants=network-online.target
BindsTo=containerd.service
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target
After=system.slice containerd.service systemd-journald.socket docker.socket firewalld.service network-online.target basic.target sysinit.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=disabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2020-06-11 09:38:42 CST
StateChangeTimestampMonotonic=124490116
InactiveExitTimestamp=Thu 2020-06-11 09:38:41 CST
InactiveExitTimestampMonotonic=123319796
ActiveEnterTimestamp=Thu 2020-06-11 09:38:42 CST
ActiveEnterTimestampMonotonic=124490116
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-06-11 09:38:41 CST
ConditionTimestampMonotonic=123313552
AssertTimestamp=Thu 2020-06-11 09:38:41 CST
AssertTimestampMonotonic=123313554
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=7672d47edc3d4cc4bff1482c0610fcf4
CollectMode=inactive
```
No `kubectl`
No `crio`
Have `containerd`
## containerd
Output of "`containerd --version`":
```
containerd github.com/containerd/containerd 1.3.3-0ubuntu1~18.04.2
```
Output of "`systemctl show containerd`":
```
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-06-11 09:36:40 CST
WatchdogTimestampMonotonic=3437269
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=653
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Thu 2020-06-11 09:36:40 CST
ExecMainStartTimestampMonotonic=3437210
ExecMainExitTimestampMonotonic=0
ExecMainPID=653
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Thu 2020-06-11 09:36:40 CST] ; stop_time=[Thu 2020-06-11 09:36:40 CST] ; pid=638 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Thu 2020-06-11 09:36:40 CST] ; stop_time=[n/a] ; pid=653 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=55
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=60331
LimitSIGPENDINGSoft=60331
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=sysinit.target system.slice
WantedBy=multi-user.target
BoundBy=docker.service
Conflicts=shutdown.target
Before=multi-user.target docker.service shutdown.target
After=local-fs.target system.slice basic.target sysinit.target systemd-journald.socket network.target
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2020-06-11 09:36:40 CST
StateChangeTimestampMonotonic=3437271
InactiveExitTimestamp=Thu 2020-06-11 09:36:40 CST
InactiveExitTimestampMonotonic=3392193
ActiveEnterTimestamp=Thu 2020-06-11 09:36:40 CST
ActiveEnterTimestampMonotonic=3437271
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-06-11 09:36:40 CST
ConditionTimestampMonotonic=3391132
AssertTimestamp=Thu 2020-06-11 09:36:40 CST
AssertTimestampMonotonic=3391132
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=9c02556453fd48a6a376179de5e8ca6e
CollectMode=inactive
```
Output of "`cat /etc/containerd/config.toml`":
```
cat: /etc/containerd/config.toml: No such file or directory
```
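Note: the absent `/etc/containerd/config.toml` is expected on a stock Ubuntu install; containerd simply runs with its built-in defaults when no config file is present. If the effective defaults are needed for comparison, they can be dumped with containerd's standard `config` subcommand, for example:
```
# Print containerd's built-in default configuration to stdout.
containerd config default

# Optionally persist it as a starting point for edits (run as root).
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```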
---
# Packages
Have `dpkg`
Output of "`dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":
```
ii kata-containers-image 1.11.0~rc0-44 amd64 Kata containers image
ii kata-ksm-throttler 1.11.0~rc0-47 amd64
ii kata-linux-container 5.4.32.73-47 amd64 linux kernel optimised for container-like workloads.
ii kata-proxy 1.11.0~rc0-45 amd64
ii kata-runtime 1.11.0~rc0-53 amd64
ii kata-shim 1.11.0~rc0-43 amd64
ii qemu-vanilla 4.1.1+git.99c5874a9b-48 amd64 linux kernel optimised for container-like workloads.
```
No `rpm`
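As a quick sanity check, the packaged versions above can be compared against the runtime binary actually installed; a minimal sketch using standard dpkg/kata-runtime invocations:
```
# Show the version reported by the kata-runtime binary itself.
kata-runtime --version

# List the installed Kata packages and their versions.
dpkg -l 'kata*'
```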
---
Description of problem
a) Install ACRN v1.6.1 (acrn-2020w18.4-140000p) on Ubuntu 18.04.4 as documented in this tutorial. b) Set up Kata containers on ACRN by following this tutorial. c) The
kata-manager
installs version 1.11.0-rc0 as of writing. d) Run a Kata container using ACRN VM, for examplesudo docker run -ti --runtime=kata-runtime busybox sh
, and check the networking inside the container is working?Expected result
The networking of the Kata container should be bridged to the host.
Actual result
Show kata-collect-data.sh details
# Meta details Running `kata-collect-data.sh` version `1.11.0-rc0 (commit )` at `2020-06-11.09:41:50.152514120+0800`. --- Runtime is `/usr/bin/kata-runtime`. # `kata-env` Output of "`/usr/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.24" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false SandboxCgroupOnly = false Path = "/usr/bin/kata-runtime" [Runtime.Version] OCI = "1.0.1-dev" [Runtime.Version.Version] Semver = "1.11.0-rc0" Major = 1 Minor = 11 Patch = 0 Commit = "" [Runtime.Config] Path = "/etc/kata-containers/configuration.toml" [Hypervisor] MachineType = "" Version = "DM version is: 1.6-2020w18.4.140000p_284 (daily tag:acrn-2020w18.4.140000p), build by mockbuild@2020-04-30 02:27:43" Path = "/usr/bin/acrn-dm" BlockDeviceDriver = "virtio-blk" EntropySource = "/dev/urandom" SharedFS = "" VirtioFSDaemon = "" Msize9p = 0 MemorySlots = 10 PCIeRootPort = 0 HotplugVFIOOnRootBus = false Debug = false UseVSock = false [Image] Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.11.0-rc0_agent_d4df5d96ba.img" [Kernel] Path = "/usr/share/kata-containers/vmlinuz-5.4.32.73-47.container" Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket" [Initrd] Path = "" [Proxy] Type = "kataProxy" Path = "/usr/libexec/kata-containers/kata-proxy" Debug = false [Proxy.Version] Semver = "1.11.0-rc0-a6f5534" Major = 1 Minor = 11 Patch = 0 Commit = "a6f5534" [Shim] Type = "kataShim" Path = "/usr/libexec/kata-containers/kata-shim" Debug = false [Shim.Version] Semver = "1.11.0-rc0-ad49288" Major = 1 Minor = 11 Patch = 0 Commit = "ad49288" [Agent] Type = "kata" Debug = false Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "5.4.28-PKT-200203T060100Z-00002-gd7da1d772f85" Architecture = "amd64" VMContainerCapable = true SupportVSocks = false [Host.Distro] Name = "Ubuntu" Version = "18.04" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz" [Netmon] Path = "/usr/libexec/kata-containers/kata-netmon" Debug = false Enable = false [Netmon.Version] Semver = "1.11.0-rc0" Major = 1 Minor = 11 Patch = 0 Commit = "<>"
```
---
# Runtime config files
## Runtime default config files
```
/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml
```
## Runtime config file contents
Output of "`cat "/etc/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-acrn.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.acrn]
path = "/usr/bin/acrn-dm"
ctlpath = "/usr/bin/acrnctl"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want that acrn uses the default firmware leave this option empty
firmware = ""
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 1
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. ACRN only supports virtio-blk.
block_device_driver = "virtio-blk"
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - bridged (Deprecated)
# Uses a linux bridge to interconnect the container interface to
# the VM. Works for most cases except macvlan and ipvlan.
# ***NOTE: This feature has been deprecated with plans to remove this
# feature in the future. Please use other network models listed below.
#
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used when customize network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="macvtap"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```
Output of "`cat "/usr/share/defaults/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/usr/bin/qemu-vanilla-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10
# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0
# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-9p (default)
# - virtio-fs
shared_fs = "virtio-9p"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/bin/virtiofsd"
# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices to live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't met the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used when customize network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```
---
# KSM throttler
## version
Output of "`/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`":
```
kata-ksm-throttler version 1.11.0-rc0-ae0fdd0
```
## systemd service
# Image details
```yaml
---
osbuilder:
url: "https://github.com/kata-containers/osbuilder"
version: "unknown"
rootfs-creation-time: "2020-04-20T16:06:11.186926495+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
name: "Clear"
version: "32870"
packages:
default:
- "chrony"
- "iptables-bin"
- "kmod-bin"
- "libudev0-shim"
- "systemd"
- "util-linux-bin"
extra:
agent:
url: "https://github.com/kata-containers/agent"
name: "kata-agent"
version: "1.11.0-rc0-d4df5d96ba10ced41d2d614a35ad6d535be045ba"
agent-is-init-daemon: "no"
```
---
# Initrd details
No initrd
---
# Logfiles
## Runtime logs
Recent runtime problems found in system journal:
```
time="2020-06-10T13:36:37.534725328+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T13:36:37.534931909+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.70254939+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 source=virtcontainers
time="2020-06-10T14:01:12.704269156+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.704496981+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.79450014+08:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/rootfs"
time="2020-06-10T14:01:12.795557342+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 error="no such file or directory" name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 share-dir=/run/kata-containers/shared/sandboxes/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers subsystem=container
time="2020-06-10T14:01:12.80120647+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:12.80197238+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.802217092+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.803793736+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:12.804526223+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:12.80469411+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.536699276+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/config.json: no such file or directory" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers
time="2020-06-10T14:01:13.540063723+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.545239263+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:13.557783906+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 name=kata-runtime pid=3882 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=virtcontainers subsystem=sandbox
time="2020-06-10T14:01:35.214764312+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.382030613+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4052 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=sandbox
time="2020-06-10T14:01:45.422000031+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=start container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4283 source=virtcontainers
time="2020-06-10T14:01:45.423379095+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.423569863+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.425313174+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=start container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4283 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:01:45.425923857+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:01:45.426071584+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.669156255+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 source=virtcontainers
time="2020-06-10T14:14:58.670869147+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.671148966+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.763401356+08:00" level=warning msg="no such file or directory: /run/kata-containers/shared/sandboxes/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/rootfs"
time="2020-06-10T14:14:58.764459351+08:00" level=warning msg="Could not remove container share dir" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 error="no such file or directory" name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 share-dir=/run/kata-containers/shared/sandboxes/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=container
time="2020-06-10T14:14:58.770127779+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:58.770859174+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.77102566+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.772619748+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:58.77333799+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:58.773510169+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.517025844+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/config.json: no such file or directory" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers
time="2020-06-10T14:14:59.520103794+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.520400481+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-10T14:14:59.537999421+08:00" level=warning msg="sandbox cgroups path is empty" arch=amd64 command=delete container=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 name=kata-runtime pid=4752 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=virtcontainers subsystem=sandbox
time="2020-06-11T09:39:14.05319558+08:00" level=error msg="failed to open file for reading" driver=fs error="open /run/vc/uuid/uuid.json: no such file or directory" file=/run/vc/uuid/uuid.json source=virtcontainers/persist/fs subsystem=persist
time="2020-06-11T09:39:14.053329115+08:00" level=info msg="Load UUID store failed" arch=amd64 command=create container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2432 source=virtcontainers subsystem=acrn
time="2020-06-11T09:39:14.053925789+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.42284489+08:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" arch=amd64 command=create container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2432 sandbox=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc source=virtcontainers subsystem=sandbox
time="2020-06-11T09:39:24.474670357+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc/config.json: no such file or directory" arch=amd64 command=start container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2531 source=virtcontainers
time="2020-06-11T09:39:24.476164168+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.476229109+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.477706298+08:00" level=warning msg="failed to get sandbox config from old store: open /var/lib/vc/sbs/3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc/config.json: no such file or directory" arch=amd64 command=start container=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc name=kata-runtime pid=2531 sandbox=3b0a074ee051c66075130d9a8e7ed99a101ba19aa8ac4f269c99df3c53b9eedc source=virtcontainers
time="2020-06-11T09:39:24.478314772+08:00" level=warning msg="Could not get device information" device=/dev/kvm error="no such file or directory" source=virtcontainers/pkg/cgroups
time="2020-06-11T09:39:24.478386175+08:00" level=warning msg="cgroups have not been created and cgroup path is empty" source=virtcontainers/pkg/cgroups
```
## Proxy logs
Recent proxy problems found in system journal:
```
time="2020-06-09T17:52:38.046760855+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/227ff69b6eeb37431b7a51d550c16c5c6c45c6cc637a9a25af9df7aeab50298c/kata.sock: use of closed network connection" name=kata-proxy pid=2952 sandbox=227ff69b6eeb37431b7a51d550c16c5c6c45c6cc637a9a25af9df7aeab50298c source=proxy
time="2020-06-09T17:58:21.450558808+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/cafd67234a96b5e91be1143e826a01a199ceefeb5810fbe05f2c7db5a215ba60/proxy.sock: use of closed network connection" name=kata-proxy pid=3023 sandbox=cafd67234a96b5e91be1143e826a01a199ceefeb5810fbe05f2c7db5a215ba60 source=proxy
time="2020-06-09T18:05:49.669961829+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/09f753a4e16d06b8e0e3ef6dffbb858df82f5583609d7ed51a9bf3fc757c10f0/kata.sock: use of closed network connection" name=kata-proxy pid=2623 sandbox=09f753a4e16d06b8e0e3ef6dffbb858df82f5583609d7ed51a9bf3fc757c10f0 source=proxy
time="2020-06-09T19:18:30.425982134+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/30279f5bc7d5e788ace93837bf1e1bf08dee9abdf0fe8d4b3b3269936269b487/kata.sock: use of closed network connection" name=kata-proxy pid=2423 sandbox=30279f5bc7d5e788ace93837bf1e1bf08dee9abdf0fe8d4b3b3269936269b487 source=proxy
time="2020-06-09T19:54:42.289342993+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/24b13519456c77f747a880067a2f5f1d6a82a42cb57208ead3b900c3acbde89e/proxy.sock: use of closed network connection" name=kata-proxy pid=3765 sandbox=24b13519456c77f747a880067a2f5f1d6a82a42cb57208ead3b900c3acbde89e source=proxy
time="2020-06-09T19:59:52.909781215+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/b797f10ef242c291a0d3ada811646e124ee9c3fe9d68ba85b98704215ba52197/proxy.sock: use of closed network connection" name=kata-proxy pid=4206 sandbox=b797f10ef242c291a0d3ada811646e124ee9c3fe9d68ba85b98704215ba52197 source=proxy
time="2020-06-09T20:17:14.719499252+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a1415e6bd11052e51966e334cc195a3ebc170535a381dca08bfef5a9d13e124e/kata.sock: use of closed network connection" name=kata-proxy pid=4861 sandbox=a1415e6bd11052e51966e334cc195a3ebc170535a381dca08bfef5a9d13e124e source=proxy
time="2020-06-09T20:33:36.553575386+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/7769df7d75c99569564cd1510039306ab94541d36acba89e3c1724bba11ebb51/kata.sock: use of closed network connection" name=kata-proxy pid=2452 sandbox=7769df7d75c99569564cd1510039306ab94541d36acba89e3c1724bba11ebb51 source=proxy
time="2020-06-09T20:40:50.415250836+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a0362469f529e338731ecff8bde03f3ceb5aba2032ec1174b3619d4ab48582e7/kata.sock: use of closed network connection" name=kata-proxy pid=2445 sandbox=a0362469f529e338731ecff8bde03f3ceb5aba2032ec1174b3619d4ab48582e7 source=proxy
time="2020-06-09T21:23:19.138054453+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/f93345d4c0ded08e44794329e559629159e16d85710e0b520f02efeedfa1deb9/proxy.sock: use of closed network connection" name=kata-proxy pid=2473 sandbox=f93345d4c0ded08e44794329e559629159e16d85710e0b520f02efeedfa1deb9 source=proxy
time="2020-06-09T21:24:07.523641951+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/e7fda5c33c55741630e8f7346b31ea50f5f9fcbea98202f9acd6024234bb4669/proxy.sock: use of closed network connection" name=kata-proxy pid=3025 sandbox=e7fda5c33c55741630e8f7346b31ea50f5f9fcbea98202f9acd6024234bb4669 source=proxy
time="2020-06-09T21:25:35.826589333+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/22ad7f31b888e1254bae0e0c276a89a66387fba3ed224a64d918bc4f341c74c2/kata.sock: use of closed network connection" name=kata-proxy pid=3347 sandbox=22ad7f31b888e1254bae0e0c276a89a66387fba3ed224a64d918bc4f341c74c2 source=proxy
time="2020-06-09T21:26:19.349662821+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f97f19875e437dfd3c77ca3d7b49f460c3e01354f618475b5f7428c4f2db7fb3/kata.sock: use of closed network connection" name=kata-proxy pid=3659 sandbox=f97f19875e437dfd3c77ca3d7b49f460c3e01354f618475b5f7428c4f2db7fb3 source=proxy
time="2020-06-09T23:38:53.939196597+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/86e19c544aef276bd325e8a1fc97edad2765da09169852df89c8a2a4e673c87c/kata.sock: use of closed network connection" name=kata-proxy pid=3071 sandbox=86e19c544aef276bd325e8a1fc97edad2765da09169852df89c8a2a4e673c87c source=proxy
time="2020-06-10T14:01:12.808301245+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0/kata.sock: use of closed network connection" name=kata-proxy pid=2485 sandbox=159a53a0e66b4028a40b4fe5ce034b226235d09ea0f36efe0464254c74d867c0 source=proxy
time="2020-06-10T14:14:58.77686887+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5/kata.sock: use of closed network connection" name=kata-proxy pid=4247 sandbox=ce57bc35101f67aa9921bfc209d87514e0f5ce66db1d35ab6f41acf8e47cf9f5 source=proxy
```
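Note: every sandbox in the proxy log above ends with the same `use of closed network connection` fatal at teardown. If more context helps with triage, the complete kata-proxy stream can be pulled straight from the journal; a minimal sketch, assuming kata-proxy logs to the system journal under its own name as the syslog identifier:

```sh
# Pull the full kata-proxy stream for the current boot
# (assumption: SYSLOG_IDENTIFIER is "kata-proxy").
sudo journalctl -q -b -t kata-proxy
```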
## Shim logs
No recent shim problems found in system journal.
## Throttler logs
No recent throttler problems found in system journal.
---
# Container manager details
Have `docker`
## Docker
Output of "`docker version`":
```
Client:
 Version: 19.03.6
 API version: 1.40
 Go version: go1.12.17
 Git commit: 369ce74a3c
 Built: Fri Feb 28 23:45:43 2020
 OS/Arch: linux/amd64
 Experimental: false

Server:
 Engine:
  Version: 19.03.6
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.17
  Git commit: 369ce74a3c
  Built: Wed Feb 19 01:06:16 2020
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.3.3-0ubuntu1~18.04.2
  GitCommit:
 docker-init:
  Version: 0.18.0
  GitCommit:
```
Output of "`docker info`":
```
Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 19.03.6
 Storage Driver: devicemapper
  Pool Name: docker-8:2-2236520-pool
  Pool Blocksize: 65.54kB
  Base Device Size: 10.74GB
  Backing Filesystem: ext4
  Udev Sync Supported: true
  Data file: /dev/loop8
  Metadata file: /dev/loop9
  Data loop file: /var/lib/docker/devicemapper/devicemapper/data
  Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  Data Space Used: 512.8MB
  Data Space Total: 107.4GB
  Data Space Available: 46.75GB
  Metadata Space Used: 17.81MB
  Metadata Space Total: 2.147GB
  Metadata Space Available: 2.13GB
  Thin Pool Minimum Free Space: 10.74GB
  Deferred Removal Enabled: true
  Deferred Deletion Enabled: true
  Deferred Deleted Device Count: 0
  Library Version: 1.02.145 (2017-11-03)
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: kata-runtime runc
 Default Runtime: kata-runtime
 Init Binary: docker-init
 containerd version:
 runc version: N/A
 init version:
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.28-PKT-200203T060100Z-00002-gd7da1d772f85
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 14.73GiB
 Name: nuc7i5bnh
 ID: ZEVE:R64C:HPUN:GKOZ:KW2J:YLNS:GRXP:63SI:BRTG:LZUF:V2SN:6DEO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
```
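Side note on the storage warnings at the end of `docker info`: they are unrelated to the Kata networking problem, but if this host needs a supported storage driver, a minimal `/etc/docker/daemon.json` sketch is below. It assumes the standard Kata 1.x runtime path shown in `kata-env` and a backing filesystem that supports `overlay2`:

```sh
# Hypothetical daemon.json: keep the kata-runtime registration as the
# default runtime, but move storage off loopback devicemapper to overlay2.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "default-runtime": "kata-runtime",
  "runtimes": {
    "kata-runtime": { "path": "/usr/bin/kata-runtime" }
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```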
Output of "`systemctl show docker`":
```
Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=infinity
TimeoutStopUSec=infinity
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-06-11 09:38:42 CST
WatchdogTimestampMonotonic=124490113
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1655
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Thu 2020-06-11 09:38:41 CST
ExecMainStartTimestampMonotonic=123319575
ExecMainExitTimestampMonotonic=0
ExecMainPID=1655
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --ip-forward=true ; ignore_errors=no ; start_time=[Thu 2020-06-11 09:38:41 CST] ; stop_time=[n/a] ; pid=1655 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=15
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=infinity
LimitNOFILESoft=infinity
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=60331
LimitSIGPENDINGSoft=60331
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket sysinit.target system.slice
Wants=network-online.target
BindsTo=containerd.service
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target
After=system.slice containerd.service systemd-journald.socket docker.socket firewalld.service network-online.target basic.target sysinit.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=disabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2020-06-11 09:38:42 CST
StateChangeTimestampMonotonic=124490116
InactiveExitTimestamp=Thu 2020-06-11 09:38:41 CST
InactiveExitTimestampMonotonic=123319796
ActiveEnterTimestamp=Thu 2020-06-11 09:38:42 CST
ActiveEnterTimestampMonotonic=124490116
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-06-11 09:38:41 CST
ConditionTimestampMonotonic=123313552
AssertTimestamp=Thu 2020-06-11 09:38:41 CST
AssertTimestampMonotonic=123313554
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=7672d47edc3d4cc4bff1482c0610fcf4
CollectMode=inactive
```
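Worth flagging from the unit dump above: `UnitFileState=disabled` together with `TriggeredBy=docker.socket`, i.e. dockerd is socket-activated on this host rather than enabled at boot. If it should always start unconditionally, the usual one-liner (plain systemd, nothing Kata-specific):

```sh
# Enable docker.service at boot instead of relying on socket activation.
sudo systemctl enable docker
```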
No `kubectl`
No `crio`
Have `containerd`
## containerd
Output of "`containerd --version`":
```
containerd github.com/containerd/containerd 1.3.3-0ubuntu1~18.04.2
```
Output of "`systemctl show containerd`":
```
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-06-11 09:36:40 CST
WatchdogTimestampMonotonic=3437269
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=653
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Thu 2020-06-11 09:36:40 CST
ExecMainStartTimestampMonotonic=3437210
ExecMainExitTimestampMonotonic=0
ExecMainPID=653
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Thu 2020-06-11 09:36:40 CST] ; stop_time=[Thu 2020-06-11 09:36:40 CST] ; pid=638 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Thu 2020-06-11 09:36:40 CST] ; stop_time=[n/a] ; pid=653 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=55
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=60331
LimitSIGPENDINGSoft=60331
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=sysinit.target system.slice
WantedBy=multi-user.target
BoundBy=docker.service
Conflicts=shutdown.target
Before=multi-user.target docker.service shutdown.target
After=local-fs.target system.slice basic.target sysinit.target systemd-journald.socket network.target
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Thu 2020-06-11 09:36:40 CST
StateChangeTimestampMonotonic=3437271
InactiveExitTimestamp=Thu 2020-06-11 09:36:40 CST
InactiveExitTimestampMonotonic=3392193
ActiveEnterTimestamp=Thu 2020-06-11 09:36:40 CST
ActiveEnterTimestampMonotonic=3437271
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-06-11 09:36:40 CST
ConditionTimestampMonotonic=3391132
AssertTimestamp=Thu 2020-06-11 09:36:40 CST
AssertTimestampMonotonic=3391132
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=9c02556453fd48a6a376179de5e8ca6e
CollectMode=inactive
```
Output of "`cat /etc/containerd/config.toml`":
```
cat: /etc/containerd/config.toml: No such file or directory
```
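The missing `/etc/containerd/config.toml` just means containerd is running on its built-in defaults. If a config file is wanted later (e.g. to adjust runtimes), the stock one can be generated with containerd's own subcommand:

```sh
# Write containerd's built-in default configuration to disk for editing.
containerd config default | sudo tee /etc/containerd/config.toml
```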
---
# Packages
Have `dpkg`
Output of "`dpkg -l|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":
```
ii kata-containers-image 1.11.0~rc0-44 amd64 Kata containers image
ii kata-ksm-throttler 1.11.0~rc0-47 amd64
ii kata-linux-container 5.4.32.73-47 amd64 linux kernel optimised for container-like workloads.
ii kata-proxy 1.11.0~rc0-45 amd64
ii kata-runtime 1.11.0~rc0-53 amd64
ii kata-shim 1.11.0~rc0-43 amd64
ii qemu-vanilla 4.1.1+git.99c5874a9b-48 amd64 linux kernel optimised for container-like workloads.
```
No `rpm`
---
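Finally, for anyone reproducing this setup: a quick host sanity check is the `kata-check` subcommand shipped with the `kata-runtime` package listed above, which should confirm the host still reports itself VM-container-capable under the ACRN kernel:

```sh
# Verify the host can run Kata Containers with the configured hypervisor.
sudo kata-runtime kata-check
```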