kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

System cannot boot: Missing /etc/machine-id and /etc is mounted read-only #1537

Closed: zhsj closed this issue 5 years ago

zhsj commented 5 years ago

Kata version: 1.7.0-alpha0

Not sure if it's a big problem, since I haven't seen any failures. But the guest kernel dmesg says:

```
# dmesg -l err
[    0.762826] systemd[1]: System cannot boot: Missing /etc/machine-id and /etc is mounted read-only.
[    0.762949] systemd[1]: Booting up is supported only when:
[    0.762980] systemd[1]: 1) /etc/machine-id exists and is populated.
[    0.763018] systemd[1]: 2) /etc/machine-id exists and is empty.
[    0.763085] systemd[1]: 3) /etc/machine-id is missing and /etc is writable.
```

I think it's caused by #1389
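
For reference, condition 2 in the boot message above can be satisfied at image build time by shipping an empty `/etc/machine-id` in the guest rootfs. A minimal sketch, assuming an osbuilder-style rootfs directory (`ROOTFS_DIR` is an illustrative name, not an option from this issue):

```sh
# Assumption: ROOTFS_DIR points at the guest rootfs being assembled,
# e.g. by the osbuilder scripts.

# Option 1: ship an empty machine-id; systemd then boots with a transient
# machine ID even though /etc is read-only (condition 2 above).
touch "${ROOTFS_DIR}/etc/machine-id"

# Option 2: populate it at build time instead (condition 1 above).
systemd-machine-id-setup --root="${ROOTFS_DIR}"
```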

grahamwhaley commented 5 years ago

Thanks for the report @zhsj - can you start by pasting the output of `kata-runtime kata-env` here please? /cc @amshinde @devimc @gnawux @bergwolf

zhsj commented 5 years ago

okay.

kata-collect-data.sh details:

# Meta details Running `kata-collect-data.sh` version `1.7.0-alpha0 (commit fef124921cb877d4253e8f0b4d818d1ddcc43129)` at `2019-04-16.17:50:35.538738364+0800`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.21" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] Semver = "1.7.0-alpha0" Commit = "fef124921cb877d4253e8f0b4d818d1ddcc43129" OCI = "1.0.1-dev" [Runtime.Config] Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/qemu-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = false UseVSock = false [Image] Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.7.0-alpha0_agent_74639b76c00.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.28-31" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.7.0-alpha0-a6ab2923beb82891e87f4745300870f791790286" Path = "/opt/kata/libexec/kata-containers/kata-proxy" Debug = false [Shim] Type = "kataShim" Version = "kata-shim version 1.7.0-alpha0-f219b89a387d8091600a1d2aa627d6b60d5a2e7b" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = false [Agent] Type = "kata" [Host] Kernel = "4.19.0-2-amd64" Architecture = "amd64" VMContainerCapable = true SupportVSocks = false [Host.Distro] Name = "Debian GNU/Linux" Version = "10" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz" [Netmon] Version = "kata-netmon version 1.7.0-alpha0" Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = false Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /opt/kata/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Config file `/etc/kata-containers/configuration.toml` not found Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. 
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. 
# This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # There is no field for this section. The goal is only to be able to # specify which type of agent the user wants to use. [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. 
# (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="macvtap" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. 
# (default: []) experimental=[] ``` Config file `/usr/share/defaults/kata-containers/configuration.toml` not found --- # KSM throttler ## version find: ‘/usr/libexec’: No such file or directory Output of "` --version`": ``` ./kata-collect-data.sh: line 176: --version: command not found ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-04-05T21:18:33.459414928+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "28670" packages: default: - "chrony" - "iptables-bin" - "libudev0-shim" - "systemd" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.0-alpha0-74639b76c00375da340bea681da3b9771e69e01a" agent-is-init-daemon: "no" dax-nvdimm-header: "true" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs Recent runtime problems found in system journal: ``` time="2019-04-16T17:09:36.22184168+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c error="open /run/vc/sbs/1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c/devices.json: no such file or directory" name=kata-runtime pid=30664 sandbox=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c sandboxid=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c source=virtcontainers subsystem=sandbox time="2019-04-16T17:13:09.270493504+08:00" level=error msg="Signal 37 is not supported" arch=amd64 command=kill container=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c name=kata-runtime pid=31295 sandbox=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c source=runtime time="2019-04-16T17:13:52.549484113+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 error="open /run/vc/sbs/722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97/devices.json: no such file or directory" name=kata-runtime pid=31534 sandbox=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 sandboxid=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 source=virtcontainers subsystem=sandbox time="2019-04-16T17:15:11.043418224+08:00" level=error msg="Signal 37 is not supported" arch=amd64 command=kill container=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 name=kata-runtime pid=31900 sandbox=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 source=runtime time="2019-04-16T17:16:07.597480157+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc error="open /run/vc/sbs/120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc/devices.json: no such file or directory" name=kata-runtime pid=32245 sandbox=120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc sandboxid=120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc source=virtcontainers subsystem=sandbox time="2019-04-16T17:20:13.705520917+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc error="open /run/vc/sbs/1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc/devices.json: no such file or directory" 
name=kata-runtime pid=574 sandbox=1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc sandboxid=1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc source=virtcontainers subsystem=sandbox time="2019-04-16T17:20:28.849618097+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b error="open /run/vc/sbs/b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b/devices.json: no such file or directory" name=kata-runtime pid=815 sandbox=b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b sandboxid=b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b source=virtcontainers subsystem=sandbox time="2019-04-16T17:28:10.237607748+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd error="open /run/vc/sbs/a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd/devices.json: no such file or directory" name=kata-runtime pid=2319 sandbox=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd sandboxid=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd source=virtcontainers subsystem=sandbox time="2019-04-16T17:36:05.665810562+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd error="open /run/vc/sbs/a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd/devices.json: no such file or directory" name=kata-runtime pid=3587 sandbox=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd sandboxid=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd source=virtcontainers subsystem=sandbox ``` ## Proxy logs Recent proxy problems found in system journal: ``` time="2019-04-16T17:13:11.562068776+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c/kata.sock: use of closed network connection" name=kata-proxy pid=30700 sandbox=1adf1b75979f64cc22620f67b87507227ffc8848481b66214e4c3499feacc48c source=proxy time="2019-04-16T17:15:13.363385513+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97/kata.sock: use of closed network connection" name=kata-proxy pid=31567 sandbox=722b9cb728942296ae28ac693cbd041c545670bb6010ef985848376963846c97 source=proxy time="2019-04-16T17:19:43.174265273+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc/proxy.sock: use of closed network connection" name=kata-proxy pid=32278 sandbox=120b03a054f0e5157feb7d3e1e52386ef2bca34c18b3342e396f3d3db81477fc source=proxy time="2019-04-16T17:20:15.621151503+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc/kata.sock: use of closed network connection" name=kata-proxy pid=611 sandbox=1cd0e6b1dd7342c97b86b0d87b0cebfa48e3830d5227bba1b2f301a8260b72cc source=proxy time="2019-04-16T17:27:31.984892078+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b/proxy.sock: use of closed network connection" name=kata-proxy pid=848 
sandbox=b3b637d7c368576ff86fab6470c249c126ba662bca4616a87ae51229f00b337b source=proxy time="2019-04-16T17:35:30.894453877+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd/proxy.sock: use of closed network connection" name=kata-proxy pid=2352 sandbox=a2cddd4ff3f46820c6167c72f719580a2b186538e7fdc676ba5c351f1e3189dd source=proxy ``` ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal. --- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 18.09.1 API version: 1.39 Go version: go1.11.5 Git commit: 4c52b90 Built: Mon, 11 Mar 2019 00:06:03 +0000 OS/Arch: linux/amd64 Experimental: false Server: Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.11.5 Git commit: 4c52b90 Built: Mon Mar 11 00:06:03 2019 OS/Arch: linux/amd64 Experimental: false ``` Output of "`docker info`": ``` Containers: 6 Running: 2 Paused: 0 Stopped: 4 Images: 48 Server Version: 18.09.1 Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: runc kata-runtime Default Runtime: runc Init Binary: docker-init containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce runc version: 1.0.0~rc6+dfsg1-3 init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662) Security Options: apparmor seccomp Profile: default Kernel Version: 4.19.0-2-amd64 Operating System: Debian GNU/Linux buster/sid OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.679GiB Name: zhsj-debian ID: QJ6V:IQZD:ISBY:GLGJ:QXVM:44GY:MCPZ:RQ4B:DBKB:BMBR:34LJ:ZXPG Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 10.110.210.0/24 127.0.0.0/8 Registry Mirrors: https://registry.docker-cn.com/ Live Restore Enabled: false WARNING: No swap limit support ``` Output of "`systemctl show docker`": ``` Type=notify Restart=on-failure NotifyAccess=main RestartUSec=100ms TimeoutStartUSec=infinity TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=11924 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Tue 2019-04-16 15:28:48 CST ExecMainStartTimestampMonotonic=2138225015939 ExecMainExitTimestampMonotonic=0 ExecMainPID=11924 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/sbin/dockerd ; argv[]=/usr/sbin/dockerd -H fd:// $DOCKER_OPTS ; ignore_errors=no ; start_time=[Tue 2019-04-16 15:28:48 CST] ; stop_time=[n/a] ; pid=11924 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/docker.service MemoryCurrent=385941504 CPUUsageNSec=[not set] TasksCurrent=78 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io 
blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no EnvironmentFiles=/etc/default/docker (ignore_errors=yes) UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=31337 LimitSIGPENDINGSoft=31337 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=docker.service Names=docker.service Requires=system.slice docker.socket sysinit.target Wants=network-online.target WantedBy=multi-user.target ConsistsOf=docker.socket Conflicts=shutdown.target Before=shutdown.target multi-user.target After=firewalld.service basic.target docker.socket system.slice sysinit.target systemd-journald.socket network-online.target TriggeredBy=docker.socket Documentation=https://docs.docker.com Description=Docker Application Container Engine LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/docker.service 
UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Tue 2019-04-16 15:28:50 CST StateChangeTimestampMonotonic=2138226404871 InactiveExitTimestamp=Tue 2019-04-16 15:28:48 CST InactiveExitTimestampMonotonic=2138225016160 ActiveEnterTimestamp=Tue 2019-04-16 15:28:50 CST ActiveEnterTimestampMonotonic=2138226404871 ActiveExitTimestamp=Tue 2019-04-16 15:28:47 CST ActiveExitTimestampMonotonic=2138224005329 InactiveEnterTimestamp=Tue 2019-04-16 15:28:48 CST InactiveEnterTimestampMonotonic=2138225010244 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Tue 2019-04-16 15:28:48 CST ConditionTimestampMonotonic=2138225015432 AssertTimestamp=Tue 2019-04-16 15:28:48 CST AssertTimestampMonotonic=2138225015432 Transient=no Perpetual=no StartLimitIntervalUSec=1min StartLimitBurst=3 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=a60479fbe5bd4458ba0dde98cc5aac39 CollectMode=inactive ``` No `kubectl` No `crio` Have `containerd` ## containerd Output of "`containerd --version`": ``` containerd github.com/containerd/containerd 1.2.4~ds1-1 ``` Output of "`systemctl show containerd`": ``` Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4915 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity 
LimitLOCKSSoft=infinity LimitSIGPENDING=31337 LimitSIGPENDINGSoft=31337 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=system.slice sysinit.target Conflicts=shutdown.target Before=shutdown.target After=network.target basic.target systemd-journald.socket system.slice sysinit.target Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=inactive SubState=dead FragmentPath=/etc/systemd/system/containerd.service UnitFileState=disabled UnitFilePreset=enabled StateChangeTimestampMonotonic=0 InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 AssertTimestampMonotonic=0 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 CollectMode=inactive ``` Output of "`cat /etc/containerd/config.toml`": ``` cat: /etc/containerd/config.toml: No such file or directory ``` --- # Packages Have `dpkg` Output of "`dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`": ``` ii qemu-system-common 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (common files) ii 
qemu-system-data 1:3.1+dfsg-7 all QEMU full system emulation (data files) ii qemu-system-x86 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (x86) ii qemu-utils 1:3.1+dfsg-7 amd64 QEMU utilities ``` No `rpm` ---

jodh-intel commented 5 years ago

@devimc, @jcvenegas - sounds like we need to create this file in osbuilder?

jcvenegas commented 5 years ago

@jodh-intel I don't expect it to be created by us; as part of the Clear Linux stateless features this should not be needed, and if it is needed it may be done by systemd itself. @zhsj so you see the error in dmesg, but the container is still usable?

zhsj commented 5 years ago

@jcvenegas not sure, but I haven't seen errors when creating a simple container like `docker run debian bash`.

BTW, I do see other errors in dmesg, like:

```
[   58.128303] systemd[1]: chronyd.service: Failed to run 'start' task: Read-only file system
[   58.128454] systemd[1]: chronyd.service: Failed with result 'resources'.
[   58.128753] systemd[1]: Failed to start NTP client/server.
```

should I open another issue, or just continue here, since they are all caused by the read-only root?
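
Incidentally, the guest kernel Parameters in the kata-env output above already mask a number of systemd units via systemd.mask=. If the goal were only to silence this particular error, chronyd could hypothetically be masked the same way through the `kernel_params` option; a workaround sketch for configuration.toml, not a fix for the read-only root itself:

```toml
# Hypothetical workaround: append to the guest kernel command line so systemd
# never starts the chronyd unit (mirrors the existing systemd.mask= entries).
kernel_params = "systemd.mask=chronyd.service"
```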

jcvenegas commented 5 years ago

@zhsj the first issue seems not critical, but the second probably is; I'd say keep the conversation on this issue. @amshinde is this something you expected, or do you suggest removing the read-only mount to allow chronyd?

amshinde commented 5 years ago

> not sure, but I haven't seen errors when creating a simple container like `docker run debian bash`.

@zhsj What container are you running?

zhsj commented 5 years ago

> > not sure, but I haven't seen errors when creating a simple container like `docker run debian bash`.
>
> @zhsj What container are you running?

`docker run --runtime kata-runtime -it debian bash`. However, I don't think the container image is relevant here. The dmesg output (including the chronyd errors) comes from the VM kernel.

amshinde commented 5 years ago

@jodh-intel @jcvenegas systemd expects /etc/machine-id to be present, else it creates it. In the case of Clear Linux, because of the stateless approach, this file is missing. For testing purposes, I created this file myself, which helped to get past this error, but I still see chronyd.service fail.

Running the chronyd binary with `/usr/sbin/chronyd -d` on the debug console did not give any errors. However, running the systemd service does give the error `chronyd.service: Failed to run 'start' task: Read-only file system`.

Following this issue filed against systemd about systemd services not working with a read-only file system, I did try a few other hacks, like mounting /var as tmpfs, but it did not solve the issue.

Considering that I was able to run the chrony binary manually, it looks like chrony itself can run on a read-only file system, but systemd requires read-write access to the root file system.

cc @devimc @grahamwhaley

zhsj commented 5 years ago

https://github.com/systemd/systemd/issues/5610

> evverx: The root cause is `PrivateTmp=`, which requires writable /var/tmp.

> poettering: I am pretty sure we should expect that /var/tmp is writable during normal operation. We expect the same from /run and /tmp, and /var as a whole. Closing.
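
Given that root cause, one conceivable workaround is a systemd drop-in that turns PrivateTmp= off for chronyd inside the guest image. A sketch, assuming the drop-in is added while the rootfs is still writable at build time (`ROOTFS_DIR` and the file name are illustrative):

```sh
# PrivateTmp=yes is what makes systemd need a writable /var/tmp (see the
# systemd issue above); a drop-in can disable it for this single unit.
mkdir -p "${ROOTFS_DIR}/etc/systemd/system/chronyd.service.d"
cat > "${ROOTFS_DIR}/etc/systemd/system/chronyd.service.d/no-private-tmp.conf" <<'EOF'
[Service]
PrivateTmp=no
EOF
```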

devimc commented 5 years ago

I can't reproduce this; also, we have a test that checks for dmesg errors: https://github.com/kata-containers/tests/blob/master/integration/docker/run_test.go#L278
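
For anyone trying to reproduce this locally, the CI check essentially looks for error-level guest kernel messages from inside a container; a rough manual equivalent (not the actual test code), assuming the runtime is registered with docker as kata-runtime:

```sh
# Start a throwaway kata container and print error-level guest dmesg lines;
# empty output is what the CI test expects.
docker run --rm --runtime kata-runtime debian dmesg -l err
```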

grahamwhaley commented 5 years ago

@devimc @amshinde @zhsj - is this Issue still valid/open - does it rely on the pending chrony service PR?

devimc commented 5 years ago

@zhsj can you please try to reproduce this again using the latest version of kata (1.7.0) ?

zhsj commented 5 years ago

systemd still complains about machine-id. The log clearly states what systemd expects. I have no idea why the CI never catches that.

And chrony is not started either (checked with a debug console).

kata-collect-data.sh details:

# Meta details Running `kata-collect-data.sh` version `1.7.0 (commit d4f4644312d2acbfed8a150e49831787f8ebdd90)` at `2019-05-22.01:40:06.450237068+0800`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] Semver = "1.7.0" Commit = "d4f4644312d2acbfed8a150e49831787f8ebdd90" OCI = "1.0.1-dev" [Runtime.Config] Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/qemu-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = false UseVSock = false SharedFS = "virtio-9p" [Image] Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.7.0_agent_43bd707543.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.28-39" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-journald-dev-log.socket systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-udevd-kernel.socket systemd.mask=systemd-udevd-control.socket systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service systemd.mask=systemd-coredump@.service" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.7.0-ea2b0bb14ef7906105d9ac808503292096add170" Path = "/opt/kata/libexec/kata-containers/kata-proxy" Debug = false [Shim] Type = "kataShim" Version = "kata-shim version 1.7.0-7f2ab7726d6baf0b82ff2a35bd50c73f6b4a3d3a" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = false [Agent] Type = "kata" Debug = false Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.19.0-4-amd64" Architecture = "amd64" VMContainerCapable = true SupportVSocks = true [Host.Distro] Name = "Debian GNU/Linux" Version = "10" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz" [Netmon] Version = "kata-netmon version 1.7.0" Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = false Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /opt/kata/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. 
# XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. 
# If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. 
#msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. 
# # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. 
# `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. 
Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. 
# enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. 
Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. 
path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. 
# (default: []) experimental=[] ``` Config file `/usr/share/defaults/kata-containers/configuration.toml` not found --- # KSM throttler ## version find: ‘/usr/libexec’: No such file or directory Output of "` --version`": ``` ./bin/kata-collect-data.sh: line 176: --version: command not found ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-05-16T15:45:26.352874446+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "29440" packages: default: - "chrony" - "iptables-bin" - "libudev0-shim" - "systemd" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.0-43bd7075430fd62ff713daa2708489005cd20042" agent-is-init-daemon: "no" dax-nvdimm-header: "true" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs Recent runtime problems found in system journal: ``` time="2019-05-22T01:38:28.603811544+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=23016a7f3dc8c94ce1c267b6a0c68d0a55a43828d3b893726e5ca95549f52652 error="open /run/vc/sbs/23016a7f3dc8c94ce1c267b6a0c68d0a55a43828d3b893726e5ca95549f52652/devices.json: no such file or directory" name=kata-runtime pid=14988 sandbox=23016a7f3dc8c94ce1c267b6a0c68d0a55a43828d3b893726e5ca95549f52652 sandboxid=23016a7f3dc8c94ce1c267b6a0c68d0a55a43828d3b893726e5ca95549f52652 source=virtcontainers subsystem=sandbox ``` ## Proxy logs No recent proxy problems found in system journal. ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal. --- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 18.09.1 API version: 1.39 Go version: go1.11.6 Git commit: 4c52b90 Built: Sat, 18 May 2019 15:23:52 +0700 OS/Arch: linux/amd64 Experimental: false Server: Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.11.6 Git commit: 4c52b90 Built: Sat May 18 08:23:52 2019 OS/Arch: linux/amd64 Experimental: false ``` Output of "`docker info`": ``` Containers: 1 Running: 1 Paused: 0 Stopped: 0 Images: 1 Server Version: 18.09.1 Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: journald Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: kata-runtime runc Default Runtime: runc Init Binary: docker-init containerd version: runc version: 1.0.0~rc6+dfsg1-3 init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662) Security Options: apparmor seccomp Profile: default Kernel Version: 4.19.0-4-amd64 Operating System: Debian GNU/Linux 10 (buster) OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.661GiB Name: debian ID: UTDS:SGWV:BVN6:KYWG:ZN54:CPP2:GC4M:SC22:7475:IJNN:SXXH:QIOI Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Registry Mirrors: https://registry.docker-cn.com/ Live Restore Enabled: false WARNING: No swap limit support ``` Output of "`systemctl show docker`": ``` Type=notify Restart=on-failure NotifyAccess=main RestartUSec=100ms TimeoutStartUSec=infinity TimeoutStopUSec=1min 30s 
RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=24532 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Sun 2019-05-19 18:09:07 CST ExecMainStartTimestampMonotonic=1302894330619 ExecMainExitTimestampMonotonic=0 ExecMainPID=24532 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/sbin/dockerd ; argv[]=/usr/sbin/dockerd -H fd:// $DOCKER_OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/docker.service MemoryCurrent=49815552 CPUUsageNSec=[not set] TasksCurrent=22 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no EnvironmentFiles=/etc/default/docker (ignore_errors=yes) UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no 
PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=docker.service Names=docker.service Requires=system.slice docker.socket sysinit.target Wants=network-online.target ConsistsOf=docker.socket Conflicts=shutdown.target Before=shutdown.target After=basic.target docker.socket systemd-journald.socket network-online.target sysinit.target firewalld.service system.slice TriggeredBy=docker.socket Documentation=https://docs.docker.com Description=Docker Application Container Engine LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/docker.service UnitFileState=disabled UnitFilePreset=enabled StateChangeTimestamp=Sun 2019-05-19 18:09:08 CST StateChangeTimestampMonotonic=1302895510693 InactiveExitTimestamp=Sun 2019-05-19 18:09:07 CST InactiveExitTimestampMonotonic=1302894330921 ActiveEnterTimestamp=Sun 2019-05-19 18:09:08 CST ActiveEnterTimestampMonotonic=1302895510693 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Sun 2019-05-19 18:09:07 CST ConditionTimestampMonotonic=1302894329247 AssertTimestamp=Sun 2019-05-19 18:09:07 CST AssertTimestampMonotonic=1302894329247 Transient=no Perpetual=no StartLimitIntervalUSec=1min StartLimitBurst=3 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=85b206450bb04b1c9dc814ced94c3b18 CollectMode=inactive ``` Have `kubectl` ## Kubernetes Output of "`kubectl version`": ``` Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? 
``` Output of "`kubectl config view`": ``` apiVersion: v1 clusters: [] contexts: [] current-context: "" kind: Config preferences: {} users: [] ``` Output of "`systemctl show kubelet`": ``` Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4915 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=0 LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=30797 LimitNPROCSoft=30797 LimitMEMLOCK=67108864 LimitMEMLOCKSoft=67108864 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=inherit StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 
ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=control-group KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=kubelet.service Names=kubelet.service Description=kubelet.service LoadState=not-found ActiveState=inactive SubState=dead StateChangeTimestampMonotonic=0 InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=no CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 AssertTimestampMonotonic=0 LoadError=org.freedesktop.systemd1.NoSuchUnit "Unit kubelet.service not found." Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 CollectMode=inactive ``` No `crio` Have `containerd` ## containerd Output of "`containerd --version`": ``` containerd github.com/containerd/containerd 1.2.4~ds1-1 ``` Output of "`systemctl show containerd`": ``` Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=703 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Sat 2019-05-04 16:14:18 CST ExecMainStartTimestampMonotonic=4874256 ExecMainExitTimestampMonotonic=0 ExecMainPID=703 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=240029696 CPUUsageNSec=[not set] TasksCurrent=40 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4915 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity 
LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=system.slice sysinit.target WantedBy=multi-user.target Conflicts=shutdown.target Before=shutdown.target multi-user.target After=sysinit.target system.slice basic.target systemd-journald.socket network.target Documentation=https://containerd.io man:containerd(1) Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Sat 2019-05-04 16:14:18 CST StateChangeTimestampMonotonic=4874299 InactiveExitTimestamp=Sat 2019-05-04 16:14:18 CST InactiveExitTimestampMonotonic=4853433 ActiveEnterTimestamp=Sat 2019-05-04 16:14:18 CST ActiveEnterTimestampMonotonic=4874299 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Sat 2019-05-04 16:14:18 CST ConditionTimestampMonotonic=4852476 AssertTimestamp=Sat 2019-05-04 16:14:18 CST AssertTimestampMonotonic=4852477 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=d51ff1aa0bda4bf18e3b3cfb4ef6079e CollectMode=inactive ``` Output of "`cat 
/etc/containerd/config.toml`": ``` cat: /etc/containerd/config.toml: No such file or directory ``` --- # Packages Have `dpkg` Output of "`dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`": ``` ii qemu-efi-aarch64 0~20181115.85588389-3 all UEFI firmware for 64-bit ARM virtual machines ii qemu-system-arm 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (arm) ii qemu-system-common 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (common files) ii qemu-system-data 1:3.1+dfsg-7 all QEMU full system emulation (data files) ii qemu-system-x86 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (x86) ii qemu-user 1:3.1+dfsg-7 amd64 QEMU user mode emulation binaries ii qemu-utils 1:3.1+dfsg-7 amd64 QEMU utilities ``` No `rpm` ---

[    0.384379] Run /usr/lib/systemd/systemd as init process
[    0.392076] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT -SELINUX +IMA -APPARMOR -SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=legacy)
[    0.392088] systemd[1]: Detected virtualization kvm.
[    0.392097] systemd[1]: Detected architecture x86-64.
[    0.392101] systemd[1]: Running with unpopulated /etc.
[    0.393250] systemd[1]: System cannot boot: Missing /etc/machine-id and /etc is mounted read-only.
[    0.393411] systemd[1]: Booting up is supported only when:
[    0.393462] systemd[1]: 1) /etc/machine-id exists and is populated.
[    0.393526] systemd[1]: 2) /etc/machine-id exists and is empty.
[    0.393580] systemd[1]: 3) /etc/machine-id is missing and /etc is writable.
[    0.426299] systemd[1]: Reached target Swap.
[    0.428492] systemd[75]: systemd-sysctl.service: Failed to connect stdout to the journal socket, ignoring: No such file or directory
[    0.431393] systemd-sysctl[75]: Couldn't write '16' to 'kernel/sysrq', ignoring: No such file or directory
[    0.462687] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    0.493989] pci 0000:00:02.0: PCI bridge to [bus 01]
[    0.494000] pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
[    0.494748] pci 0000:00:02.0:   bridge window [mem 0xfe400000-0xfe5fffff]
[    0.495239] pci 0000:00:02.0:   bridge window [mem 0xfe800000-0xfe9fffff 64bit pref]
[    1.388327] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
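
Given systemd's condition 2 above ("/etc/machine-id exists and is empty"), one plausible mitigation is to ship an empty `/etc/machine-id` inside the guest image, so systemd can fall back to a transient machine ID at boot instead of logging this error. A minimal sketch, assuming an osbuilder-style build where `ROOTFS_DIR` points at the rootfs staging directory (the variable name and the step itself are assumptions, not a confirmed fix for this issue):

```bash
# Sketch only: stage an empty /etc/machine-id in the guest rootfs.
# With the file present but empty, systemd boots under its condition 2
# and uses a transient machine ID, even though /etc is read-only.
# ROOTFS_DIR is assumed to be the rootfs staging directory used when
# building the guest image (as in the osbuilder scripts).
touch "${ROOTFS_DIR}/etc/machine-id"
```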
devimc commented 5 years ago

@zhsj thanks, would you mind clearing the journal? Also, can you run `dmesg -l err` again in a container?

zhsj commented 5 years ago

> @zhsj thanks, would you mind clearing the journal? Also, can you run `dmesg -l err` again in a container?

The journal on the host? What service log do you need?

devimc commented 5 years ago

@zhsj

> The journal on the host?

yes

> What service log do you need?

Please run a container and then run `kata-collect-data.sh` again.
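
For reference, clearing the host journal before re-collecting usually amounts to rotating the active files and then vacuuming the archive; a sketch with standard `journalctl` options (the one-second retention is just an aggressive example):

```bash
# Rotate the active journal files, then delete archived entries older
# than one second, leaving an effectively empty host journal before
# reproducing the issue and re-running kata-collect-data.sh.
sudo journalctl --rotate
sudo journalctl --vacuum-time=1s
```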

zhsj commented 5 years ago

zsj@debian ~ $ docker run --rm -it --runtime kata-runtime debian dmesg -l err
[    0.390963] systemd[1]: System cannot boot: Missing /etc/machine-id and /etc is mounted read-only.
[    0.391088] systemd[1]: Booting up is supported only when:
[    0.391126] systemd[1]: 1) /etc/machine-id exists and is populated.
[    0.391173] systemd[1]: 2) /etc/machine-id exists and is empty.
[    0.391220] systemd[1]: 3) /etc/machine-id is missing and /etc is writable.
zsj@debian ~ $ sudo env PATH=$PATH:/opt/kata/bin /opt/kata/bin/kata-collect-data.sh

Show kata-collect-data.sh details

# Meta details Running `kata-collect-data.sh` version `1.7.0 (commit d4f4644312d2acbfed8a150e49831787f8ebdd90)` at `2019-05-23.00:22:21.224301744+0800`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] Semver = "1.7.0" Commit = "d4f4644312d2acbfed8a150e49831787f8ebdd90" OCI = "1.0.1-dev" [Runtime.Config] Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/qemu-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = false UseVSock = false SharedFS = "virtio-9p" [Image] Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.7.0_agent_43bd707543.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.28-39" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-journald-dev-log.socket systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-udevd-kernel.socket systemd.mask=systemd-udevd-control.socket systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service systemd.mask=systemd-coredump@.service" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.7.0-ea2b0bb14ef7906105d9ac808503292096add170" Path = "/opt/kata/libexec/kata-containers/kata-proxy" Debug = false [Shim] Type = "kataShim" Version = "kata-shim version 1.7.0-7f2ab7726d6baf0b82ff2a35bd50c73f6b4a3d3a" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = false [Agent] Type = "kata" Debug = false Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.19.0-4-amd64" Architecture = "amd64" VMContainerCapable = true SupportVSocks = true [Host.Distro] Name = "Debian GNU/Linux" Version = "10" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz" [Netmon] Version = "kata-netmon version 1.7.0" Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = false Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /opt/kata/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. 
# XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. 
# If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. 
#msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. 
# # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. 
# `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production. # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persistent storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ```
Config file `/usr/share/defaults/kata-containers/configuration.toml` not found --- # KSM throttler ## version find: ‘/usr/libexec’: No such file or directory Output of "` --version`": ``` /opt/kata/bin/kata-collect-data.sh: line 176: --version: command not found ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-05-16T15:45:26.352874446+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "29440" packages: default: - "chrony" - "iptables-bin" - "libudev0-shim" - "systemd" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.0-43bd7075430fd62ff713daa2708489005cd20042" agent-is-init-daemon: "no" dax-nvdimm-header: "true" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs Recent runtime problems found in system journal: ``` time="2019-05-23T00:22:07.6799983+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 error="open /run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/devices.json: no such file or directory" name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 sandboxid=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox ``` ## Proxy logs Recent proxy problems found in system journal: ``` time="2019-05-23T00:22:08.903751772+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock: use of closed network connection" name=kata-proxy pid=4295 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=proxy ``` ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal.
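Two notes on the configuration dumped above may be useful context here. First, the commented-out debug switches it documents are the usual way to surface guest-side boot messages (such as the systemd errors this issue is about) in the host journal. A minimal sketch of those overrides, assuming the stock config layout shown above; these lines are illustrative and not part of the collected output:

```toml
# Illustrative only: the commented-out defaults from the dump above,
# uncommented. Per the configuration's own comments, hypervisor debug
# output reaches the logs only when proxy debug is also enabled.
[hypervisor.qemu]
enable_debug = true

[proxy.kata]
enable_debug = true

[runtime]
# Extra kata-runtime messages are sent to the system journal.
enable_debug = true
```

With these set, the guest console output should appear alongside the kata-proxy entries in the journal captured below.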
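Second, the `[factory]` comments in the same dump describe two distinct VM pre-creation mechanisms: templating (new VMs are cloned from a read-only template, which requires `initrd=` rather than `image=`) and VMCache (a server that pre-creates VMs and hands them to kata-runtime over a Unix socket using gRPC, per `protocols/cache/cache.proto`). A hypothetical sketch of each; the cache count is an example value, and since the two mechanisms are documented separately, the safe assumption is to enable only one at a time:

```toml
[factory]
# Templating: new VMs share the template's kernel, initrd and agent memory,
# mapped read-only. Requires "initrd=" to be set ("image=" is not supported).
enable_template = true
template_path = "/run/vc/vm/template"

# VMCache (alternative): kata-runtime requests a pre-created VM from the
# cache server over this socket instead of booting one from scratch.
#vm_cache_number = 3
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
```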
--- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 18.09.1 API version: 1.39 Go version: go1.11.6 Git commit: 4c52b90 Built: Sat, 18 May 2019 15:23:52 +0700 OS/Arch: linux/amd64 Experimental: false Server: Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.11.6 Git commit: 4c52b90 Built: Sat May 18 08:23:52 2019 OS/Arch: linux/amd64 Experimental: false ``` Output of "`docker info`": ``` Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 3 Server Version: 18.09.1 Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: journald Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: runc kata-runtime Default Runtime: runc Init Binary: docker-init containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce runc version: 1.0.0~rc6+dfsg1-3 init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662) Security Options: apparmor seccomp Profile: default Kernel Version: 4.19.0-4-amd64 Operating System: Debian GNU/Linux 10 (buster) OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.661GiB Name: debian ID: UTDS:SGWV:BVN6:KYWG:ZN54:CPP2:GC4M:SC22:7475:IJNN:SXXH:QIOI Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Registry Mirrors: https://dockerhub.azk8s.cn/ Live Restore Enabled: false WARNING: No swap limit support ``` Output of "`systemctl show docker`": ``` Type=notify Restart=on-failure NotifyAccess=main RestartUSec=100ms TimeoutStartUSec=infinity TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=3742 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Thu 2019-05-23 00:20:36 CST ExecMainStartTimestampMonotonic=1584383134905 ExecMainExitTimestampMonotonic=0 ExecMainPID=3742 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/sbin/dockerd ; argv[]=/usr/sbin/dockerd -H fd:// $DOCKER_OPTS ; ignore_errors=no ; start_time=[Thu 2019-05-23 00:20:36 CST] ; stop_time=[n/a] ; pid=3742 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/docker.service MemoryCurrent=172777472 CPUUsageNSec=[not set] TasksCurrent=36 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no EnvironmentFiles=/etc/default/docker (ignore_errors=yes) 
UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=docker.service Names=docker.service Requires=system.slice docker.socket sysinit.target Wants=network-online.target ConsistsOf=docker.socket Conflicts=shutdown.target Before=shutdown.target After=basic.target docker.socket systemd-journald.socket network-online.target sysinit.target firewalld.service system.slice TriggeredBy=docker.socket Documentation=https://docs.docker.com Description=Docker Application Container Engine LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/docker.service UnitFileState=disabled UnitFilePreset=enabled StateChangeTimestamp=Thu 2019-05-23 00:20:37 CST StateChangeTimestampMonotonic=1584384202619 InactiveExitTimestamp=Thu 2019-05-23 00:20:36 CST InactiveExitTimestampMonotonic=1584383135638 ActiveEnterTimestamp=Thu 2019-05-23 00:20:37 CST ActiveEnterTimestampMonotonic=1584384202619 ActiveExitTimestamp=Thu 2019-05-23 00:20:35 CST ActiveExitTimestampMonotonic=1584382100974 InactiveEnterTimestamp=Thu 2019-05-23 00:20:36 CST InactiveEnterTimestampMonotonic=1584383112568 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no 
AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Thu 2019-05-23 00:20:36 CST ConditionTimestampMonotonic=1584383131210 AssertTimestamp=Thu 2019-05-23 00:20:36 CST AssertTimestampMonotonic=1584383131211 Transient=no Perpetual=no StartLimitIntervalUSec=1min StartLimitBurst=3 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=d79b20dcbdde45af97033d17ff93b9eb CollectMode=inactive ``` Have `kubectl` ## Kubernetes Output of "`kubectl version`": ``` Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? ``` Output of "`kubectl config view`": ``` apiVersion: v1 clusters: [] contexts: [] current-context: "" kind: Config preferences: {} users: [] ``` Output of "`systemctl show kubelet`": ``` Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4915 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=0 LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=30797 LimitNPROCSoft=30797 LimitMEMLOCK=67108864 LimitMEMLOCKSoft=67108864 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=inherit StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=control-group KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=kubelet.service Names=kubelet.service Description=kubelet.service LoadState=not-found ActiveState=inactive SubState=dead StateChangeTimestampMonotonic=0 InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=no CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 AssertTimestampMonotonic=0 LoadError=org.freedesktop.systemd1.NoSuchUnit "Unit kubelet.service not found." 
Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 CollectMode=inactive ``` No `crio` Have `containerd` ## containerd Output of "`containerd --version`": ``` containerd github.com/containerd/containerd 1.2.4~ds1-1 ``` Output of "`systemctl show containerd`": ``` Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Sat 2019-05-04 16:14:18 CST ExecMainStartTimestampMonotonic=4874256 ExecMainExitTimestamp=Thu 2019-05-23 00:06:38 CST ExecMainExitTimestampMonotonic=1583545438473 ExecMainPID=703 ExecMainCode=1 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Sat 2019-05-04 16:14:18 CST] ; stop_time=[Thu 2019-05-23 00:06:38 CST] ; pid=703 ; code=exited ; status=0 } Slice=system.slice MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4915 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=30797 LimitSIGPENDINGSoft=30797 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock 
cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=system.slice sysinit.target WantedBy=multi-user.target Conflicts=shutdown.target Before=shutdown.target multi-user.target After=sysinit.target system.slice basic.target systemd-journald.socket network.target Documentation=https://containerd.io man:containerd(1) Description=containerd container runtime LoadState=loaded ActiveState=inactive SubState=dead FragmentPath=/lib/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Thu 2019-05-23 00:06:38 CST StateChangeTimestampMonotonic=1583545438508 InactiveExitTimestamp=Sat 2019-05-04 16:14:18 CST InactiveExitTimestampMonotonic=4853433 ActiveEnterTimestamp=Sat 2019-05-04 16:14:18 CST ActiveEnterTimestampMonotonic=4874299 ActiveExitTimestamp=Thu 2019-05-23 00:06:38 CST ActiveExitTimestampMonotonic=1583545435347 InactiveEnterTimestamp=Thu 2019-05-23 00:06:38 CST InactiveEnterTimestampMonotonic=1583545438508 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Sat 2019-05-04 16:14:18 CST ConditionTimestampMonotonic=4852476 AssertTimestamp=Sat 2019-05-04 16:14:18 CST AssertTimestampMonotonic=4852477 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=d51ff1aa0bda4bf18e3b3cfb4ef6079e CollectMode=inactive ``` Output of "`cat /etc/containerd/config.toml`": ``` cat: /etc/containerd/config.toml: No such file or directory ``` --- # Packages Have `dpkg` Output of "`dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`": ``` ii qemu-efi-aarch64 0~20181115.85588389-3 all UEFI firmware for 64-bit ARM virtual machines ii qemu-system-arm 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (arm) ii qemu-system-common 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (common files) ii qemu-system-data 1:3.1+dfsg-7 all QEMU full system emulation (data files) ii qemu-system-x86 1:3.1+dfsg-7 amd64 QEMU full system emulation binaries (x86) ii qemu-user 1:3.1+dfsg-7 amd64 QEMU user mode emulation binaries ii qemu-utils 
1:3.1+dfsg-7 amd64 QEMU utilities ``` No `rpm` ---

Output of `journalctl -a` on the host:

``` zsj@debian ~ $ sudo journalctl -a|cat -- Logs begin at Thu 2019-05-23 00:22:01 CST, end at Thu 2019-05-23 00:25:41 CST. -- May 23 00:22:01 debian systemd[1]: Stopping Journal Service... May 23 00:22:01 debian systemd-journald[3969]: Received SIGTERM from PID 1 (systemd). May 23 00:22:01 debian systemd[1]: systemd-journald.service: Succeeded. May 23 00:22:01 debian systemd[1]: Stopped Journal Service. May 23 00:22:01 debian systemd[1]: Starting Journal Service... May 23 00:22:01 debian systemd-journald[4212]: Journal started May 23 00:22:01 debian systemd-journald[4212]: Runtime journal (/run/log/journal/594f0bcb944c411bab79efa43b63a605) is 8.0M, max 78.4M, 70.4M free. May 23 00:22:01 debian sudo[4209]: pam_unix(sudo:session): session closed for user root May 23 00:22:01 debian systemd[1]: Started Journal Service. May 23 00:22:01 debian systemd[1]: Starting Flush Journal to Persistent Storage... May 23 00:22:01 debian systemd[1]: Started Flush Journal to Persistent Storage. May 23 00:22:07 debian systemd[938]: var-lib-docker-overlay2-32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28\x2dinit-merged.mount: Succeeded. May 23 00:22:07 debian systemd[1]: var-lib-docker-overlay2-32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28\x2dinit-merged.mount: Succeeded. May 23 00:22:07 debian systemd[1]: var-lib-docker-overlay2-32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28-merged.mount: Succeeded. May 23 00:22:07 debian NetworkManager[644]: [1558542127.3947] manager: (veth6eb4c04): new Veth device (/org/freedesktop/NetworkManager/Devices/39) May 23 00:22:07 debian systemd-udevd[4240]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. May 23 00:22:07 debian systemd-udevd[4240]: Using default interface naming scheme 'v240'. May 23 00:22:07 debian systemd-udevd[4240]: Could not generate persistent MAC address for veth6eb4c04: No such file or directory May 23 00:22:07 debian systemd-udevd[4241]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. May 23 00:22:07 debian systemd-udevd[4241]: Using default interface naming scheme 'v240'. 
May 23 00:22:07 debian systemd-udevd[4241]: Could not generate persistent MAC address for veth1507ba6: No such file or directory May 23 00:22:07 debian kernel: docker0: port 1(veth1507ba6) entered blocking state May 23 00:22:07 debian kernel: docker0: port 1(veth1507ba6) entered disabled state May 23 00:22:07 debian kernel: device veth1507ba6 entered promiscuous mode May 23 00:22:07 debian kernel: IPv6: ADDRCONF(NETDEV_UP): veth1507ba6: link is not ready May 23 00:22:07 debian NetworkManager[644]: [1558542127.4175] manager: (veth1507ba6): new Veth device (/org/freedesktop/NetworkManager/Devices/40) May 23 00:22:07 debian dockerd[3742]: time="2019-05-23T00:22:07.480617371+08:00" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/shim.sock debug=false pid=4251 May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.511267726+08:00" level=info msg="loaded configuration" arch=amd64 command=create file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4261 source=katautils May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.511457083+08:00" level=info arch=amd64 arguments="\"create --bundle /var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 --pid-file /var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/init.pid --console-socket /tmp/pty415815916/pty.sock c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=create commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4261 source=runtime version=1.7.0 May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.513072174+08:00" level=info msg="shm-size detected: 67108864" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=oci May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.515672267+08:00" level=info msg="create netns" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime netns=/var/run/netns/cni-1f94bb17-4697-13f4-91c9-418def85537e pid=4261 source=katautils May 23 00:22:07 debian kernel: eth0: renamed from veth6eb4c04 May 23 00:22:07 debian NetworkManager[644]: [1558542127.6495] device (veth1507ba6): carrier: link connected May 23 00:22:07 debian NetworkManager[644]: [1558542127.6501] device (docker0): carrier: link connected May 23 00:22:07 debian kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1507ba6: link becomes ready May 23 00:22:07 debian kernel: docker0: port 1(veth1507ba6) entered blocking state May 23 00:22:07 debian kernel: docker0: port 1(veth1507ba6) entered forwarding state May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.6799983+08:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 error="open /run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/devices.json: no such file or directory" name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 sandboxid=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:07 debian 
kata-runtime[4261]: time="2019-05-23T00:22:07.680362202+08:00" level=info msg="adding volume" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qemu volume-type=virtio-9p May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.681225106+08:00" level=info msg="Endpoints found after scan" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoints="[0xc000334500]" name=kata-runtime pid=4261 source=virtcontainers subsystem=network May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.682271414+08:00" level=info msg="Attaching endpoint" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint-type=virtual hotplug=false name=kata-runtime pid=4261 source=virtcontainers subsystem=network May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.686563479+08:00" level=info msg="Starting VM" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.686957599+08:00" level=info msg="Adding extra file [0xc0000b0bd0 0xc0000b0bb8]" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:07 debian kernel: eth0: Caught tx_queue_len zero misconfig May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.687103598+08:00" level=info msg="launching /opt/kata/bin/qemu-system-x86_64 with: [-name sandbox-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 -uuid 7eb43345-e672-4616-bb91-859feeead781 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=8869M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=false,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.7.0_agent_43bd707543.img,size=134217728 -device virtio-scsi-pci,id=scsi0,disable-modern=false,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng,rng=rng0,romfile= -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/kata.sock,server,nowait -device virtio-9p-pci,disable-modern=false,fsdev=extra-9p-kataShared,mount_tag=kataShared,romfile= -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077,security_model=none -netdev tap,id=network-0,vhost=on,vhostfds=3,fds=4 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,disable-modern=false,mq=on,vectors=4,romfile= -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize 
-kernel /opt/kata/share/kata-containers/vmlinuz-4.19.28-39 -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=4 agent.use_vsock=false init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-journald-dev-log.socket systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-udevd-kernel.socket systemd.mask=systemd-udevd-control.socket systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service systemd.mask=systemd-coredump@.service -pidfile /run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/pid -smp 1,cores=1,threads=1,sockets=4,maxcpus=4]" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.76416569+08:00" level=info msg="{\"QMP\": {\"version\": {\"qemu\": {\"micro\": 2, \"minor\": 11, \"major\": 2}, \"package\": \"(kata-static)\"}, \"capabilities\": []}}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.764459952+08:00" level=info msg="QMP details" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 qmp-capabilities= qmp-major-version=2 qmp-micro-version=2 qmp-minor-version=11 source=virtcontainers subsystem=qemu May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.764560575+08:00" level=info msg="{\"execute\":\"qmp_capabilities\"}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.765325918+08:00" level=info msg="{\"return\": {}}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.765730691+08:00" level=info msg="VM started" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.766400269+08:00" level=info msg="proxy started" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 proxy-pid=4295 
proxy-url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=kata_agent May 23 00:22:07 debian kata-runtime[4261]: time="2019-05-23T00:22:07.766460765+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.388905989+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.391356783+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.39315258+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.400201804+08:00" level=info msg="Agent started in the sandbox" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.400289937+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.401901263+08:00" level=info msg="{\"QMP\": {\"version\": {\"qemu\": {\"micro\": 2, \"minor\": 11, \"major\": 2}, \"package\": \"(kata-static)\"}, \"capabilities\": []}}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.402059081+08:00" level=info msg="{\"execute\":\"qmp_capabilities\"}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.402633256+08:00" level=info msg="{\"return\": {}}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:08 debian 
kata-runtime[4261]: time="2019-05-23T00:22:08.402721297+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.404959745+08:00" level=info msg="device details" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 device-major=0 device-minor=64 mount-point=/var/lib/docker/overlay2/32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28/merged name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=container May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.413420076+08:00" level=info msg="Using sandbox shm" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 shm-size=67108864 source=virtcontainers subsystem=kata_agent May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.413832176+08:00" level=info msg="New client" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.490378958+08:00" level=info msg="{\"execute\":\"query-cpus\"}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.490930137+08:00" level=info msg="{\"return\": [{\"arch\": \"x86\", \"current\": true, \"props\": {\"core-id\": 0, \"thread-id\": 0, \"node-id\": 0, \"socket-id\": 0}, \"CPU\": 0, \"qom_path\": \"/machine/unattached/device[0]\", \"pc\": -2122605150, \"halted\": true, \"thread_id\": 4293}]}" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4261]: time="2019-05-23T00:22:08.492598851+08:00" level=info msg="release sandbox" arch=amd64 command=create container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4261 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4314]: time="2019-05-23T00:22:08.527664259+08:00" level=info msg="loaded configuration" arch=amd64 command=state file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4314 source=katautils May 23 00:22:08 debian kata-runtime[4314]: time="2019-05-23T00:22:08.527837625+08:00" level=info arch=amd64 arguments="\"state c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=state commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4314 source=runtime version=1.7.0 May 23 00:22:08 debian kata-runtime[4314]: time="2019-05-23T00:22:08.527935374+08:00" level=info msg="fetch sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 
name=kata-runtime pid=4314 source=virtcontainers May 23 00:22:08 debian kata-runtime[4314]: time="2019-05-23T00:22:08.529938807+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc0000b1028] [0xc0000b1030]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0005d2240 0 0xc00003dc00 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4314 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4314]: time="2019-05-23T00:22:08.530356729+08:00" level=info msg="release sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4314 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.56021406+08:00" level=info msg="loaded configuration" arch=amd64 command=start file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4331 source=katautils May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.560371808+08:00" level=info arch=amd64 arguments="\"start c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=start commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4331 source=runtime version=1.7.0 May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.560482572+08:00" level=info msg="fetch sandbox" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 source=virtcontainers May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.562416971+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc000011038] [0xc000011040]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc00028a240 0 0xc00003c6c0 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4331 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.562857957+08:00" level=info msg="release sandbox" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.563884604+08:00" level=info msg="fetch sandbox" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers May 23 00:22:08 debian kata-runtime[4331]: 
time="2019-05-23T00:22:08.565199837+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc000011738] [0xc000011740]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc00028a540 0 0xc000218240 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.565490583+08:00" level=info msg="New client" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.591797122+08:00" level=info msg="Sandbox is started" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4331]: time="2019-05-23T00:22:08.592701027+08:00" level=info msg="release sandbox" arch=amd64 command=start container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4331 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4341]: time="2019-05-23T00:22:08.625153173+08:00" level=info msg="loaded configuration" arch=amd64 command=state file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4341 source=katautils May 23 00:22:08 debian kata-runtime[4341]: time="2019-05-23T00:22:08.62531849+08:00" level=info arch=amd64 arguments="\"state c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=state commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4341 source=runtime version=1.7.0 May 23 00:22:08 debian kata-runtime[4341]: time="2019-05-23T00:22:08.625421391+08:00" level=info msg="fetch sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4341 source=virtcontainers May 23 00:22:08 debian kata-runtime[4341]: time="2019-05-23T00:22:08.627393098+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc0000b1218] [0xc0000b1220]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0002e8240 0 0xc0003846e0 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime 
pid=4341 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4341]: time="2019-05-23T00:22:08.627809507+08:00" level=info msg="release sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4341 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian c1d6f3378082[3742]: [ 0.390963] systemd[1]: System cannot boot: Missing /etc/machine-id and /etc is mounted read-only. May 23 00:22:08 debian c1d6f3378082[3742]: [ 0.391088] systemd[1]: Booting up is supported only when: May 23 00:22:08 debian c1d6f3378082[3742]: [ 0.391126] systemd[1]: 1) /etc/machine-id exists and is populated. May 23 00:22:08 debian c1d6f3378082[3742]: [ 0.391173] systemd[1]: 2) /etc/machine-id exists and is empty. May 23 00:22:08 debian c1d6f3378082[3742]: [ 0.391220] systemd[1]: 3) /etc/machine-id is missing and /etc is writable. May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.720385195+08:00" level=info msg="loaded configuration" arch=amd64 command=state file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4356 source=katautils May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.72053545+08:00" level=info arch=amd64 arguments="\"state c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=state commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4356 source=runtime version=1.7.0 May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.720644175+08:00" level=info msg="fetch sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4356 source=virtcontainers May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.722622897+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc0000109b0] [0xc0000109b8]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0003fa180 0 0xc000384de0 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4356 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.723018937+08:00" level=info msg="container isn't running" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4311 source=virtcontainers state=running May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.723058266+08:00" level=info msg="New client" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4356 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.724317364+08:00" level=info msg="New client" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4356 source=virtcontainers 
subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.725650617+08:00" level=info msg="New client" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4356 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian systemd[1]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2d3f3e37033774d727\x2dresolv.conf.mount: Succeeded. May 23 00:22:08 debian systemd[938]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2d3f3e37033774d727\x2dresolv.conf.mount: Succeeded. May 23 00:22:08 debian systemd[938]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2dbf7444f0d17d3a91\x2dhostname.mount: Succeeded. May 23 00:22:08 debian systemd[1]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2dbf7444f0d17d3a91\x2dhostname.mount: Succeeded. May 23 00:22:08 debian systemd[1]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2d9c7acabf209753a4\x2dhosts.mount: Succeeded. May 23 00:22:08 debian systemd[938]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\x2d9c7acabf209753a4\x2dhosts.mount: Succeeded. May 23 00:22:08 debian systemd[1]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-rootfs.mount: Succeeded. May 23 00:22:08 debian systemd[938]: run-kata\x2dcontainers-shared-sandboxes-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-rootfs.mount: Succeeded. 
May 23 00:22:08 debian kata-runtime[4356]: time="2019-05-23T00:22:08.83186862+08:00" level=info msg="release sandbox" arch=amd64 command=state container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4356 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.892560298+08:00" level=info msg="loaded configuration" arch=amd64 command=delete file=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml format=TOML name=kata-runtime pid=4371 source=katautils May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.892707809+08:00" level=info arch=amd64 arguments="\"delete c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077\"" command=delete commit=d4f4644312d2acbfed8a150e49831787f8ebdd90 name=kata-runtime pid=4371 source=runtime version=1.7.0 May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.892818917+08:00" level=info msg="fetch sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 source=virtcontainers May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.894909362+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc000010b30] [0xc000010b38]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0002ac240 0 0xc0004b1c60 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4371 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.895353606+08:00" level=info msg="release sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.896434035+08:00" level=info msg="fetch sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.897751209+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc000011230] [0xc000011238]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0002ac540 0 0xc0003f9720 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=network May 23 00:22:08 debian 
kata-runtime[4371]: time="2019-05-23T00:22:08.898065421+08:00" level=info msg="release sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.898131507+08:00" level=info msg="fetch sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.900121337+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc0000112c8] [0xc0000112d0]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc0002ac840 0 0xc000384260 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.900690614+08:00" level=info msg="Container already stopped" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=container May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.900768651+08:00" level=info msg="Stopping sandbox in the VM" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.900840533+08:00" level=info msg="New client" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=kata_agent url="unix:///run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock" May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.903716872+08:00" level=info msg="Stopping VM" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.903803092+08:00" level=info msg="Stopping Sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qemu May 23 00:22:08 debian kata-proxy[4295]: time="2019-05-23T00:22:08.903751772+08:00" level=fatal msg="channel error" 
error="accept unix /run/vc/sbs/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077/proxy.sock: use of closed network connection" name=kata-proxy pid=4295 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=proxy May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.90412051+08:00" level=info msg="{\"QMP\": {\"version\": {\"qemu\": {\"micro\": 2, \"minor\": 11, \"major\": 2}, \"package\": \"(kata-static)\"}, \"capabilities\": []}}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.90435262+08:00" level=info msg="{\"execute\":\"qmp_capabilities\"}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.905020606+08:00" level=info msg="{\"return\": {}}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.905127509+08:00" level=info msg="{\"execute\":\"quit\"}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.905511754+08:00" level=info msg="{\"return\": {}}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.90557048+08:00" level=info msg="{\"timestamp\": {\"seconds\": 1558542128, \"microseconds\": 905504}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false}}" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qmp May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.906726699+08:00" level=info msg="cleanup vm path" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 dir=/run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 link=/run/vc/vm/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=qemu May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.906895087+08:00" level=info msg="Detaching endpoint" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint-type=virtual hotunplug=false name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers 
subsystem=network May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.950206783+08:00" level=info msg="Network namespace \"/var/run/netns/cni-1f94bb17-4697-13f4-91c9-418def85537e\" deleted" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=network May 23 00:22:08 debian kernel: docker0: port 1(veth1507ba6) entered disabled state May 23 00:22:08 debian systemd[1]: run-netns-cni\x2d1f94bb17\x2d4697\x2d13f4\x2d91c9\x2d418def85537e.mount: Succeeded. May 23 00:22:08 debian systemd[938]: run-netns-cni\x2d1f94bb17\x2d4697\x2d13f4\x2d91c9\x2d418def85537e.mount: Succeeded. May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.96926925+08:00" level=info msg="release sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.969418875+08:00" level=info msg="fetch sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.971784356+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 endpoint="&{{{21a8c2c9-5e0d-487c-8982-f7d04616fe67 br0_kata {tap0_kata 02:42:ac:11:00:02 []} [0xc000010150] [0xc000010158]} {eth0 6e:c0:6d:d2:12:e4 []} 4} {{{35 1500 0 eth0 02:42:ac:11:00:02 up|broadcast|multicast 69699 36 0 0xc000218180 0 0xc000560ec0 ether up 0 0 0} veth} [172.17.0.2/16 eth0] [{Ifindex: 35 Dst: Src: Gw: 172.17.0.1 Flags: [] Table: 254} {Ifindex: 35 Dst: 172.17.0.0/16 Src: 172.17.0.2 Gw: Flags: [] Table: 254}] {[] [] []}} virtual }" endpoint-type=virtual name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=network May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.975798137+08:00" level=info msg="cleanup agent" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime path=/run/kata-containers/shared/sandboxes/c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=kata_agent May 23 00:22:08 debian kata-runtime[4371]: time="2019-05-23T00:22:08.976327414+08:00" level=info msg="release sandbox" arch=amd64 command=delete container=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 name=kata-runtime pid=4371 sandbox=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 source=virtcontainers subsystem=sandbox May 23 00:22:08 debian dockerd[3742]: time="2019-05-23T00:22:08.979212842+08:00" level=info msg="shim reaped" id=c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077 May 23 00:22:08 debian dockerd[3742]: time="2019-05-23T00:22:08.989140442+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 00:22:08 debian kernel: 
veth6eb4c04: renamed from eth0 May 23 00:22:09 debian systemd-udevd[4240]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. May 23 00:22:09 debian NetworkManager[644]: [1558542129.0417] manager: (veth6eb4c04): new Veth device (/org/freedesktop/NetworkManager/Devices/41) May 23 00:22:09 debian kernel: docker0: port 1(veth1507ba6) entered disabled state May 23 00:22:09 debian kernel: device veth1507ba6 left promiscuous mode May 23 00:22:09 debian kernel: docker0: port 1(veth1507ba6) entered disabled state May 23 00:22:09 debian NetworkManager[644]: [1558542129.0849] device (veth1507ba6): released from master device docker0 May 23 00:22:09 debian systemd[1]: run-docker-netns-c87518a96bf0.mount: Succeeded. May 23 00:22:09 debian systemd[938]: run-docker-netns-c87518a96bf0.mount: Succeeded. May 23 00:22:09 debian systemd[938]: var-lib-docker-containers-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-mounts-shm.mount: Succeeded. May 23 00:22:09 debian systemd[1]: var-lib-docker-containers-c1d6f3378082a8d66561fee848f21b6b825d987ce1b4552f763681eb9ca7f077-mounts-shm.mount: Succeeded. May 23 00:22:09 debian systemd[938]: var-lib-docker-overlay2-32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28-merged.mount: Succeeded. May 23 00:22:09 debian systemd[1]: var-lib-docker-overlay2-32ef262befbc3bc9b125aa86d6ba9929b8a3564552a82f93ff51c2accd57fc28-merged.mount: Succeeded. May 23 00:22:21 debian sudo[4395]: zsj : TTY=pts/2 ; PWD=/home/zsj ; USER=root ; COMMAND=/usr/bin/env PATH=/home/zsj/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/kata/bin /opt/kata/bin/kata-collect-data.sh May 23 00:22:21 debian sudo[4395]: pam_unix(sudo:session): session opened for user root by zsj(uid=0) May 23 00:22:21 debian kernel: loop0: p1 May 23 00:22:21 debian kernel: EXT4-fs (loop0p1): mounted filesystem without journal. Opts: noload May 23 00:22:21 debian systemd[1]: tmp-tmp.K3ypdx47ud.mount: Succeeded. May 23 00:22:21 debian systemd[938]: tmp-tmp.K3ypdx47ud.mount: Succeeded. May 23 00:22:21 debian kernel: __loop_clr_fd: partition scan of loop0 failed (rc=-22) May 23 00:22:22 debian sudo[4395]: pam_unix(sudo:session): session closed for user root May 23 00:25:01 debian CRON[4681]: pam_unix(cron:session): session opened for user root by (uid=0) May 23 00:25:01 debian CRON[4682]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1) May 23 00:25:01 debian CRON[4681]: pam_unix(cron:session): session closed for user root May 23 00:25:41 debian sudo[4688]: zsj : TTY=pts/2 ; PWD=/home/zsj ; USER=root ; COMMAND=/bin/journalctl -a May 23 00:25:41 debian sudo[4688]: pam_unix(sudo:session): session opened for user root by zsj(uid=0) ```

devimc commented 5 years ago

Thanks @zhsj, now I can reproduce it using the kata snap 1.7.0. The fix is here: https://github.com/kata-containers/osbuilder/pull/297
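
For context, the guest boot failure matches systemd's documented behavior (see machine-id(5)): with `/etc` mounted read-only, systemd will only boot if `/etc/machine-id` exists, and an empty file is enough for it to generate a transient machine ID at boot. A minimal sketch of that kind of rootfs fix, assuming a hypothetical `ROOTFS_DIR` variable pointing at the guest rootfs being assembled (the actual change is in the osbuilder PR linked above):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical post-install step in the guest image build.
# ROOTFS_DIR points at the rootfs tree that gets packed into the image.
ROOTFS_DIR=${ROOTFS_DIR:-rootfs}

# An existing-but-empty /etc/machine-id satisfies case 2) from the
# systemd error in the logs above: systemd generates a transient
# machine ID at boot instead of failing when /etc is read-only.
touch "${ROOTFS_DIR}/etc/machine-id"
```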

zhsj commented 5 years ago

Since kata-containers/osbuilder#295 and kata-containers/osbuilder#297 have been merged, this can be closed now.
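
For anyone wanting to verify the fix once a guest image containing it is installed, one way, assuming Docker is configured with `kata-runtime` as in the report and using a Debian image (whose util-linux `dmesg` supports `-l`), is to rerun the original reproducer and confirm the guest kernel log is clean:

```bash
# With a fixed guest image, the machine-id error lines from the
# original report should no longer appear in the guest dmesg.
docker run --rm --runtime kata-runtime debian dmesg -l err
```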