kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

Issues starting container in VM when using virtio-fs, virtiofsd segfaulting #1886

Closed: awprice closed this issue 4 years ago

awprice commented 5 years ago

Description of problem

We occasionally see containers fail to start in a VM when using virtio-fs. We believe the VM/sandbox is being stopped before all of the containers can be started inside it. From my limited knowledge, virtiofsd appears to be quitting for some reason; see Actual result.

We are using the following branch: https://github.com/kata-containers/runtime/tree/stable-1.8 with https://github.com/kata-containers/runtime/pull/1882 merged in manually.
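
(For reference, one way to apply that PR locally on top of the stable branch, assuming GitHub's `pull/<N>/head` refs; the branch name `pr-1882` is just a local label:)

```
git clone -b stable-1.8 https://github.com/kata-containers/runtime.git
cd runtime
# GitHub exposes every pull request as a fetchable ref; merge #1882 on top of stable-1.8
git fetch origin pull/1882/head:pr-1882
git merge pr-1882
```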

Expected result

For all of the containers in the pod to start inside the VM properly.

Actual result

Pods aren't started correctly due to the VM/sandbox stopping early before all containers can be started.

The following events are observed in the pod's event log:

  Type     Reason     Age                From                                     Message
  ----     ------     ----               ----                                     -------
  Normal   Scheduled  81s                default-scheduler                        Successfully assigned 04f89dff-3593-423c-8999-aa98791bdf6b/04f89dff-3593-423c-8999-aa98791bdf6b to ip-10-151-116-186.ec2.internal
  Normal   Pulling    79s                kubelet, ip-10-151-116-186.ec2.internal  pulling image "<image 1>"
  Normal   Pulled     79s                kubelet, ip-10-151-116-186.ec2.internal  Successfully pulled image "<image 1>"
  Normal   Started    79s                kubelet, ip-10-151-116-186.ec2.internal  Started container
  Normal   Pulling    79s                kubelet, ip-10-151-116-186.ec2.internal  pulling image "<image 2>"
  Normal   Created    79s                kubelet, ip-10-151-116-186.ec2.internal  Created container
  Normal   Created    78s                kubelet, ip-10-151-116-186.ec2.internal  Created container
  Normal   Pulled     78s                kubelet, ip-10-151-116-186.ec2.internal  Successfully pulled image "<image 2>"
  Normal   Created    78s                kubelet, ip-10-151-116-186.ec2.internal  Created container
  Normal   Started    78s                kubelet, ip-10-151-116-186.ec2.internal  Started container
  Normal   Pulled     78s                kubelet, ip-10-151-116-186.ec2.internal  Successfully pulled image "<image 3>"
  Normal   Pulling    78s                kubelet, ip-10-151-116-186.ec2.internal  pulling image "<image 3>"
  Warning  Failed     33s                kubelet, ip-10-151-116-186.ec2.internal  Error: failed to create containerd task: transport is closing: unavailable
  Normal   Pulled     32s (x2 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  Successfully pulled image "<image 4>"
  Warning  Failed     32s (x2 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  Error: failed to get sandbox container task: no running task found: task 186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 not found: not found
  Normal   Pulling    32s (x2 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  pulling image "<image 5>"
  Normal   Pulled     32s (x2 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  Successfully pulled image "<image 5>"
  Warning  Failed     32s (x2 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  Error: failed to get sandbox container task: no running task found: task 186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 not found: not found
  Normal   Pulling    31s (x3 over 33s)  kubelet, ip-10-151-116-186.ec2.internal  pulling image "<image 4>"

I also found the following containerd logs that may be relevant:

Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335059887Z" level=info msg="virtiofsd quits" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335210491Z" level=info msg="Stopping Sandbox" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335059887Z" level=info msg="virtiofsd quits" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335210491Z" level=info msg="Stopping Sandbox" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335308751Z" level=info msg="{\"execute\":\"quit\"}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335673853Z" level=info msg="{\"return\": {}}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335750905Z" level=info msg="{\"timestamp\": {\"seconds\": 1563414194, \"microseconds\": 335655}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false, \"reason\": \"host-qmp-quit\"}}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335308751Z" level=info msg="{\"execute\":\"quit\"}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.335932191Z" level=info msg="cleanup vm path" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 dir=/run/vc/vm/186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 link=/run/vc/vm/186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335673853Z" level=info msg="{\"return\": {}}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335750905Z" level=info msg="{\"timestamp\": {\"seconds\": 1563414194, \"microseconds\": 335655}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false, \"reason\": \"host-qmp-quit\"}}" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qmp
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.335932191Z" level=info msg="cleanup vm path" ID=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 dir=/run/vc/vm/186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 link=/run/vc/vm/186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 source=virtcontainers subsystem=qemu
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:14.360344842Z" level=error msg="Wait for process failed" container=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 error="rpc error: code = Unavailable desc = transport is closing" pid=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal containerd[25310]: time="2019-07-18T01:43:14.360344842Z" level=error msg="Wait for process failed" container=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 error="rpc error: code = Unavailable desc = transport is closing" pid=186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658
Jul 18 01:43:34 ip-10-151-116-186.ec2.internal kata[41442]: time="2019-07-18T01:43:34.360244149Z" level=error msg="Wait for process failed" container=f78e80b23b9c329c159011c1b0c4f1c706446f14d20849717ee15e9d75d3b4c5 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing rpc error: code = DeadlineExceeded desc = timed out connecting to unix socket ////run/vc/vm/186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658/kata.sock\"" pid=f78e80b23b9c329c159011c1b0c4f1c706446f14d20849717ee15e9d75d3b4c5

The log message `virtiofsd quits` is interesting. Looking at the source code, if virtiofsd quits, the sandbox is shut down: https://github.com/kata-containers/runtime/blob/e89195e70e5d72101f567c13144c199fb5a2d18d/virtcontainers/qemu.go#L683-L684

So maybe virtiofsd is quitting for some reason?
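
One way to check that ordering on a node is to correlate the runtime/containerd journal entries for the sandbox ID with the kernel log in the same window; a rough sketch (the sandbox ID below is the one from this report):

```
# Did virtiofsd exit *before* the runtime decided to stop the sandbox?
journalctl -o short-precise \
  | grep 186c22cf5c16b5f417f271156c4279d462f25c600b8cf1c95f37e84d05547658 \
  | grep -E 'virtiofsd quits|Stopping Sandbox'

# Any sign of virtiofsd crashing in the kernel log around the same time?
journalctl -k | grep -i 'virtiofsd'
```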

kata-collect-data.sh details:

# Meta details Running `kata-collect-data.sh` version `1.8.0-rc0 (commit 43f2680e4c45f673765958b6b0566e514f217f6e)` at `2019-07-19.06:20:44.741140811+0000`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] Semver = "1.8.0" Commit = "31b8cb3fbc7b1de9c94201aed7c54ddb73e97587" OCI = "1.0.1-dev" [Runtime.Config] Path = "/etc/kata-containers/configuration.toml" [Hypervisor] MachineType = "virt" Version = "NEMU (like QEMU) version 4.0.0 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/nemu-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = false UseVSock = false SharedFS = "virtio-fs" [Image] Path = "/opt/kata/share/kata-containers/kata-containers-ubuntu-console-1.7.2-913c8fd.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.52-44" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.8.0-rc0-ff08b9676ace92047d6ed6a5b96cde559c0963f4" Path = "/opt/kata/libexec/kata-containers/kata-proxy" Debug = false [Shim] Type = "kataShim" Version = "kata-shim version 1.8.0-rc0-9f25c0dde30937121783cc493e063808cc0cc0ad" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = false [Agent] Type = "kata" Debug = false Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.19.50-coreos-r1" Architecture = "amd64" VMContainerCapable = true SupportVSocks = true [Host.Distro] Name = "Container Linux by CoreOS" Version = "2135.4.0" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz" [Netmon] Version = "kata-netmon version 1.8.0-rc0" Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = false Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /usr/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-nemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata # nemu utilizes the 'qemu' hypervisor template type, since it closely matches qemu [hypervisor.qemu] path = "/opt/kata/bin/nemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "virt" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. 
kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "/opt/kata/share/kata-nemu/OVMF.fd" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-fs (default) # - virtio-9p shared_fs = "virtio-fs" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 2048 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. 
They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "auto" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation enable_hugepages = false # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent (no proxy is started). # Default true #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. 
It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. 
# # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. 
# XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. 
# If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable file based guest memory support. The default is an empty string which # will disable this feature. In the case of virtio-fs, this is enabled # automatically and '/dev/shm' is used as the backing folder. # This option will be ignored if VM templating is enabled. #file_mem_backend = "" # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. 
This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. 
It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). 
# (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Config file `/usr/share/defaults/kata-containers/configuration.toml` not found --- # KSM throttler ## version Output of "` --version`": ``` ./kata-collect-data.sh: line 178: --version: command not found ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-07-05T02:08:07.059974670+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "bionic" version: "18.04" packages: default: - "systemd,iptables,init,chrony,fuse,bash" extra: - "bash" - "fuse" agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.2-152276729a0b9027fbbdac34ccc27587bba77025" agent-is-init-daemon: "no" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs No recent runtime problems found in system journal. ## Proxy logs No recent proxy problems found in system journal. ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal. --- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 18.06.3-ce API version: 1.38 Go version: go1.10.8 Git commit: d7080c1 Built: Tue Feb 19 23:07:53 2019 OS/Arch: linux/amd64 Experimental: false Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Output of "`docker info`": ``` Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? 
``` Output of "`systemctl show docker`": ``` Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=73727 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1073741816 LimitNOFILESoft=1073741816 LimitAS=infinity LimitASSoft=infinity LimitNPROC=2063281 LimitNPROCSoft=2063281 LimitMEMLOCK=67108864 LimitMEMLOCKSoft=67108864 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=2063281 LimitSIGPENDINGSoft=2063281 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=inherit StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private 
KillMode=control-group KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=docker.service Names=docker.service WantedBy=kitt-init.service ConsistsOf=docker.socket Before=kitt-init.service After=docker.socket TriggeredBy=docker.socket Description=docker.service LoadState=masked ActiveState=inactive SubState=dead FragmentPath=/dev/null UnitFileState=masked StateChangeTimestampMonotonic=0 InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=no CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 AssertTimestampMonotonic=0 LoadError=org.freedesktop.systemd1.UnitMasked "Unit docker.service is masked." Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 CollectMode=inactive ``` No `kubectl` No `crio` Have `containerd` ## containerd Output of "`containerd --version`": ``` containerd github.com/containerd/containerd v1.2.0-575-gb99a66c2 b99a66c267d04740628634d7d038f9ce1753b339 ``` Output of "`systemctl show containerd`": ``` Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=31077 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Fri 2019-07-19 04:59:42 UTC ExecMainStartTimestampMonotonic=1810428852 ExecMainExitTimestampMonotonic=0 ExecMainPID=31077 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[Fri 2019-07-19 04:59:40 UTC] ; stop_time=[Fri 2019-07-19 04:59:40 UTC] ; pid=30996 ; code=exited ; status=0 } ExecStartPre={ path=/opt/bin/containerd-init.sh ; argv[]=/opt/bin/containerd-init.sh ; ignore_errors=no ; start_time=[Fri 2019-07-19 04:59:40 UTC] ; stop_time=[Fri 2019-07-19 04:59:42 UTC] ; pid=30999 ; code=exited ; status=0 } ExecStart={ path=/opt/containerd/bin/containerd ; argv[]=/opt/containerd/bin/containerd --log-level=info --config=/etc/containerd/config.toml ; ignore_errors=no ; start_time=[Fri 2019-07-19 04:59:42 UTC] ; stop_time=[n/a] ; pid=31077 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=10033016832 CPUUsageNSec=[not set] TasksCurrent=335 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes 
TasksMax=73727 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=2063281 LimitSIGPENDINGSoft=2063281 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=sysinit.target system.slice WantedBy=multi-user.target Conflicts=shutdown.target Before=multi-user.target shutdown.target After=system.slice containerd-devicemapper-init.service sysinit.target basic.target systemd-journald.socket kata-init.service Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/etc/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Fri 2019-07-19 04:59:42 UTC StateChangeTimestampMonotonic=1810428909 InactiveExitTimestamp=Fri 2019-07-19 04:59:40 UTC InactiveExitTimestampMonotonic=1808328354 ActiveEnterTimestamp=Fri 2019-07-19 04:59:42 UTC ActiveEnterTimestampMonotonic=1810428909 ActiveExitTimestamp=Fri 2019-07-19 04:59:40 UTC ActiveExitTimestampMonotonic=1808318004 InactiveEnterTimestamp=Fri 2019-07-19 04:59:40 UTC InactiveEnterTimestampMonotonic=1808325906 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no 
DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Fri 2019-07-19 04:59:40 UTC ConditionTimestampMonotonic=1808326684 AssertTimestamp=Fri 2019-07-19 04:59:40 UTC AssertTimestampMonotonic=1808326684 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=753ef34855474477a741c9fe1f3af593 CollectMode=inactive ``` Output of "`cat /etc/containerd/config.toml`": ``` [grpc] address = "/run/containerd/containerd.sock" uid = 0 gid = 0 [plugins] [plugins.devmapper] pool_name = "containerd-thinpool" base_image_size = "32GB" [plugins.cri.containerd] snapshotter = "overlayfs" [plugins.cri.containerd.default_runtime] runtime_type = "io.containerd.runtime.v1.linux" runtime_engine = "/usr/bin/runc" runtime_root = "" [plugins.cri.containerd.untrusted_workload_runtime] runtime_type = "io.containerd.kata.v2" [plugins.cri] max_container_log_line_size = 262144 [plugins.linux] shim = "/opt/containerd/bin/containerd-shim" runtime = "runc" [plugins.cri.registry] [plugins.cri.registry.mirrors] [plugins.cri.registry.mirrors."docker.io"] endpoint = [] ``` --- # Packages No `dpkg` No `rpm` ---

awprice commented 5 years ago

After some further digging, I found this in the kernel logs, right before the `virtiofsd quits` message:

Jul 18 01:43:09 ip-10-151-116-186.ec2.internal kernel: virtiofsd-x86_6[41629]: segfault at 7f931fc123c8 ip 00007f93b9a38948 sp 00007f9328e81988 error 6 in libc-2.27.so[7f93b98c9000+1c6000]
Jul 18 01:43:14 ip-10-151-116-186.ec2.internal systemd-coredump[41918]: Process 41462 (virtiofsd-x86_6) of user 0 dumped core.

We are using the virtiofsd from https://github.com/kata-containers/runtime/releases/tag/1.8.0-rc0
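
(As a sanity check on the kernel line itself: `ip` minus the libc mapping base gives the offset of the faulting instruction inside libc-2.27.so, which can be resolved against the same libc build. The libc path below is an assumption for a typical glibc install, not taken from this host:)

```
# 0x7f93b9a38948 (ip) - 0x7f93b98c9000 (libc base from the segfault line) = 0x16f948
printf 'offset into libc-2.27.so: 0x%x\n' $(( 0x7f93b9a38948 - 0x7f93b98c9000 ))

# Resolve the offset to a symbol using the matching libc binary (path is an assumption)
addr2line -f -e /lib/x86_64-linux-gnu/libc-2.27.so 0x16f948
```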

grahamwhaley commented 5 years ago

/cc @dagrh @stefanha

dagrh commented 5 years ago

Can you get a backtrace from that for us? It looks like systemd-coredump has squirreled away a core for you.
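
Something along these lines should pull it out, assuming `coredumpctl` is present on the host (PID 41462 is from the systemd-coredump line above):

```
# List stored cores for virtiofsd (the kernel truncates comm to 15 characters)
coredumpctl list virtiofsd-x86_6

# Extract the core and open it against the matching binary to get a backtrace
coredumpctl dump 41462 --output=virtiofsd.core
gdb /opt/kata/bin/virtiofsd-x86_64 virtiofsd.core -ex bt
```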

awprice commented 5 years ago

@dagrh This isn't the exact backtrace for the process above as I don't have the core anymore, but I was able to replicate and get the backtrace for you:

GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ../opt/kata/bin/virtiofsd-x86_64...done.

warning: core file may not match specified executable file.
[New LWP 11324]
[New LWP 11284]
[New LWP 11323]

warning: .dynamic section for "/lib64/ld-linux-x86-64.so.2" is not at the expected address (wrong library or version mismatch?)

warning: Could not load shared library symbols for 5 libraries, e.g. /lib64/libseccomp.so.2.
Use the "info sharedlibrary" command to see the complete listing.
Do you need "set solib-search-path" or "set sysroot"?
Core was generated by `/opt/kata/bin/virtiofsd-x86_64 -o vhost_user_socket=/run/vc/vm/b1773b5a83b1d46e'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055c1756e255a in vu_log_queue_fill (dev=dev@entry=0x55c1769761b0, len=len@entry=16, elem=0x7ff79c022070, vq=<optimized out>, vq=<optimized out>)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/libvhost-user/libvhost-user.c:2387
2387    /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/libvhost-user/libvhost-user.c: No such file or directory.
[Current thread is 1 (LWP 11324)]
(gdb) bt
#0  0x000055c1756e255a in vu_log_queue_fill (dev=dev@entry=0x55c1769761b0, len=len@entry=16, elem=0x7ff79c022070, vq=<optimized out>, vq=<optimized out>)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/libvhost-user/libvhost-user.c:2387
#1  0x000055c1756e46b8 in vu_queue_fill (dev=0x55c1769761b0, vq=0x55c176976378, elem=0x7ff79c022070, len=16, idx=0)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/libvhost-user/libvhost-user.c:2410
#2  0x000055c1756e47b5 in vu_queue_push (dev=0x55c1769761b0, vq=vq@entry=0x55c176976378, elem=elem@entry=0x7ff79c022070, len=len@entry=16)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/libvhost-user/libvhost-user.c:2456
#3  0x000055c1756dc908 in virtio_send_msg (se=0x55c176975d20, ch=0x7ff7a2bb1e30, iov=iov@entry=0x7ff7a2bb1a90, count=1)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_virtio.c:224
#4  0x000055c1756d6491 in fuse_send_msg (se=<optimized out>, ch=<optimized out>, iov=iov@entry=0x7ff7a2bb1a90, count=count@entry=1)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:178
#5  0x000055c1756d68fa in fuse_send_reply_iov_nofree (req=req@entry=0x7ff79c0234d0, error=<optimized out>, iov=iov@entry=0x7ff7a2bb1a90, count=count@entry=1)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:202
#6  0x000055c1756d6b68 in send_reply_iov (count=1, iov=0x7ff7a2bb1a90, error=<optimized out>, req=0x7ff79c0234d0)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:210
#7  send_reply (argsize=0, arg=0x0, error=<optimized out>, req=0x7ff79c0234d0) at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:225
#8  fuse_reply_err (req=0x7ff79c0234d0, err=<optimized out>) at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:296
#9  0x000055c1756de7b6 in lo_setupmapping (req=0x7ff79c0234d0, ino=222, foffset=<optimized out>, len=2097152, moffset=186646528, flags=<optimized out>, fi=0x0)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/passthrough_ll.c:1915
#10 0x000055c1756d6d20 in do_setupmapping (req=0x7ff79c0234d0, nodeid=222, iter=<optimized out>)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:1900
#11 0x000055c1756dae53 in fuse_session_process_buf_int (se=se@entry=0x55c176975d20, bufv=bufv@entry=0x7ff7a2bb1e70, ch=ch@entry=0x7ff7a2bb1e30)
    at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_lowlevel.c:2514
#12 0x000055c1756dc30a in fv_queue_thread (opaque=0x55c1769783e0) at /home/agentadmin/workspace/nemu_release-2019-05-21/contrib/virtiofsd/fuse_virtio.c:565
#13 0x00007ff8339d09b3 in ?? ()
#14 0x0000000000000000 in ?? ()
dagrh commented 5 years ago

That shouldn't happen. We've apparently happily done a DAX mapping and are trying to send the reply back and something got upset. My guess is that we've got another message on the virtqueue for some reason - maybe a hotplug?

Hopefully https://gitlab.com/virtio-fs/qemu/commit/b889837fc0ed3a42f05e07170afd7f82aa648e76 would protect against that.

ganeshmaharaj commented 5 years ago

@dagrh Thanks for the info. As soon as the next virtio-fs release happens, we can do the follow-on work to pull it into NEMU and Kata.

awprice commented 5 years ago

Thanks @dagrh + @ganeshmaharaj.

@dagrh I noticed the commit is on the virtio-fs-dev branch (https://gitlab.com/virtio-fs/qemu/commits/virtio-fs-dev). Is that branch safe to test in its current state? I would like to confirm that the commit you linked fixes our issue.
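
(If it is safe, roughly the build we'd try, on the assumption that the tree builds like stock QEMU with virtiofsd under contrib/ as in the backtrace paths above; these steps are an assumption, not documented instructions from the virtio-fs project:)

```
# Build the virtio-fs development tree, then swap the resulting virtiofsd
# binary in for /opt/kata/bin/virtiofsd-x86_64 on a test node
git clone -b virtio-fs-dev https://gitlab.com/virtio-fs/qemu.git
cd qemu
./configure --target-list=x86_64-softmmu
make -j"$(nproc)"
```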

@ganeshmaharaj What's the release cadence like for getting new versions of virtiofsd/nemu into Kata?

ganeshmaharaj commented 5 years ago

> @ganeshmaharaj What's the release cadence like for getting new versions of virtiofsd/nemu into Kata?

@awprice We're working that out internally now. Will try to get some info out ASAP.

ganeshmaharaj commented 5 years ago

@awprice While we still need to update the virtio-fs components, can you share the steps to reproduce this issue? I just want to make sure there isn't some other weird corner case we are missing that needs fixing on the Kata side.

awprice commented 5 years ago

@ganeshmaharaj Unfortunately I've had a hard time reproducing this one reliably; it seems to happen semi-randomly. I can confirm whether it still occurs once a new version of virtiofsd with the fixes is shipped.
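
The closest thing I have to a reproducer is just churning multi-container pods until the race hits; roughly the following, where `pod.yaml` and the pod name are placeholders for one of our multi-container Kata pod specs:

```
# Hypothetical stress loop: create/delete a multi-container Kata pod repeatedly
# and inspect the events whenever a start fails
for i in $(seq 1 200); do
  kubectl apply -f pod.yaml
  kubectl wait --for=condition=Ready pod/kata-virtio-fs-test --timeout=120s \
    || kubectl describe pod/kata-virtio-fs-test
  kubectl delete -f pod.yaml --wait
done
```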

awprice commented 5 years ago

@ganeshmaharaj How's the progress on getting a new version of virtiofsd into the Kata releases? We'd like to pick up the above fix if possible.

devimc commented 5 years ago

@awprice WIP https://github.com/kata-containers/runtime/pull/1994

awprice commented 4 years ago

I'm going to close this out, as Kata 1.9 ships a version of virtiofsd that includes the commit fixing this for us.