Closed: @Dodan closed this issue 5 years ago.
Thanks for opening, @Dodan
/cc @ganeshmaharaj @mcastelino - any thoughts, or more info we might need?
@egernst @grahamwhaley hi guys, do you think you need more information regarding this?
Hi @Dodan - not sure - @mcastelino is a bit busy right now, so might not get to look. @ganeshmaharaj, do you have some fc insights maybe?
Thanks for that 'error reserving pod name', that feels like a big clue.
And, yeah, it really should not hang forever on error either....
@ganeshmaharaj will you be able to reproduce this?
@Dodan I was trying to reproduce the issue and I seem to be hitting a problem where running Firecracker with CRI-O and Kata crashes. This might just be my setup and I am chasing it down. Will keep this thread updated on that and on this issue ASAP.
@Dodan - FYI, still working through this. We had an issue with the latest release of Firecracker (see https://github.com/kata-containers/runtime/issues/2027), which @devimc is fixing. Once this is in place, we'll take a look! Sorry for the delay.
@Dodan kata now supports FC 0.18; can you try again?
@devimc great! thanks for letting me know!
I have a couple of questions, if you don't mind :)
@Dodan
What Kata version should we use? The master branch, or is a stable 1.9.0 release coming soon?
please use master
Does the Kata runtime come with FC 0.18, or do we need to get it from their repo?
no, you have to install it yourself: https://github.com/firecracker-microvm/firecracker/releases
Is this fix for CRI-O or containerd? From what we can tell, they work a bit differently.
it's for both, I think
Should we run FC 0.18 with the jailer or without?
disable it
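For anyone reproducing: a minimal sketch of installing the standalone FC 0.18 binary and pointing Kata at it without the jailer. The release asset name and the config excerpt are assumptions (from memory of the 1.x `configuration-fc.toml` layout), so verify against the releases page and your own `configuration.toml`:

```sh
# Fetch the Firecracker v0.18.0 binary (asset name is an assumption; check the releases page).
curl -LO https://github.com/firecracker-microvm/firecracker/releases/download/v0.18.0/firecracker-v0.18.0
sudo install -m 0755 firecracker-v0.18.0 /usr/bin/firecracker
```

To run without the jailer, leave `jailer_path` unset in the Firecracker hypervisor section so the runtime launches the binary directly:

```toml
[hypervisor.firecracker]
path = "/usr/bin/firecracker"
# Leaving jailer_path unset/commented out disables the jailer.
#jailer_path = "/usr/bin/jailer"
```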
@Dodan I can reproduce this issue with FC 0.18, I'll be working on a patch to fix it
@Dodan please take a look at https://github.com/kata-containers/runtime/pull/2095
@devimc thanks for giving me a heads up regarding the status! I subscribed to #2095. It's still in review, right?
@Dodan
It's still in review, right?
yes
Description of problem
Hello! My team and I have been trying Kata with Firecracker, and we noticed that Firecracker has issues when containers are closed and opened in rapid succession / aggressively. We've put together a simple scenario that reproduces the problem using two successive create-call-delete cycles of Kata containers, with both QEMU and Firecracker.
The Firecracker scenario always fails / hangs indefinitely.
The setup was run with CRI-O, Kata and Firecracker / QEMU.
Do you have any ideas as to why this might be happening? We would really appreciate your input on this! :)
Here is the script we used to recreate the bug:
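(An illustrative sketch of such a create-call-delete loop using crictl; this is not the verbatim script, and it assumes the pod.yaml and container.yaml shown below, with CRI-O's default runtime set to Kata:)

```sh
#!/bin/bash
# Illustrative create-call-delete loop, two successive cycles (not the verbatim script).
set -ex
for i in 1 2; do
    pod_id=$(crictl runp pod.yaml)                              # create the pod sandbox
    ctr_id=$(crictl create "$pod_id" container.yaml pod.yaml)   # create the container in it
    crictl start "$ctr_id"                                      # "call": run the workload
    crictl stop "$ctr_id" && crictl rm "$ctr_id"                # delete the container
    crictl stopp "$pod_id" && crictl rmp "$pod_id"              # delete the sandbox
done
```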
This is the pod.yaml we used:
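(Illustrative rather than verbatim: a minimal crictl pod sandbox config of this shape, with placeholder names:)

```yaml
# Placeholder pod sandbox config for `crictl runp` (not the verbatim file).
metadata:
  name: fc-test-pod
  namespace: default
  uid: fc-test-pod-uid
  attempt: 1
log_directory: /tmp
linux: {}
```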
This is the container.yaml we used:
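(Likewise a placeholder: a minimal crictl container config; the image and command are illustrative:)

```yaml
# Placeholder container config for `crictl create` (not the verbatim file).
metadata:
  name: fc-test-container
image:
  image: docker.io/library/busybox:latest
command:
- sleep
- "30"
log_path: fc-test-container.log
linux: {}
```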
This is the CRI-O network configuration:
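(Illustrative: a typical CNI bridge config of the kind CRI-O installs by default under /etc/cni/net.d/; the subnet and names are placeholders, not our exact file:)

```json
{
    "cniVersion": "0.3.1",
    "name": "crio-bridge",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
```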
This is the crio.conf we used:
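(Illustrative excerpt rather than the full file: the relevant part is registering Kata as a CRI-O runtime; paths and the runtime name are assumptions:)

```toml
# Excerpt only; the runtime registration that routes pods to Kata (paths are assumptions).
[crio.runtime]
default_runtime = "kata"

[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/kata-runtime"
```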
Expected result
This is what we observed with Kata and QEMU when the script was run:
Actual result
This is what we observed with Kata and Firecracker when the script was run:
Environment
This is the output of the kata-collect-data.sh script:
# Meta details Running `kata-collect-data.sh` version `1.7.0 (commit d4f4644312d2acbfed8a150e49831787f8ebdd90)` at `2019-07-16.16:14:37.995474297+0300`. --- Runtime is `/usr/bin/kata-runtime`. # `kata-env` Output of "`/usr/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = false Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/usr/bin/kata-runtime" [Runtime.Version] Semver = "1.7.0" Commit = "" OCI = "1.0.1-dev" [Runtime.Config] Path = "/usr/share/defaults/kata-containers/configuration.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers" Path = "/usr/bin/qemu-lite-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = false UseVSock = false SharedFS = "virtio-9p" [Image] Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.7.0_agent_43bd707543.img" [Kernel] Path = "/usr/share/kata-containers/vmlinuz-4.19.28.40-28.container" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-journald-dev-log.socket systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-udevd-kernel.socket systemd.mask=systemd-udevd-control.socket systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service systemd.mask=systemd-coredump@.service" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.7.0-ea2b0bb" Path = "/usr/libexec/kata-containers/kata-proxy" Debug = false [Shim] Type = "kataShim" Version = "kata-shim version 1.7.0-7f2ab77" Path = "/usr/libexec/kata-containers/kata-shim" Debug = false [Agent] Type = "kata" Debug = false Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.18.0-25-generic" Architecture = "amd64" VMContainerCapable = true SupportVSocks = true [Host.Distro] Name = "Ubuntu" Version = "18.04" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz" [Netmon] Version = "kata-netmon version 1.7.0" Path = "/usr/libexec/kata-containers/kata-netmon" Debug = false Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /usr/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Config file `/etc/kata-containers/configuration.toml` not found Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. 
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. 
# Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. 
# # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. 
# `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Output of "`cat "/usr/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/usr/bin/qemu-lite-system-x86_64" kernel = "/usr/share/kata-containers/vmlinuz.container" image = "/usr/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. 
Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/usr/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. 
# enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. 
Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/usr/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/usr/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. 
path = "/usr/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. 
# (default: []) experimental=[] ``` --- # KSM throttler ## version Output of "`/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`": ``` kata-ksm-throttler version 1.7.0-ce041ba ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-05-27T04:15:58.171033644+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "29620" packages: default: - "chrony" - "iptables-bin" - "libudev0-shim" - "systemd" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.0-43bd7075430fd62ff713daa2708489005cd20042" agent-is-init-daemon: "no" dax-nvdimm-header: "true" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs Recent runtime problems found in system journal: ``` time="2019-07-16T14:13:44.492249619+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 error="open /run/vc/sbs/cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874/devices.json: no such file or directory" name=kata-runtime pid=26135 sandbox=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 sandboxid=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 source=virtcontainers subsystem=sandbox time="2019-07-16T14:13:45.072251946+03:00" level=warning msg="unsupported address" address="fe80::c0be:19ff:fe91:44b0/64" arch=amd64 command=create container=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 name=kata-runtime pid=26135 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:13:45.072384269+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 destination="fe80::/64" name=kata-runtime pid=26135 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:15:05.778670928+03:00" level=info msg="No info could be fetched" arch=amd64 command=create container=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 error="open /run/vc/sbs/9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82/hypervisor.json: no such file or directory" function=init name=kata-runtime pid=27373 source=virtcontainers subsystem=firecracker time="2019-07-16T14:15:05.778776541+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 error="open /run/vc/sbs/9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82/devices.json: no such file or directory" name=kata-runtime pid=27373 sandbox=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 sandboxid=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 source=virtcontainers subsystem=sandbox time="2019-07-16T14:15:06.373228932+03:00" level=warning msg="unsupported address" address="fe80::ec0c:49ff:fe39:7229/64" arch=amd64 command=create container=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 name=kata-runtime pid=27373 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:15:06.373396292+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=9b75d1d99c604d71811caf4bc04776fe5eb7b557a2127143c7ed8dff7630fe82 destination="fe80::/64" name=kata-runtime 
pid=27373 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:09.508243685+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 error="open /run/vc/sbs/c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928/devices.json: no such file or directory" name=kata-runtime pid=34039 sandbox=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 sandboxid=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:10.101901673+03:00" level=warning msg="unsupported address" address="fe80::ec3b:59ff:fe73:a1d9/64" arch=amd64 command=create container=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 name=kata-runtime pid=34039 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:10.102039598+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 destination="fe80::/64" name=kata-runtime pid=34039 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:16.664523086+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b error="open /run/vc/sbs/5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b/devices.json: no such file or directory" name=kata-runtime pid=34847 sandbox=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b sandboxid=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:17.235178669+03:00" level=warning msg="unsupported address" address="fe80::50a4:f6ff:fede:a30f/64" arch=amd64 command=create container=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b name=kata-runtime pid=34847 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:17.235309351+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b destination="fe80::/64" name=kata-runtime pid=34847 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:20.948304924+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 error="open /run/vc/sbs/ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8/devices.json: no such file or directory" name=kata-runtime pid=35643 sandbox=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 sandboxid=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:21.523457071+03:00" level=warning msg="unsupported address" address="fe80::380b:fbff:fee9:2bd3/64" arch=amd64 command=create container=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 name=kata-runtime pid=35643 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:21.52360719+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 destination="fe80::/64" name=kata-runtime pid=35643 source=virtcontainers subsystem=network 
unsupported-route-type=ipv6 time="2019-07-16T14:19:24.868344853+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 error="open /run/vc/sbs/bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186/devices.json: no such file or directory" name=kata-runtime pid=36439 sandbox=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 sandboxid=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:25.443343458+03:00" level=warning msg="unsupported address" address="fe80::5ce1:b7ff:fe85:cc4d/64" arch=amd64 command=create container=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 name=kata-runtime pid=36439 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:25.443490098+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 destination="fe80::/64" name=kata-runtime pid=36439 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:29.736130136+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 error="open /run/vc/sbs/b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5/devices.json: no such file or directory" name=kata-runtime pid=718 sandbox=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 sandboxid=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:30.31406655+03:00" level=warning msg="unsupported address" address="fe80::6496:b7ff:fec2:82de/64" arch=amd64 command=create container=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 name=kata-runtime pid=718 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:30.314217458+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 destination="fe80::/64" name=kata-runtime pid=718 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:33.908788997+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 error="open /run/vc/sbs/a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949/devices.json: no such file or directory" name=kata-runtime pid=1692 sandbox=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 sandboxid=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:34.485392671+03:00" level=warning msg="unsupported address" address="fe80::1c81:2ff:fe96:35e1/64" arch=amd64 command=create container=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 name=kata-runtime pid=1692 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:34.485546987+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 destination="fe80::/64" name=kata-runtime pid=1692 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T14:19:50.829548294+03:00" 
level=warning msg="load sandbox devices failed" arch=amd64 command=create container=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 error="open /run/vc/sbs/6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727/devices.json: no such file or directory" name=kata-runtime pid=2640 sandbox=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 sandboxid=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 source=virtcontainers subsystem=sandbox time="2019-07-16T14:19:51.394358202+03:00" level=warning msg="unsupported address" address="fe80::3836:f4ff:fe88:ad81/64" arch=amd64 command=create container=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 name=kata-runtime pid=2640 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T14:19:51.394543318+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 destination="fe80::/64" name=kata-runtime pid=2640 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T15:28:45.384312291+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 error="open /run/vc/sbs/74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4/devices.json: no such file or directory" name=kata-runtime pid=19701 sandbox=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 sandboxid=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 source=virtcontainers subsystem=sandbox time="2019-07-16T15:28:45.965328147+03:00" level=warning msg="unsupported address" address="fe80::9ca2:28ff:fe20:5e18/64" arch=amd64 command=create container=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 name=kata-runtime pid=19701 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T15:28:45.96552599+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 destination="fe80::/64" name=kata-runtime pid=19701 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T15:28:59.244245218+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 error="open /run/vc/sbs/f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399/devices.json: no such file or directory" name=kata-runtime pid=20547 sandbox=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 sandboxid=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 source=virtcontainers subsystem=sandbox time="2019-07-16T15:28:59.815856132+03:00" level=warning msg="unsupported address" address="fe80::5078:35ff:fe68:8d64/64" arch=amd64 command=create container=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 name=kata-runtime pid=20547 source=virtcontainers subsystem=network unsupported-address-type=ipv6 time="2019-07-16T15:28:59.815971156+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 destination="fe80::/64" name=kata-runtime pid=20547 source=virtcontainers subsystem=network unsupported-route-type=ipv6 time="2019-07-16T16:13:10.14425964+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create 
container=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 error="open /run/vc/sbs/4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0/devices.json: no such file or directory" name=kata-runtime pid=32431 sandbox=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 sandboxid=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 source=virtcontainers subsystem=sandbox
time="2019-07-16T16:13:10.716717754+03:00" level=warning msg="unsupported address" address="fe80::8c45:48ff:fe8a:2e45/64" arch=amd64 command=create container=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 name=kata-runtime pid=32431 source=virtcontainers subsystem=network unsupported-address-type=ipv6
time="2019-07-16T16:13:10.716829673+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 destination="fe80::/64" name=kata-runtime pid=32431 source=virtcontainers subsystem=network unsupported-route-type=ipv6
time="2019-07-16T16:13:18.368334537+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 error="open /run/vc/sbs/5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8/devices.json: no such file or directory" name=kata-runtime pid=33250 sandbox=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 sandboxid=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 source=virtcontainers subsystem=sandbox
time="2019-07-16T16:13:18.959305089+03:00" level=warning msg="unsupported address" address="fe80::7a:82ff:fef8:b5a/64" arch=amd64 command=create container=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 name=kata-runtime pid=33250 source=virtcontainers subsystem=network unsupported-address-type=ipv6
time="2019-07-16T16:13:18.959456082+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 destination="fe80::/64" name=kata-runtime pid=33250 source=virtcontainers subsystem=network unsupported-route-type=ipv6
time="2019-07-16T16:13:22.900313778+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 error="open /run/vc/sbs/a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2/devices.json: no such file or directory" name=kata-runtime pid=34079 sandbox=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 sandboxid=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 source=virtcontainers subsystem=sandbox
time="2019-07-16T16:13:23.493834714+03:00" level=warning msg="unsupported address" address="fe80::8ce5:e2ff:fe6f:47a8/64" arch=amd64 command=create container=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 name=kata-runtime pid=34079 source=virtcontainers subsystem=network unsupported-address-type=ipv6
time="2019-07-16T16:13:23.493980906+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 destination="fe80::/64" name=kata-runtime pid=34079 source=virtcontainers subsystem=network unsupported-route-type=ipv6
time="2019-07-16T16:13:27.440380252+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 error="open /run/vc/sbs/9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198/devices.json: no such file or directory" name=kata-runtime pid=34877 sandbox=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 sandboxid=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 source=virtcontainers subsystem=sandbox
time="2019-07-16T16:13:28.03046362+03:00" level=warning msg="unsupported address" address="fe80::5c5d:d6ff:fe0d:6a2d/64" arch=amd64 command=create container=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 name=kata-runtime pid=34877 source=virtcontainers subsystem=network unsupported-address-type=ipv6
time="2019-07-16T16:13:28.030606312+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 destination="fe80::/64" name=kata-runtime pid=34877 source=virtcontainers subsystem=network unsupported-route-type=ipv6
time="2019-07-16T16:13:38.013183423+03:00" level=info msg="No info could be fetched" arch=amd64 command=create container=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda error="open /run/vc/sbs/8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda/hypervisor.json: no such file or directory" function=init name=kata-runtime pid=35718 source=virtcontainers subsystem=firecracker
time="2019-07-16T16:13:38.013320368+03:00" level=warning msg="load sandbox devices failed" arch=amd64 command=create container=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda error="open /run/vc/sbs/8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda/devices.json: no such file or directory" name=kata-runtime pid=35718 sandbox=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda sandboxid=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda source=virtcontainers subsystem=sandbox
time="2019-07-16T16:13:38.69365313+03:00" level=warning msg="unsupported address" address="fe80::c44e:acff:fe08:b470/64" arch=amd64 command=create container=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda name=kata-runtime pid=35718 source=virtcontainers subsystem=network unsupported-address-type=ipv6
time="2019-07-16T16:13:38.693835523+03:00" level=warning msg="unsupported route" arch=amd64 command=create container=8d6201882f482470d3ee67814631b631419fd4cae5ee48b99178fcda2d3affda destination="fe80::/64" name=kata-runtime pid=35718 source=virtcontainers subsystem=network unsupported-route-type=ipv6
```

## Proxy logs

Recent proxy problems found in system journal:

```
time="2019-07-11T16:02:49.06512562+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/64d55b28fbbf0d09ed3495c44baa9a47339453b5434bf270921460c6aca7b8b6/proxy.sock: use of closed network connection" name=kata-proxy pid=30848 sandbox=64d55b28fbbf0d09ed3495c44baa9a47339453b5434bf270921460c6aca7b8b6 source=proxy
time="2019-07-11T16:02:50.017687393+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/fe0ae934c40b04733b9969c568267dfb2f583934dfb2c16fbbfaefeb0fd3cda5/kata.sock: use of closed network connection" name=kata-proxy pid=28163 sandbox=fe0ae934c40b04733b9969c568267dfb2f583934dfb2c16fbbfaefeb0fd3cda5 source=proxy
time="2019-07-11T16:02:50.045150906+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/c02ceb619246fb312d9f7070c2660dbd3c517313c027294c4dd4b730463211e2/proxy.sock: use of closed network connection" name=kata-proxy pid=21905 sandbox=c02ceb619246fb312d9f7070c2660dbd3c517313c027294c4dd4b730463211e2 source=proxy
time="2019-07-11T16:02:50.397524638+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a2cf4fbdfed70445d46c10b7d08d2725364f523fce8ea40a3294a20094a97c93/kata.sock: use of closed network connection" name=kata-proxy pid=20552 sandbox=a2cf4fbdfed70445d46c10b7d08d2725364f523fce8ea40a3294a20094a97c93 source=proxy
time="2019-07-11T16:02:50.564709427+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/98c9ea43c61c598e121bc178cdc046f5aea94ebbbcc22b583074524a78a1af2c/proxy.sock: use of closed network connection" name=kata-proxy pid=19520 sandbox=98c9ea43c61c598e121bc178cdc046f5aea94ebbbcc22b583074524a78a1af2c source=proxy
time="2019-07-11T16:02:50.980946009+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/0e067a0d6f0482333fb1c8f285d6615d4d4aa280bcdaed462823181bebe3c8e3/kata.sock: use of closed network connection" name=kata-proxy pid=19152 sandbox=0e067a0d6f0482333fb1c8f285d6615d4d4aa280bcdaed462823181bebe3c8e3 source=proxy
time="2019-07-11T16:02:51.306165201+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/7070b6603e22caecfa3d96ed4689012f9e3ad3d4fd41ad4130894c443b7c4d63/kata.sock: use of closed network connection" name=kata-proxy pid=19340 sandbox=7070b6603e22caecfa3d96ed4689012f9e3ad3d4fd41ad4130894c443b7c4d63 source=proxy
time="2019-07-11T16:02:51.729882682+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/c27f1ae9be86509d07725849889143dc3e8ad27b19225622b1f5662b1efce24c/kata.sock: use of closed network connection" name=kata-proxy pid=16009 sandbox=c27f1ae9be86509d07725849889143dc3e8ad27b19225622b1f5662b1efce24c source=proxy
time="2019-07-11T16:02:51.987145129+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/f9ed42f5343557cf4dd8540cabe45fcfec9be2240136e4d5ade94b94a7b1b42f/proxy.sock: use of closed network connection" name=kata-proxy pid=12347 sandbox=f9ed42f5343557cf4dd8540cabe45fcfec9be2240136e4d5ade94b94a7b1b42f source=proxy
time="2019-07-11T16:02:52.481680199+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/500cb74c3989c946de7c6cc20de03ec34d061623f53cb2e567421d3f6b03b31d/kata.sock: use of closed network connection" name=kata-proxy pid=16544 sandbox=500cb74c3989c946de7c6cc20de03ec34d061623f53cb2e567421d3f6b03b31d source=proxy
time="2019-07-11T16:02:52.549943285+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/ce98f8bf601c72c46fabf645e685156a99024227c8129180369f7ece385edf4e/kata.sock: use of closed network connection" name=kata-proxy pid=9663 sandbox=ce98f8bf601c72c46fabf645e685156a99024227c8129180369f7ece385edf4e source=proxy
time="2019-07-11T16:02:53.066238771+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/70e3d60243736931d7cbea8dcccc38710e615f987cc9ce94dfdd4078432e9fab/kata.sock: use of closed network connection" name=kata-proxy pid=19998 sandbox=70e3d60243736931d7cbea8dcccc38710e615f987cc9ce94dfdd4078432e9fab source=proxy
time="2019-07-15T16:57:28.735241788+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/6c97c34da7a2f6b54921643aae88566e041fe251ba39442a466b56b2cb0427de/kata.sock: use of closed network connection" name=kata-proxy pid=30294 sandbox=6c97c34da7a2f6b54921643aae88566e041fe251ba39442a466b56b2cb0427de source=proxy
time="2019-07-15T17:00:14.182418743+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/1d81b542b4497bafc085f250b02b1013dd2f7a787dd055e8d9de48bcb472ba6b/proxy.sock: use of closed network connection" name=kata-proxy pid=32265 sandbox=1d81b542b4497bafc085f250b02b1013dd2f7a787dd055e8d9de48bcb472ba6b source=proxy
time="2019-07-15T17:00:57.89505483+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/93c05139256ae59e2a6ffd9013d6bd7051bc849a9169f58830ee0665d75a813a/kata.sock: use of closed network connection" name=kata-proxy pid=33402 sandbox=93c05139256ae59e2a6ffd9013d6bd7051bc849a9169f58830ee0665d75a813a source=proxy
time="2019-07-15T17:08:58.022483804+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/6651aa6e1048e91f4ab118383fd3e1ad3273b840af99286ce8dae65ae3c15925/kata.sock: use of closed network connection" name=kata-proxy pid=783 sandbox=6651aa6e1048e91f4ab118383fd3e1ad3273b840af99286ce8dae65ae3c15925 source=proxy
time="2019-07-15T17:09:30.883703372+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b50f811bd947bc1181c9c4ad6edeb3ba3b910f63bb198b2f1eebdf12aeda3eea/kata.sock: use of closed network connection" name=kata-proxy pid=2137 sandbox=b50f811bd947bc1181c9c4ad6edeb3ba3b910f63bb198b2f1eebdf12aeda3eea source=proxy
time="2019-07-15T17:10:26.075608843+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/d98d178ff96fe249b2002b1fbed4a837bb45403963d4fbe755a99cd723b75edc/kata.sock: use of closed network connection" name=kata-proxy pid=3144 sandbox=d98d178ff96fe249b2002b1fbed4a837bb45403963d4fbe755a99cd723b75edc source=proxy
time="2019-07-16T13:03:28.205177361+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/c9a5d1f408544c281c650f3d0e1941f78ced072ccb395ad9682b929c3b205d52/kata.sock: use of closed network connection" name=kata-proxy pid=3222 sandbox=c9a5d1f408544c281c650f3d0e1941f78ced072ccb395ad9682b929c3b205d52 source=proxy
time="2019-07-16T13:03:34.556530351+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/998b03d2adc57dddac9b157645e17ccc3f07852125d25d25296864734ca3e332/kata.sock: use of closed network connection" name=kata-proxy pid=4066 sandbox=998b03d2adc57dddac9b157645e17ccc3f07852125d25d25296864734ca3e332 source=proxy
time="2019-07-16T13:17:00.540692816+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/e17f56c599abdfdbbc302fb33cf489058560f2f9ddbcca390571eb15c7f6eb78/kata.sock: use of closed network connection" name=kata-proxy pid=15157 sandbox=e17f56c599abdfdbbc302fb33cf489058560f2f9ddbcca390571eb15c7f6eb78 source=proxy
time="2019-07-16T13:17:05.711451788+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/3e8db3ddf0f989a93334c979cb2a8cad14b3af3a5fac11f809c86c47e4c4b796/proxy.sock: use of closed network connection" name=kata-proxy pid=15968 sandbox=3e8db3ddf0f989a93334c979cb2a8cad14b3af3a5fac11f809c86c47e4c4b796 source=proxy
time="2019-07-16T13:17:18.051153416+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b348b468c59655dd2eb41fd274ba649162dc752d80ede9e6ba98ee8d2e8c1219/kata.sock: use of closed network connection" name=kata-proxy pid=16784 sandbox=b348b468c59655dd2eb41fd274ba649162dc752d80ede9e6ba98ee8d2e8c1219 source=proxy
time="2019-07-16T13:39:47.676154275+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/e9470954a5a82a86ff77b3e0a329911dc61fc3586a65f6615abf0dd5aa94ba5c/kata.sock: use of closed network connection" name=kata-proxy pid=5472 sandbox=e9470954a5a82a86ff77b3e0a329911dc61fc3586a65f6615abf0dd5aa94ba5c source=proxy
time="2019-07-16T13:39:52.163062012+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/22e89c919148383ba93c4b5efd910be54b4551a4acf091254e819e9cefbc1822/kata.sock: use of closed network connection" name=kata-proxy pid=6270 sandbox=22e89c919148383ba93c4b5efd910be54b4551a4acf091254e819e9cefbc1822 source=proxy
time="2019-07-16T13:39:57.439664928+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/62d834b642ee30b3476de0d00dde889b3d0241a595f1faac8130b0d7a8e57ef0/kata.sock: use of closed network connection" name=kata-proxy pid=7062 sandbox=62d834b642ee30b3476de0d00dde889b3d0241a595f1faac8130b0d7a8e57ef0 source=proxy
time="2019-07-16T13:40:02.24761258+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/eb6f865aecb18c99ef892896cad30756b3fd4ce2e03fa0bb647550176d077883/kata.sock: use of closed network connection" name=kata-proxy pid=7860 sandbox=eb6f865aecb18c99ef892896cad30756b3fd4ce2e03fa0bb647550176d077883 source=proxy
time="2019-07-16T13:44:34.739614495+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/db7022ffb21aa2d1d23fc0815e99643a4f13c49d3c50ad7f65a2da74baf472fd/proxy.sock: use of closed network connection" name=kata-proxy pid=10488 sandbox=db7022ffb21aa2d1d23fc0815e99643a4f13c49d3c50ad7f65a2da74baf472fd source=proxy
time="2019-07-16T13:50:59.350671203+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/5d5986314c7b48488d123f0efde9ea69856aaa658ad31cc44eba13249c802c16/kata.sock: use of closed network connection" name=kata-proxy pid=26647 sandbox=5d5986314c7b48488d123f0efde9ea69856aaa658ad31cc44eba13249c802c16 source=proxy
time="2019-07-16T13:51:07.944071448+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/dc91cef902f6d6225d4d59c1b560d9ef15bccc5c8e00f5e281235c986792ded0/kata.sock: use of closed network connection" name=kata-proxy pid=27449 sandbox=dc91cef902f6d6225d4d59c1b560d9ef15bccc5c8e00f5e281235c986792ded0 source=proxy
time="2019-07-16T13:57:11.63244811+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/105d98aaaee996b27c39054496bf35c00887273a84b6258d5d29405554db02aa/kata.sock: use of closed network connection" name=kata-proxy pid=17476 sandbox=105d98aaaee996b27c39054496bf35c00887273a84b6258d5d29405554db02aa source=proxy
time="2019-07-16T13:57:16.207418978+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/16b7bbb57a2513702c0aa2eba393e9cba3bd83d2369e043c0989142261cd50d1/kata.sock: use of closed network connection" name=kata-proxy pid=18264 sandbox=16b7bbb57a2513702c0aa2eba393e9cba3bd83d2369e043c0989142261cd50d1 source=proxy
time="2019-07-16T13:57:21.075560001+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/4924e84b64a0123d6afe6bbfd6e5dc6ad0fde72643a3145d92edd0899ae80053/kata.sock: use of closed network connection" name=kata-proxy pid=19069 sandbox=4924e84b64a0123d6afe6bbfd6e5dc6ad0fde72643a3145d92edd0899ae80053 source=proxy
time="2019-07-16T13:57:26.287706637+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/5f312932a12bac552f2f6e5e4d7e1b023c80ee637e6bff7ccb25728def17be5e/kata.sock: use of closed network connection" name=kata-proxy pid=19863 sandbox=5f312932a12bac552f2f6e5e4d7e1b023c80ee637e6bff7ccb25728def17be5e source=proxy
time="2019-07-16T14:13:26.463507159+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/fb7c9ef4cccfa2fd6b13d7d83d510f499e1e28a42290f0a04790ce8d1412fe08/kata.sock: use of closed network connection" name=kata-proxy pid=24523 sandbox=fb7c9ef4cccfa2fd6b13d7d83d510f499e1e28a42290f0a04790ce8d1412fe08 source=proxy
time="2019-07-16T14:13:30.789794947+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b6d90cf34413d3e396b3f77cf9881b5aac95aba96f298cfc1753193e06f56ae9/kata.sock: use of closed network connection" name=kata-proxy pid=25333 sandbox=b6d90cf34413d3e396b3f77cf9881b5aac95aba96f298cfc1753193e06f56ae9 source=proxy
time="2019-07-16T14:13:46.760314474+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874/kata.sock: use of closed network connection" name=kata-proxy pid=26184 sandbox=cb15c8687e84ba3706f523c9fb1afa74cd77afe2e88b7cdde8ffa3d342e7e874 source=proxy
time="2019-07-16T14:19:11.851384692+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928/kata.sock: use of closed network connection" name=kata-proxy pid=34090 sandbox=c061ad1e7bb6b61810a8b5ad1b444c506a3d07754cd453220765bfd2cfe90928 source=proxy
time="2019-07-16T14:19:18.88877423+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b/kata.sock: use of closed network connection" name=kata-proxy pid=34898 sandbox=5ef2731036609b2764d5f905fc1161bb18ca3a79e15e41c9f66d7476098c710b source=proxy
time="2019-07-16T14:19:23.196741489+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8/kata.sock: use of closed network connection" name=kata-proxy pid=35692 sandbox=ac193a5349e0c606e6e94d56b49d576bdb5081a4082a1b9c4b8afaa8010a69a8 source=proxy
time="2019-07-16T14:19:27.343920903+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186/proxy.sock: use of closed network connection" name=kata-proxy pid=36488 sandbox=bd756926bab6806eed75b066586e8f2f42eb2d5f57f51f25082d4df8c09f6186 source=proxy
time="2019-07-16T14:19:32.031792632+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5/kata.sock: use of closed network connection" name=kata-proxy pid=790 sandbox=b599403ca04332573a502e159fef1cb5362d15a86f3237f59209772e4d7443d5 source=proxy
time="2019-07-16T14:19:36.01378607+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949/kata.sock: use of closed network connection" name=kata-proxy pid=1754 sandbox=a1c05a92dcf4106e4fc3a1f44bc1b3933642a78c0eddd78fd18ec179f160d949 source=proxy
time="2019-07-16T14:19:53.032063883+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727/kata.sock: use of closed network connection" name=kata-proxy pid=2691 sandbox=6ccbbf51cdb9a19b79c6828bc50de43db198cac2eaeac3f4c0c34221db095727 source=proxy
time="2019-07-16T15:28:47.664810771+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4/proxy.sock: use of closed network connection" name=kata-proxy pid=19751 sandbox=74ec0eafa423a48a0d5a332dfb2229fb42298d476738910064b19314ad2013c4 source=proxy
time="2019-07-16T15:29:01.51580735+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399/proxy.sock: use of closed network connection" name=kata-proxy pid=20597 sandbox=f3bd0684e3c089fcaf99eb8b55c821e670e76684d213be1beebec346095ea399 source=proxy
time="2019-07-16T16:13:12.400813795+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0/kata.sock: use of closed network connection" name=kata-proxy pid=32481 sandbox=4ad8a4bb11d5205eca4775e360f639292d350eee3aeb9b1d5a3fa93654d328b0 source=proxy
time="2019-07-16T16:13:20.651534385+03:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8/proxy.sock: use of closed network connection" name=kata-proxy pid=33299 sandbox=5d3ca860411720bf1b6229d0d72f287ae929821822df14f9eb729d56fb61f9a8 source=proxy
time="2019-07-16T16:13:25.182794494+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2/kata.sock: use of closed network connection" name=kata-proxy pid=34130 sandbox=a27966c0979a43309a8aee50a8b22b11ae867b7b737d6b4397238e8f1808f7c2 source=proxy
time="2019-07-16T16:13:29.699968913+03:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198/kata.sock: use of closed network connection" name=kata-proxy pid=34929 sandbox=9f2407dfb70a58da6ce923ee074475be9534704ec69c1ce8f382090484dfb198 source=proxy
```

## Shim logs

Recent shim problems found in system journal:

```
time="2019-07-11T12:14:36.732859809+03:00" level=error msg="forward signal failed" container=d1bc23d86f23bc6770d8d7a0bab3425626582c8cd02f319fb595b42fb4ea2006 error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing rpc error: code = DeadlineExceeded desc = timed out connecting to vsock 3965962151:1024\"" exec-id=d1bc23d86f23bc6770d8d7a0bab3425626582c8cd02f319fb595b42fb4ea2006 name=kata-shim pid=1 signal=terminated source=shim
time="2019-07-11T15:20:43.376073892+03:00" level=warning msg="close stdin failed" container=c086a5ac16442286ea8fa8db0310920ca08b582d34d6942c20938e2c2ce374d1 error="rpc error: code = NotFound desc = Process eb0e7f2a-94ba-4750-830e-b60aede0e092 not found (container c086a5ac16442286ea8fa8db0310920ca08b582d34d6942c20938e2c2ce374d1)" exec-id=eb0e7f2a-94ba-4750-830e-b60aede0e092 name=kata-shim pid=5861 source=shim
time="2019-07-11T15:34:37.156929894+03:00" level=warning msg="close stdin failed" container=e1329e2dc8a2d86ee21e50fbe3a17d0053a7423ac17929792e85bfdb48662f63 error="rpc error: code = NotFound desc = Process 5c1d4559-eaa6-4a8d-ae51-0461a8614ee7 not found (container e1329e2dc8a2d86ee21e50fbe3a17d0053a7423ac17929792e85bfdb48662f63)" exec-id=5c1d4559-eaa6-4a8d-ae51-0461a8614ee7 name=kata-shim pid=1353 source=shim
time="2019-07-15T17:02:44.523900384+03:00" level=error msg="forward signal failed" container=337b260e51b24182b628a7adb4e3c6b6bd6cf915386777402de9e013f64e9e1a error="rpc error: code = Unavailable desc = transport is closing" exec-id=337b260e51b24182b628a7adb4e3c6b6bd6cf915386777402de9e013f64e9e1a name=kata-shim pid=1 signal=terminated source=shim
time="2019-07-15T17:02:44.523961808+03:00" level=error msg="forward signal failed" container=425e5b2e41ad2d115889197695e3f450916f01810e916fdccb379bec04328ed3 error="rpc error: code = Unavailable desc = transport is closing" exec-id=425e5b2e41ad2d115889197695e3f450916f01810e916fdccb379bec04328ed3 name=kata-shim pid=1 signal=terminated source=shim
time="2019-07-15T17:02:44.523905573+03:00" level=error msg="forward signal failed" container=a3af87d0dbc605ec73cfbf4751e6919e65460b2d17f823b8a0533bb2cda731cb error="rpc error: code = Unavailable desc = transport is closing" exec-id=a3af87d0dbc605ec73cfbf4751e6919e65460b2d17f823b8a0533bb2cda731cb name=kata-shim pid=1 signal=terminated source=shim
time="2019-07-15T17:02:44.523986593+03:00" level=error msg="forward signal failed" container=6c38991f6c6efdc48154d7e680b766f4fefaacae8a69480decd8df97cb86febc error="rpc error: code = Unavailable desc = transport is closing" exec-id=6c38991f6c6efdc48154d7e680b766f4fefaacae8a69480decd8df97cb86febc name=kata-shim pid=1 signal=terminated source=shim
time="2019-07-16T11:26:40.468006644+03:00" level=error msg="forward signal failed" container=c86afb97e323e60913a3044ccf9e58287948b6a9f8c766071db4413082fc6005 error="rpc error: code = Unavailable desc = transport is closing" exec-id=c86afb97e323e60913a3044ccf9e58287948b6a9f8c766071db4413082fc6005 name=kata-shim pid=1 signal=terminated source=shim
```

## Throttler logs

No recent throttler problems found in system journal.

---

# Container manager details

No `docker`
Have `kubectl`

## Kubernetes

Output of "`kubectl version`":

```
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

Output of "`kubectl config view`":

```
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
```

Output of "`systemctl show kubelet`":

```
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=24655
ExecMainStartTimestamp=Mon 2019-07-15 15:29:20 EEST
ExecMainStartTimestampMonotonic=253064406381
ExecMainExitTimestamp=Mon 2019-07-15 17:02:39 EEST
ExecMainExitTimestampMonotonic=258663648267
ExecMainPID=20324
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=191506113136
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=yes
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=5529
IPAccounting=no
Environment=[unprintable] [unprintable] KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFile=/etc/default/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=95685
LimitNPROCSoft=95685
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=95685
LimitSIGPENDINGSoft=95685
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=basic.target system.slice sysinit.target systemd-journald.socket
Documentation=https://kubernetes.io/docs/home/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/0-crio.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/11-cgroups.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Mon 2019-07-15 17:02:39 EEST
StateChangeTimestampMonotonic=258663648383
InactiveExitTimestamp=Mon 2019-07-15 15:29:20 EEST
InactiveExitTimestampMonotonic=253064406437
ActiveEnterTimestamp=Mon 2019-07-15 15:29:20 EEST
ActiveEnterTimestampMonotonic=253064406437
ActiveExitTimestamp=Mon 2019-07-15 17:02:39 EEST
ActiveExitTimestampMonotonic=258663611209
InactiveEnterTimestamp=Mon 2019-07-15 17:02:39 EEST
InactiveEnterTimestampMonotonic=258663648383
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2019-07-15 15:29:20 EEST
ConditionTimestampMonotonic=253064404717
AssertTimestamp=Mon 2019-07-15 15:29:20 EEST
AssertTimestampMonotonic=253064404718
Transient=no
Perpetual=no
StartLimitIntervalUSec=0
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=9e46be852a98435d92a9bef207c788ab
CollectMode=inactive
```

Have `crio`

## crio

Output of "`crio --version`":

```
crio version 1.14.3-dev
commit: "615c561b67ac140c1d155d9dc25767a0e81ce433"
```

Output of "`systemctl show crio`":

```
Type=simple
Restart=on-failure
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Tue 2019-07-16 14:17:58 EEST
WatchdogTimestampMonotonic=335182886442
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=33546
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Tue 2019-07-16 14:17:58 EEST
ExecMainStartTimestampMonotonic=335182886381
ExecMainExitTimestampMonotonic=0
ExecMainPID=33546
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/local/bin/crio ; argv[]=/usr/local/bin/crio ; ignore_errors=no ; start_time=[Tue 2019-07-16 14:17:58 EEST] ; stop_time=[n/a] ; pid=33546 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/crio.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=75
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=5529
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=95685
LimitNPROCSoft=95685
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=95685
LimitSIGPENDINGSoft=95685
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=crio.service
Names=crio.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=system.slice basic.target sysinit.target systemd-journald.socket
Documentation=https://github.com/cri-o/cri-o
Description=OCI-based implementation of Kubernetes Container Runtime Interface
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/etc/systemd/system/crio.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2019-07-16 14:17:58 EEST
StateChangeTimestampMonotonic=335182886444
InactiveExitTimestamp=Tue 2019-07-16 14:17:58 EEST
InactiveExitTimestampMonotonic=335182886444
ActiveEnterTimestamp=Tue 2019-07-16 14:17:58 EEST
ActiveEnterTimestampMonotonic=335182886444
ActiveExitTimestamp=Tue 2019-07-16 14:16:34 EEST
ActiveExitTimestampMonotonic=335098359615
InactiveEnterTimestamp=Tue 2019-07-16 14:16:34 EEST
InactiveEnterTimestampMonotonic=335098384475
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2019-07-16 14:17:58 EEST
ConditionTimestampMonotonic=335182884521
AssertTimestamp=Tue 2019-07-16 14:17:58 EEST
AssertTimestampMonotonic=335182884522
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=f8d0e26597664adfa2c2d1134d883d90
CollectMode=inactive
```

Output of "`cat /etc/crio/crio.conf`":

```
[crio]
root = "/var/lib/containers/storage" # directory where the reference to the images are stored
runroot = "/var/run/containers/storage" # directory where the layer for new containers is created
file_locking = false
file_locking_path = "/run/crio.lock"
storage_driver = "devicemapper"

# List to pass options to the storage driver. Please refer to
# containers-storage.conf(5) to see all available storage options.
storage_option = [
  "dm.directlvm_device=/dev/sda5", # CHANGE ACCORDING TO YOUR PHYSICAL VOLUME
  "dm.directlvm_device_force=true",
  "dm.thinp_percent=95",
  "dm.thinp_metapercent=1",
  "dm.thinp_autoextend_threshold=80",
  "dm.thinp_autoextend_percent=20"
]

[crio.api]
listen = "/var/run/crio/crio.sock"
stream_address = "127.0.0.1"
stream_port = "0"

# Enable encrypted TLS transport of the stream server.
stream_enable_tls = false

# Path to the x509 certificate file used to serve the encrypted stream. This
# file can change, and CRI-O will automatically pick up the changes within 5
# minutes.
stream_tls_cert = ""

# Path to the key file used to serve the encrypted stream. This file can
# change, and CRI-O will automatically pick up the changes within 5 minutes.
stream_tls_key = ""

# Path to the x509 CA(s) file used to verify and authenticate client
# communication with the encrypted stream. This file can change, and CRI-O will
# automatically pick up the changes within 5 minutes.
stream_tls_ca = ""

# Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_send_msg_size = 16777216

# Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_recv_msg_size = 16777216

[crio.runtime]
manage_network_ns_lifecycle = true
default_runtime = "runc"
no_pivot = false
conmon = "/usr/local/libexec/crio/conmon" # safe to check this path exists
conmon_env = [
  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
]
selinux = false
seccomp_profile = "/etc/crio/seccomp.json"
apparmor_profile = "crio-default"
cgroup_manager = "cgroupsfs"
default_capabilities = [
  "CHOWN",
  "DAC_OVERRIDE",
  "FSETID",
  "FOWNER",
  "NET_RAW",
  "SETGID",
  "SETUID",
  "SETPCAP",
  "NET_BIND_SERVICE",
  "SYS_CHROOT",
  "KILL",
]
pids_limit = 1024
log_size_max = -1

# Path to directory in which container exit files are written to by conmon.
container_exits_dir = "/var/run/crio/exits"

# Path to directory for container attach sockets.
container_attach_socket_dir = "/var/run/crio"

# If set to true, all containers will run in read-only mode.
read_only = false

# Changes the verbosity of the logs based on the level it is set to. Options
# are fatal, panic, error, warn, info, and debug.
log_level = "error"

# The UID mappings for the user namespace of each container. A range is
# specified in the form containerUID:HostUID:Size. Multiple ranges must be
# separated by comma.
uid_mappings = ""

# The GID mappings for the user namespace of each container. A range is
# specified in the form containerGID:HostGID:Size. Multiple ranges must be
# separated by comma.
gid_mappings = ""

# The minimal amount of time in seconds to wait before issuing a timeout
# regarding the proper termination of the container.
ctr_stop_timeout = 0

# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
# of trust of the workload.
[crio.runtime.runtimes.runc]
runtime_path = "/usr/sbin/runc"

# Be careful that these 3 paths actually match on disk
[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/kata-runtime"

[crio.runtime.runtimes.kata-fc]
runtime_path = "/usr/bin/kata-fc"

[crio.image]
default_transport = "docker://"
pause_image = "k8s.gcr.io/pause:3.1"
pause_command = "/pause"
signature_policy = ""
image_volumes = "mkdir"

# CHANGE THE INSECURE REGS ACCORDING TO YOUR OWN SETUP
insecure_registries = [
  "192.168.1.99:2501",
]
registries = [
  "docker.io",
  "registry-1.docker.io",
  "index.docker.io",
]

[crio.network]
network_dir = "/etc/cni/net.d"
plugin_dir = "/opt/cni/bin"
```

No `containerd`

---

# Packages

Have `dpkg`
Output of "`dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":

```
ii kata-containers-image 1.7.0-25 amd64 Kata containers image
ii kata-ksm-throttler 1.7.0-28 amd64
ii kata-linux-container 4.19.28.40-28 amd64 linux kernel optimised for container-like workloads.
ii kata-proxy 1.7.0-26 amd64
ii kata-runtime 1.7.0-34 amd64
ii kata-shim 1.7.0-24 amd64
ii qemu-block-extra:amd64 1:2.11+dfsg-1ubuntu7.14 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-lite 2.11.0+git.87517afd72-29 amd64 linux kernel optimised for container-like workloads.
ii qemu-utils 1:2.11+dfsg-1ubuntu7.14 amd64 QEMU utilities
ii qemu-vanilla 2.11.2+git.0982a56a55-29 amd64 linux kernel optimised for container-like workloads.
```

Have `rpm`
Output of "`rpm -qa|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":

```
```

---
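For completeness, since the `crio.conf` above only shows `runtime_path = "/usr/bin/kata-fc"`: that path is a small wrapper script rather than the runtime binary itself, following the usual pattern from the Kata how-tos. A minimal sketch of what such a wrapper looks like (the configuration path here is an assumption; point it at wherever your Firecracker `configuration.toml` actually lives):

```bash
#!/bin/bash
# /usr/bin/kata-fc: run kata-runtime with the Firecracker-specific
# configuration instead of the default QEMU one.
# NOTE: the config path below is an assumption; adjust it to your install.
exec /usr/bin/kata-runtime --kata-config \
    "/usr/share/defaults/kata-containers/configuration-fc.toml" "$@"
```

CRI-O then routes any pod whose runtime handler is `kata-fc` through this script, so the same test flow can target QEMU (`kata`) or Firecracker (`kata-fc`) just by switching the handler.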
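The reproduction itself is just two back-to-back create-start-delete cycles through `crictl`. This is not the exact script referenced in the description (the YAML file names are placeholders), only a minimal sketch of the same shape:

```bash
#!/bin/bash
# Two successive create -> start -> stop -> delete cycles against CRI-O.
# In our environment this completes with the kata (QEMU) handler but
# hangs indefinitely with kata-fc (Firecracker).
set -ex
for i in 1 2; do
    pod_id=$(crictl runp --runtime kata-fc pod.yaml)           # create pod sandbox
    ctr_id=$(crictl create "$pod_id" container.yaml pod.yaml)  # create container in it
    crictl start "$ctr_id"                                     # start the container
    crictl stopp "$pod_id"                                     # stop the whole pod
    crictl rmp "$pod_id"                                       # remove the pod sandbox
done
```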
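Finally, given the repeated `open /run/vc/sbs/<sandbox-id>/devices.json: no such file or directory` warnings in the runtime log above, this is how we check whether a failed delete leaves sandbox state or processes behind between iterations (a diagnostic sketch; the paths are the ones that appear in the logs):

```bash
#!/bin/bash
# Show kata sandbox state left on the host after a create/delete cycle.
for dir in /run/vc/sbs /run/vc/vm; do
    echo "== $dir =="
    # Any surviving sandbox shows up as a directory named by its ID.
    ls -l "$dir" 2>/dev/null || echo "(empty or missing)"
done
# Leftover VMM/proxy/shim processes identify the sandbox that got stuck.
pgrep -af 'firecracker|kata-proxy|kata-shim' || echo "no kata processes running"
```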