Status: Open. davidhay1969 opened 3 years ago:
As above, I have raised PR 2077 to cover the typo issues in the Developer Guide.
@GabyCT not sure that this should be closed; my PR was merely to fix a small number of typos. The main issue that I raised is still, in my view, outstanding, i.e. the debugging console behaves inconsistently. Should we reopen this?
@davidhay1969, it was automatically closed because the PR was merged. Let me re-open this one.
Description of problem
Running: -

- `kata-runtime` 2.1.1
- `qemu-system-x86_64` 5.2.0
- `containerd` 1.4.4-0ubuntu1~20.04.2
I'm trying and failing to consistently see output / diagnostics from the `kata-agent` running inside the pod sandbox / guest VM. I'm following the guidance in the Developer Guide.
After much experimentation, I think I need to "Enable agent debugging" as per the following: -

`vi $kata_configuration_file`

and remove the comment from `enable_debug = true`, and likewise remove the comment from `debug_console_enabled = true`, then start a workload, e.g. `kubectl apply -f nginx-kata.yaml`.
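For reference, the two uncomment edits can be sketched as a small script. This is only an illustration: it runs against a throwaway copy of the relevant lines rather than the real file (on my host that is `/opt/kata/share/defaults/kata-containers/configuration-qemu.toml`), and the `sed` patterns should be treated as illustrative rather than authoritative: -

```shell
#!/bin/sh
# Illustrative only: apply the uncomment step to a throwaway copy of the
# two relevant lines, so this snippet is self-contained. On my host the
# real file is /opt/kata/share/defaults/kata-containers/configuration-qemu.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[agent.kata]
#enable_debug = true
#debug_console_enabled = true
EOF

# Strip the leading '#' from the two settings. Note: against the full config
# this naive pattern would also uncomment enable_debug in other sections
# ([hypervisor.qemu], [runtime], ...), so hand-editing is safer there.
sed -i -e 's/^#enable_debug = true/enable_debug = true/' \
       -e 's/^#debug_console_enabled = true/debug_console_enabled = true/' \
       "$cfg"

grep -E '^(enable_debug|debug_console_enabled)' "$cfg"
```

With the real file, editing the `[agent.kata]` section by hand in `vi`, as above, avoids the multiple-match problem.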
This does appear to work, returning debug such as: -
However, this is inconsistent, in that I don't always see output from the `socat` command. Reading Connect to the virtual machine using the debug console, the directions in the Enabling debug console for QEMU section advise one to: -
which appears to duplicate the `debug_console_enabled = true` setting? I did note a few typos in that page, including: -
which should read: -
i.e. the variable is missing the preceding `$` symbol and there's no trailing `'` at the end of the command. I'll create a PR to (hopefully) resolve that (the missing `$` symbol is an issue in 3-4 places).

Final comment, regarding Start kata-monitor - ONLY NEEDED FOR 2.0.x: am I right in believing that `kata-monitor` is NOT required for Kata 2.0 and beyond, e.g. 2.1.1 etc.?

Expected result
A clear and consistent set of instructions to enable agent / pod sandbox / guest debugging, perhaps with examples.
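As an example of the kind of thing that would help, here is roughly what I pieced together; the sandbox ID is a placeholder, and the console socket path is my assumption based on the Developer Guide's QEMU layout: -

```shell
#!/bin/sh
# Illustrative sketch: build the socat command for the guest debug console.
# The sandbox ID is a placeholder; the socket path follows the QEMU layout
# described in the Developer Guide.
sandbox_id="<your-sandbox-id>"
console_sock="/var/run/vc/vm/${sandbox_id}/console.sock"

# Printed rather than executed, since it only works on a host with a
# running Kata sandbox:
cmd="socat stdin,raw,echo=0,escape=0x11 unix-connect:${console_sock}"
echo "$cmd"
```

The shipped configuration also notes that, with the debug console enabled, one can connect via the `kata-runtime exec` command, which may be the simpler route on 2.x.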
Actual result
As per the above, inconsistent results from `socat` when debugging the agent / pod sandbox.

Further information
Output from `/opt/kata/bin/kata-collect-data.sh`: -
# Meta details

Running `kata-collect-data.sh` version `2.1.1 (commit 0e2be438bdd6d213ac4a3d7d300a5757c4137799)` at `2021-06-21.07:49:06.680474524-0700`.

---
Runtime
Runtime is `/usr/bin/kata-runtime`.

# `kata-env`
---
/usr/bin/kata-runtime kata-env
```toml
[Meta]
Version = "1.0.25"

[Runtime]
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false
Path = "/opt/kata/bin/kata-runtime"

[Runtime.Version]
OCI = "1.0.1-dev"

[Runtime.Version.Version]
Semver = "2.1.1"
Major = 2
Minor = 1
Patch = 1
Commit = "0e2be438bdd6d213ac4a3d7d300a5757c4137799"

[Runtime.Config]
Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"

[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 5.2.0 (kata-static)\nCopyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers"
Path = "/opt/kata/bin/qemu-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
SharedFS = "virtio-fs"
VirtioFSDaemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
Msize9p = 8192
MemorySlots = 10
PCIeRootPort = 0
HotplugVFIOOnRootBus = false
Debug = false

[Image]
Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_2.1.1_agent_0e2be438bd.img"

[Kernel]
Path = "/opt/kata/share/kata-containers/vmlinux-5.10.25-85"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none"

[Initrd]
Path = ""

[Agent]
Debug = false
Trace = false
TraceMode = ""
TraceType = ""

[Host]
Kernel = "5.4.0-72-generic"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = true

[Host.Distro]
Name = "Ubuntu"
Version = "20.04"

[Host.CPU]
Vendor = "AuthenticAMD"
Model = "AMD EPYC Processor (with IBPB)"
CPUs = 2

[Host.Memory]
Total = 4030548
Free = 257444
Available = 3237104

[Netmon]
Path = "/opt/kata/libexec/kata-containers/kata-netmon"
Debug = false
Enable = false

[Netmon.Version]
Semver = "2.1.1"
Major = 2
Minor = 1
Patch = 1
Commit = "<>"
```
Runtime config files
# Runtime config files

## Runtime default config files

```
/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
```

## Runtime config file contents

Config file `/etc/kata-containers/configuration.toml` not found
Config file `/usr/share/defaults/kata-containers/configuration.toml` not found
---
cat "/opt/kata/share/defaults/kata-containers/configuration.toml"
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinux.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = []

# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/bin/qemu-system-x86_64"]
valid_hypervisor_paths = ["/opt/kata/bin/qemu-system-x86_64"]

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"
cpu_features="pmu=off"

# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
# NOTICE: on arm platform with gicv2 interrupt controller, set it to 8.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0

# Specifies virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
shared_fs = "virtio-fs"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/libexec/kata-qemu/virtiofsd"

# List of valid annotations values for the virtiofs daemon
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/libexec/kata-qemu/virtiofsd"]
valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/kata-qemu/virtiofsd"]

# Default size of DAX cache in MiB
virtio_fs_cache_size = 0

# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = ["--thread-pool-size=1"]

# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "auto"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
# enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false

# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices to live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"

# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's
# command line: intel_iommu=on,iommu=pt
#enable_iommu = true

# Enable IOMMU_PLATFORM, default false
# Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true

# List of valid annotations values for the vhost user store path
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# List of valid annotations values for the file_mem_backend annotation
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: [""]
valid_file_mem_backends = [""]

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# -pflash can add image file to VM. The arguments of it should be in format
# of ["/path/to/flash0.img", "/path/to/flash1.img"]
pflashes = []

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default is false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# List of valid annotations values for entropy_source
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/dev/urandom","/dev/random",""]
valid_entropy_sources = ["/dev/urandom","/dev/random",""]

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
#
# Use rx Rate Limiter to control network I/O inbound bandwidth(size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) to discipline traffic.
# Default 0-sized value means unlimited rate.
#rx_rate_limiter_max_rate = 0

# Use tx Rate Limiter to control network I/O outbound bandwidth(size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) and ifb(Intermediate Functional Block)
# to discipline traffic.
# Default 0-sized value means unlimited rate.
#tx_rate_limiter_max_rate = 0

# Set where to save the guest memory dump file.
# If set, when GUEST_PANICKED event occurred,
# guest memeory will be dumped to host filesystem under guest_memory_dump_path,
# This directory will be created automatically if it does not exist.
#
# The dumped file(also called vmcore) can be processed with crash or gdb.
#
# WARNING:
# Dump guest’s memory can take very long depending on the amount of guest memory
# and use much disk space.
#guest_memory_dump_path="/var/crash/kata"

# If enable paging.
# Basically, if you want to use "gdb" rather than "crash",
# or need the guest-virtual addresses in the ELF vmcore,
# then you should enable paging.
#
# See: https://www.qemu.org/docs/master/qemu-qmp-ref.html#Dump-guest-memory for details
#guest_memory_dump_paging=false

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't met the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]

# Enable debug console.
# If enabled, user can connect guest OS running inside hypervisor
# through "kata-runtime exec" command
#debug_console_enabled = true
# Agent connection dialing timeout value in seconds
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used when customize network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# Set the full url to the Jaeger HTTP Thrift collector.
# The default if not set will be "http://localhost:14268/api/traces"
#jaeger_endpoint = ""
# Sets the username to be used if basic auth is required for Jaeger.
#jaeger_user = ""
# Sets the password to be used if basic auth is required for Jaeger.
#jaeger_password = ""
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# If specified, sandbox_bind_mounts identifieds host paths to be mounted (ro) into the sandboxes shared path.
# This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory.
# If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts`
# These will not be exposed to the container workloads, and are only provided for potential guest services.
sandbox_bind_mounts=[]
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
# If enabled, user can run pprof tools with shim v2 process through kata-monitor.
# (default: false)
# enable_pprof = true
```
Containerd shim v2
Containerd shim v2 is `/usr/bin/containerd-shim-kata-v2`.
---
containerd-shim-kata-v2 --version
```
Kata Containers containerd shim: id: "io.containerd.kata.v2", version: 2.1.1, commit: 0e2be438bdd6d213ac4a3d7d300a5757c4137799
```
KSM throttler
# KSM throttler

## version

## systemd service
Image details
# Image details

```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/kata-containers/tools/osbuilder"
  version: "2.1.1-0e2be438bdd6d213ac4a3d7d300a5757c4137799"
rootfs-creation-time: "2021-06-11T20:55:26.306881742+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "34730"
  packages:
    default:
      - "chrony"
      - "iptables-bin"
      - "kmod-bin"
      - "libudev0-shim"
      - "systemd"
      - "util-linux-bin"
    extra:
agent:
  url: "https://github.com/kata-containers/kata-containers"
  name: "kata-agent"
  version: "2.1.1"
  agent-is-init-daemon: "no"
```

---
Initrd details
# Initrd details

No initrd

---
Logfiles
# Logfiles

## Runtime logs
## Throttler logs
## Kata Containerd Shim v2 logs
---
Runtime logs
No recent runtime problems found in system journal.
Throttler logs
No recent throttler problems found in system journal.
Kata Containerd Shim v2
Recent problems found in system journal:

```
time="2021-06-21T03:14:09.744803462-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=576431 sandbox=1963f76ded46f731a7dda902994f43f3131a73bc658a2e7048dacec75c2d75e6 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:14:09.795481134-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=576431 sandbox=1963f76ded46f731a7dda902994f43f3131a73bc658a2e7048dacec75c2d75e6 source=containerd-kata-shim-v2
time="2021-06-21T03:14:09.802253445-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=576431 sandbox=1963f76ded46f731a7dda902994f43f3131a73bc658a2e7048dacec75c2d75e6 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:15:47.150568327-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=577850 sandbox=d113730116ee795bf9e6b0caa661b7147c0e03edcd17d3fd117eea538625f56a source=virtcontainers subsystem=sandbox
time="2021-06-21T03:15:48.925867842-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=577850 sandbox=d113730116ee795bf9e6b0caa661b7147c0e03edcd17d3fd117eea538625f56a source=virtcontainers subsystem=sandbox
time="2021-06-21T03:16:08.219645248-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=577850 sandbox=d113730116ee795bf9e6b0caa661b7147c0e03edcd17d3fd117eea538625f56a source=virtcontainers subsystem=sandbox
time="2021-06-21T03:16:08.264311389-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=577850 sandbox=d113730116ee795bf9e6b0caa661b7147c0e03edcd17d3fd117eea538625f56a source=containerd-kata-shim-v2
time="2021-06-21T03:16:08.272357372-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=577850 sandbox=d113730116ee795bf9e6b0caa661b7147c0e03edcd17d3fd117eea538625f56a source=virtcontainers subsystem=sandbox
time="2021-06-21T03:26:06.375259701-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=583869 sandbox=e63eab6ed1ba5429ee4130b561c04eee3d5a62c61d8ae22e0cc912561cad77d7 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:26:07.957491141-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=583869 sandbox=e63eab6ed1ba5429ee4130b561c04eee3d5a62c61d8ae22e0cc912561cad77d7 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:28:44.107215322-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=583869 sandbox=e63eab6ed1ba5429ee4130b561c04eee3d5a62c61d8ae22e0cc912561cad77d7 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:28:44.162568056-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=583869 sandbox=e63eab6ed1ba5429ee4130b561c04eee3d5a62c61d8ae22e0cc912561cad77d7 source=containerd-kata-shim-v2
time="2021-06-21T03:28:44.173857943-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=583869 sandbox=e63eab6ed1ba5429ee4130b561c04eee3d5a62c61d8ae22e0cc912561cad77d7 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:29:11.41442994-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=585933 sandbox=5d8d444ba8fd2a74a6467e2a0a2791f635f056949e2a7e0583aa51aa57668b70 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:29:13.24433068-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=585933 sandbox=5d8d444ba8fd2a74a6467e2a0a2791f635f056949e2a7e0583aa51aa57668b70 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:30:33.165185381-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=585933 sandbox=5d8d444ba8fd2a74a6467e2a0a2791f635f056949e2a7e0583aa51aa57668b70 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:30:33.21058195-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=585933 sandbox=5d8d444ba8fd2a74a6467e2a0a2791f635f056949e2a7e0583aa51aa57668b70 source=containerd-kata-shim-v2
time="2021-06-21T03:30:33.214646276-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=585933 sandbox=5d8d444ba8fd2a74a6467e2a0a2791f635f056949e2a7e0583aa51aa57668b70 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:30:53.836687272-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=587322 sandbox=5cd3c4ab0c038355ccba46f347c37d0f6ec20c17d57f059a1e7bfc91d852d95d source=virtcontainers subsystem=sandbox
time="2021-06-21T03:30:55.45881196-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=587322 sandbox=5cd3c4ab0c038355ccba46f347c37d0f6ec20c17d57f059a1e7bfc91d852d95d source=virtcontainers subsystem=sandbox
time="2021-06-21T03:32:35.904917206-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=587322 sandbox=5cd3c4ab0c038355ccba46f347c37d0f6ec20c17d57f059a1e7bfc91d852d95d source=virtcontainers subsystem=sandbox
time="2021-06-21T03:32:35.950721534-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=587322 sandbox=5cd3c4ab0c038355ccba46f347c37d0f6ec20c17d57f059a1e7bfc91d852d95d source=containerd-kata-shim-v2
time="2021-06-21T03:32:35.957575705-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=587322 sandbox=5cd3c4ab0c038355ccba46f347c37d0f6ec20c17d57f059a1e7bfc91d852d95d source=virtcontainers subsystem=sandbox
time="2021-06-21T03:33:00.989943624-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:33:02.701728738-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:36:29.907456728-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:36:29.965703123-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=containerd-kata-shim-v2
time="2021-06-21T03:36:29.966455935-07:00" level=warning msg="Agent did not stop sandbox" error="write vsock host(2):1025800027->vm(1651748194):1024: broken pipe" name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b sandboxid=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:36:29.971045582-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=588818 sandbox=2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:36:49.154297638-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:36:50.849245611-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:37:11.209365962-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:37:11.288307917-07:00" level=warning msg="Agent did not stop sandbox" error="ttrpc: closed" name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 sandboxid=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:37:11.288405248-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=containerd-kata-shim-v2
time="2021-06-21T03:37:11.296104689-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=591427 sandbox=528c1ce17aa63f0a334ca3f8f0ac5d2d3cf1fe58b8571d16b17a5587ecdd9779 source=virtcontainers subsystem=sandbox
time="2021-06-21T03:38:21.128812992-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=592689 sandbox=babc205370189292953b562e71fd75dbbd087f841c0e637e7755f5365a64067b source=virtcontainers subsystem=sandbox
time="2021-06-21T03:38:22.568317597-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=592689
sandbox=babc205370189292953b562e71fd75dbbd087f841c0e637e7755f5365a64067b source=virtcontainers subsystem=sandbox time="2021-06-21T03:41:07.36922657-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=592689 sandbox=babc205370189292953b562e71fd75dbbd087f841c0e637e7755f5365a64067b source=virtcontainers subsystem=sandbox time="2021-06-21T03:41:07.421370137-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=592689 sandbox=babc205370189292953b562e71fd75dbbd087f841c0e637e7755f5365a64067b source=containerd-kata-shim-v2 time="2021-06-21T03:41:07.426795794-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=592689 sandbox=babc205370189292953b562e71fd75dbbd087f841c0e637e7755f5365a64067b source=virtcontainers subsystem=sandbox time="2021-06-21T03:41:52.275830288-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=595059 sandbox=9d27dc51ea33e282de5a4dd0ba63f8f3736b25731b6aad0c7f477c1fc56ada03 source=virtcontainers subsystem=sandbox time="2021-06-21T03:41:53.957872457-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=595059 sandbox=9d27dc51ea33e282de5a4dd0ba63f8f3736b25731b6aad0c7f477c1fc56ada03 source=virtcontainers subsystem=sandbox time="2021-06-21T03:55:14.238369715-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=595059 sandbox=9d27dc51ea33e282de5a4dd0ba63f8f3736b25731b6aad0c7f477c1fc56ada03 source=virtcontainers subsystem=sandbox time="2021-06-21T03:55:14.299814655-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=595059 sandbox=9d27dc51ea33e282de5a4dd0ba63f8f3736b25731b6aad0c7f477c1fc56ada03 source=containerd-kata-shim-v2 
time="2021-06-21T03:55:14.306262864-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=595059 sandbox=9d27dc51ea33e282de5a4dd0ba63f8f3736b25731b6aad0c7f477c1fc56ada03 source=virtcontainers subsystem=sandbox time="2021-06-21T03:56:24.652827599-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=603758 sandbox=0221b117a7abb7f4be232caedd485298b21a7027595646bab9ad2bc488c0f62b source=virtcontainers subsystem=sandbox time="2021-06-21T03:56:26.277318069-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=603758 sandbox=0221b117a7abb7f4be232caedd485298b21a7027595646bab9ad2bc488c0f62b source=virtcontainers subsystem=sandbox time="2021-06-21T07:45:30.047668365-07:00" level=warning msg="sandbox's cgroup won't be updated: cgroup path is empty" name=containerd-shim-v2 pid=603758 sandbox=0221b117a7abb7f4be232caedd485298b21a7027595646bab9ad2bc488c0f62b source=virtcontainers subsystem=sandbox time="2021-06-21T07:45:30.093074714-07:00" level=warning msg="failed to get OOM event from sandbox" error="rpc error: code = Internal desc = " name=containerd-shim-v2 pid=603758 sandbox=0221b117a7abb7f4be232caedd485298b21a7027595646bab9ad2bc488c0f62b source=containerd-kata-shim-v2 time="2021-06-21T07:45:30.098749099-07:00" level=warning msg="sandbox cgroups path is empty" name=containerd-shim-v2 pid=603758 sandbox=0221b117a7abb7f4be232caedd485298b21a7027595646bab9ad2bc488c0f62b source=virtcontainers subsystem=sandbox ```
# Container manager details
---
## Kubernetes
kubectl version
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
kubectl config view
```
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```
systemctl show kubelet
``` Type=simple Restart=always NotifyAccess=none RestartUSec=10s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s TimeoutAbortUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=67332 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success ReloadResult=success CleanResult=success UID=[not set] GID=[not set] NRestarts=29 OOMPolicy=stop ExecMainStartTimestamp=Sun 2021-06-20 12:51:05 PDT ExecMainStartTimestampMonotonic=169875960082 ExecMainExitTimestampMonotonic=0 ExecMainPID=67332 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[Sun 2021-06-20 12:51:05 PDT] ; stop_time=[n/a] ; pid=67332 ; code=(null) ; status=0/0 } ExecStartEx={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; flags= ; start_time=[Sun 2021-06-20 12:51:05 PDT] ; stop_time=[n/a] ; pid=67332 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/kubelet.service MemoryCurrent=45662208 CPUUsageNSec=[not set] EffectiveCPUs= EffectiveMemoryNodes= TasksCurrent=16 IPIngressBytes=[no data] IPIngressPackets=[no data] IPEgressBytes=[no data] IPEgressPackets=[no data] IOReadBytes=18446744073709551615 IOReadOperations=18446744073709551615 IOWriteBytes=18446744073709551615 IOWriteOperations=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity CPUQuotaPeriodUSec=infinity AllowedCPUs= AllowedMemoryNodes= IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes DefaultMemoryLow=0 DefaultMemoryMin=0 
MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=4618 IPAccounting=no Environment=[unprintable] KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml EnvironmentFiles=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes) EnvironmentFiles=/etc/default/kubelet (ignore_errors=yes) UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=0 LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=524288 LimitNOFILESoft=1024 LimitAS=infinity LimitASSoft=infinity LimitNPROC=15394 LimitNPROCSoft=15394 LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=15394 LimitSIGPENDINGSoft=15394 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 CPUAffinity= CPUAffinityFromNUMA=no NUMAPolicy=n/a NUMAMask= TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease 
cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectKernelLogs=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 TimeoutCleanUSec=infinity MemoryDenyWriteExecute=no RestrictRealtime=no RestrictSUIDSGID=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private ProtectHostname=no KillMode=control-group KillSignal=15 RestartKillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=kubelet.service Names=kubelet.service Requires=sysinit.target system.slice Wants=network-online.target WantedBy=multi-user.target Conflicts=shutdown.target Before=multi-user.target shutdown.target After=systemd-journald.socket network-online.target sysinit.target basic.target system.slice Documentation=https://kubernetes.io/docs/home/ Description=kubelet: The Kubernetes Node Agent LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/kubelet.service DropInPaths=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Sun 2021-06-20 12:51:05 PDT StateChangeTimestampMonotonic=169875960678 InactiveExitTimestamp=Sun 2021-06-20 12:51:05 PDT InactiveExitTimestampMonotonic=169875960678 ActiveEnterTimestamp=Sun 2021-06-20 12:51:05 PDT ActiveEnterTimestampMonotonic=169875960678 ActiveExitTimestamp=Sun 2021-06-20 12:50:59 PDT ActiveExitTimestampMonotonic=169870098626 InactiveEnterTimestamp=Sun 2021-06-20 12:51:05 PDT 
InactiveEnterTimestampMonotonic=169875585559 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Sun 2021-06-20 12:51:05 PDT ConditionTimestampMonotonic=169875958407 AssertTimestamp=Sun 2021-06-20 12:51:05 PDT AssertTimestampMonotonic=169875958408 Transient=no Perpetual=no StartLimitIntervalUSec=0 StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=98129aa46bc94af18f3508420ea105e7 CollectMode=inactive ```
## containerd
containerd --version
```
containerd github.com/containerd/containerd 1.4.4-0ubuntu1~20.04.2
```
systemctl show containerd
``` Type=notify Restart=always NotifyAccess=main RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s TimeoutAbortUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=67870 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success ReloadResult=success CleanResult=success UID=[not set] GID=[not set] NRestarts=0 OOMPolicy=continue ExecMainStartTimestamp=Sun 2021-06-20 12:53:58 PDT ExecMainStartTimestampMonotonic=170048571025 ExecMainExitTimestampMonotonic=0 ExecMainPID=67870 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Sun 2021-06-20 12:53:58 PDT] ; stop_time=[Sun 2021-06-20 12:53:58 PDT] ; pid=67869 ; code=exited ; status=0 } ExecStartPreEx={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; flags=ignore-failure ; start_time=[Sun 2021-06-20 12:53:58 PDT] ; stop_time=[Sun 2021-06-20 12:53:58 PDT] ; pid=67869 ; code=exited ; status=0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Sun 2021-06-20 12:53:58 PDT] ; stop_time=[n/a] ; pid=67870 ; code=(null) ; status=0/0 } ExecStartEx={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; flags= ; start_time=[Sun 2021-06-20 12:53:58 PDT] ; stop_time=[n/a] ; pid=67870 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=1137856512 CPUUsageNSec=[not set] EffectiveCPUs= EffectiveMemoryNodes= TasksCurrent=176 IPIngressBytes=[no data] IPIngressPackets=[no data] IPEgressBytes=[no data] IPEgressPackets=[no data] IOReadBytes=18446744073709551615 IOReadOperations=18446744073709551615 IOWriteBytes=18446744073709551615 IOWriteOperations=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct cpuset io blkio memory devices pids bpf-firewall bpf-devices CPUAccounting=no CPUWeight=[not 
set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity CPUQuotaPeriodUSec=infinity AllowedCPUs= AllowedMemoryNodes= IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes DefaultMemoryLow=0 DefaultMemoryMin=0 MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=15394 LimitSIGPENDINGSoft=15394 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 CPUAffinity= CPUAffinityFromNUMA=no NUMAPolicy=n/a NUMAMask= TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module 
cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectKernelLogs=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 TimeoutCleanUSec=infinity MemoryDenyWriteExecute=no RestrictRealtime=no RestrictSUIDSGID=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private ProtectHostname=no KillMode=process KillSignal=15 RestartKillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=containerd.service Names=containerd.service Requires=sysinit.target system.slice WantedBy=multi-user.target Conflicts=shutdown.target Before=multi-user.target shutdown.target After=system.slice systemd-journald.socket basic.target sysinit.target network.target local-fs.target Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/lib/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Sun 2021-06-20 12:53:58 PDT StateChangeTimestampMonotonic=170048675981 InactiveExitTimestamp=Sun 2021-06-20 12:53:58 PDT InactiveExitTimestampMonotonic=170048558158 ActiveEnterTimestamp=Sun 2021-06-20 12:53:58 PDT ActiveEnterTimestampMonotonic=170048675981 ActiveExitTimestamp=Sun 2021-06-20 12:53:58 PDT 
ActiveExitTimestampMonotonic=170048530261 InactiveEnterTimestamp=Sun 2021-06-20 12:53:58 PDT InactiveEnterTimestampMonotonic=170048555847 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Sun 2021-06-20 12:53:58 PDT ConditionTimestampMonotonic=170048556463 AssertTimestamp=Sun 2021-06-20 12:53:58 PDT AssertTimestampMonotonic=170048556464 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=a1eb75a12bbb49358fa196dbc01aecb4 CollectMode=inactive ```
cat /etc/containerd/config.toml
```toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
plugin_dir = ""
disabled_plugins = []
required_plugins = []
oom_score = 0
[grpc]
  address = "/run/containerd/containerd.sock"
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
[ttrpc]
  address = ""
  uid = 0
  gid = 0
[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""
[metrics]
  address = ""
  grpc_histogram = false
[cgroup]
  path = ""
[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"
[plugins]
  [plugins."io.containerd.gc.v1.scheduler"]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
  [plugins."io.containerd.grpc.v1.cri"]
    disable_tcp_service = true
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    stream_idle_timeout = "4h0m0s"
    enable_selinux = false
    selinux_category_range = 1024
    sandbox_image = "k8s.gcr.io/pause:3.2"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    disable_cgroup = false
    disable_apparmor = false
    restrict_oom_score_adj = false
    max_concurrent_downloads = 3
    disable_proc_mount = false
    unset_seccomp_profile = ""
    tolerate_missing_hugetlb_controller = true
    disable_hugetlb_controller = true
    ignore_image_defined_volumes = false
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      no_pivot = false
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
        base_runtime_spec = ""
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
        base_runtime_spec = ""
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
          base_runtime_spec = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
          runtime_type = "io.containerd.kata.v2"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
          base_runtime_spec = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata.options]
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = ""
    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"
  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    root_path = ""
    pool_name = ""
    base_image_size = ""
    async_remove = false
```
# Packages
Have `dpkg`
No `rpm`
---
dpkg -l|egrep "(cc-oci-runtime|cc-runtime|runv|kata-runtime|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"
```
```
Kata Monitor `kata-monitor`.
---
kata-monitor --version
```
/opt/kata/bin/kata-collect-data.sh: line 218: kata-monitor: command not found
```
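Since `kata-monitor` is absent on this host, the debug console can instead be reached directly over the sandbox's console socket, as described in the Developer Guide section quoted above. A minimal sketch for a QEMU sandbox, assuming `debug_console_enabled = true` is already set; the sandbox ID below is a placeholder copied from the logs above, and the `/var/run/vc/vm/...` path layout is the one the guide documents:

```shell
# Build the console socket path for a given sandbox.
# NOTE: the sandbox ID is a placeholder; substitute the ID of a running
# sandbox (e.g. from `ps -ef | grep qemu` or the containerd shim logs).
sandbox_id="2a0c6361c39342f5977f00d0eceebf28d37fc9db15e4f934ec65fef2403da83b"
console_sock="/var/run/vc/vm/${sandbox_id}/console.sock"
echo "${console_sock}"
# With the sandbox running, attach interactively with socat
# (escape=0x11 means Ctrl-Q detaches the console):
#   sudo socat "stdin,raw,echo=0,escape=0x11" "unix-connect:${console_sock}"
```

Note the `$` before `console_sock` and the quoting around both socat addresses, which is exactly the pair of typos PR 2077 fixes in the guide.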
kata.log