kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

[Question] Pod can't run when memory resource limit set using kata. #2588

Closed: lining2020x closed this issue 4 years ago

lining2020x commented 4 years ago

Description of problem

I am a newbie to Kata Containers. I have just set up my Kata environment by following the documentation. I created a pod with a memory resource limit and it failed. When I remove the resource limit, the pod runs normally.

Here is my pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1024Mi"

Here is the error message:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/nginx-untrusted to debian0
  Normal   Pulled     16s (x3 over 31s)  kubelet, debian0   Container image "nginx:alpine" already present on machine
  Normal   Created    16s (x3 over 31s)  kubelet, debian0   Created container nginx
  Warning  Failed     15s (x3 over 31s)  kubelet, debian0   Error: failed to create containerd task: Unable to hotplug 1024 MiB memory, the SB has 2048 MiB and the maximum amount is 1971 MiB: unknown
  Warning  BackOff    2s (x4 over 29s)   kubelet, debian0   Back-off restarting failed container

Expected result

Pod runs OK.

Actual result

Pod goes into CrashLoopBackOff.

grahamwhaley commented 4 years ago

/cc @jcvenegas @devimc. I guess the relevant bit is:

Warning  Failed     15s (x3 over 31s)  kubelet, debian0   Error: failed to create containerd task:
 Unable to hotplug 1024 MiB memory, the SB has 2048 MiB and the maximum amount is
 1971 MiB: unknown
lining2020x commented 4 years ago

Show kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `1.11.0-alpha1 (commit 705713b4f9bc4d1e754871d5ef1ab5e99ea71aff)` at `2020-04-06.23:48:51.511542415+0800`.

Runtime is `/usr/local/bin/kata-runtime`.

# `kata-env`

Key fields from "`/usr/local/bin/kata-runtime kata-env`":

```toml
[Runtime.Version.Version]
Semver = "1.11.0-alpha1"

[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u4)"
Path = "/usr/bin/qemu-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
SharedFS = "virtio-9p"
MemorySlots = 10
UseVSock = false

[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-5.4.15.66-45.container"

[Host]
Kernel = "4.19.0-8-amd64"
Architecture = "amd64"
VMContainerCapable = true

[Host.Distro]
Name = "Debian GNU/Linux"
Version = "10"
```

# Runtime config files

Config file `/etc/kata-containers/configuration.toml` not found, so the shipped default `/usr/share/defaults/kata-containers/configuration.toml` (auto-generated from `cli/config/configuration-qemu.toml.in`) is in effect. Its memory-related settings:

```toml
[hypervisor.qemu]
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048

# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
#memory_slots = 10
```

# KSM throttler

`kata-ksm-throttler` version 1.11.0-alpha0-e0cd739.

# Image details

Clear Linux 32380 rootfs built by osbuilder, with `kata-agent` version 1.11.0-alpha0-d26a505efd336e966636f9aa30eaacd29cf8a58f (not run as the init daemon). No initrd.

# Logfiles

No recent runtime, proxy, shim, or throttler problems found in the system journal.

# Container manager details

No `docker`, no `crio`. `kubectl` client and server are v1.17.4; `containerd` is containerd.io 1.2.13. In `/etc/containerd/config.toml`, both `[plugins.cri.containerd.runtimes.kata]` and `[plugins.cri.containerd.untrusted_workload_runtime]` use `runtime_type = "io.containerd.kata.v2"`.

# Packages

Debian packages (dpkg): `kata-runtime` 1.11.0~alpha0-51, `kata-proxy` 1.11.0~alpha0-43, `kata-shim` 1.11.0~alpha0-41, `kata-containers-image` 1.11.0~alpha0-42, `kata-linux-container` 5.4.15.66-45, and QEMU 1:3.1+dfsg-8+deb10u4. No `rpm`.

devimc commented 4 years ago

@lining2020x looks like you're running kata on an embedded system?

the SB has 2048 MiB and the maximum amount is 1971 MiB
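For context: the sandbox VM boots with `default_memory` (2048 MiB), and the runtime then hot-plugs the container's memory limit (1024 MiB) on top, but total VM memory is capped by a ceiling derived from the host's memory, about 1971 MiB here. Below is a minimal sketch of that guard, assuming those semantics; the function name and the ceiling value are illustrative, not the actual kata-runtime code:

```go
package main

import "fmt"

// hotplugMemory mirrors the guard behind the error above: the sandbox's
// current memory plus the hot-plugged amount must not exceed a ceiling
// derived from the host's memory. Illustrative sketch only.
func hotplugMemory(requestMiB, currentMiB, maxMiB uint64) error {
	if currentMiB+requestMiB > maxMiB {
		return fmt.Errorf("unable to hotplug %d MiB memory, the SB has %d MiB and the maximum amount is %d MiB",
			requestMiB, currentMiB, maxMiB)
	}
	return nil
}

func main() {
	// Values from this issue: the VM boots with default_memory = 2048 MiB,
	// the pod's 1024 MiB limit is hot-plugged on top, and the host caps
	// total VM memory at ~1971 MiB (roughly a 2 GiB host).
	fmt.Println(hotplugMemory(1024, 2048, 1971)) // 3072 > 1971: fails
	fmt.Println(hotplugMemory(1024, 512, 1971))  // 1536 <= 1971: ok (nil)
}
```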

you will need more memory on the host, or change the default memory to 512 MiB:

https://github.com/kata-containers/runtime/blob/af24829c2ae78c3b811c4cff6736ddaba500d37c/cli/config/configuration-qemu.toml.in#L76-L78
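Concretely, the suggested change would look like this in `/usr/share/defaults/kata-containers/configuration.toml` (or in an override at `/etc/kata-containers/configuration.toml`, which the runtime consults first):

```toml
[hypervisor.qemu]
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 512
```

With 512 MiB of boot memory, hot-plugging the pod's 1024 MiB limit totals 1536 MiB, which stays under the ~1971 MiB ceiling.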

lining2020x commented 4 years ago

Sorry for the long delay. It works after changing default_memory to a smaller value.