kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

Pods stuck in ContainerCreating and Running #1672

Closed · awprice closed 5 years ago

awprice commented 5 years ago

Description of problem

When scheduling a large number of pods at once, e.g. 40+ at a time, most of the pods transition from Pending -> ContainerCreating -> Running -> Completed as expected, but a significant number of them get stuck in either ContainerCreating or Running.

The process inside the pod is a simple `sleep 5` and should finish quite quickly, but we see pods stay in Running for multiple minutes or indefinitely.

We see `containerd-shim-kata-v2` consuming large amounts of CPU when this occurs; see the output of `top` below.

Expected result

Pods should only be in the Running state for about 5 seconds and then transition to Completed.

Actual result

Pods get stuck in either the Running state or ContainerCreating state, and either stay in that state indefinitely or eventually transition to Completed.

We see the following side effects of this:

The output of `kubectl get pods` shows pods running for much longer than 5 seconds. We used the pod script below to create two sets of 40 pods:

$ kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
sleep-2dktr   0/1     Completed           0          3m47s
sleep-2lxwj   0/1     Completed           0          3m51s
sleep-2vfr4   0/1     ContainerCreating   0          54s
sleep-42tlt   0/1     Completed           0          3m43s
sleep-4grwr   0/1     Completed           0          3m50s
sleep-4h8gn   0/1     Completed           0          3m45s
sleep-4mfjn   0/1     Completed           0          3m47s
sleep-4vg4z   1/1     Running             0          51s
sleep-5smxx   1/1     Running             0          53s
sleep-6kkd9   1/1     Running             0          48s
sleep-6tqdk   0/1     Completed           0          3m44s
sleep-6vdl9   0/1     Completed           0          3m48s
sleep-6z7mv   0/1     Completed           0          3m48s
sleep-77qff   0/1     Completed           0          3m47s
sleep-7fb4v   0/1     Completed           0          3m52s
sleep-8dqrh   0/1     ContainerCreating   0          50s
sleep-8sccz   0/1     ContainerCreating   0          47s
sleep-8xl54   0/1     Completed           0          3m52s
sleep-9dc6n   0/1     Completed           0          3m44s
sleep-b5s5q   0/1     Completed           0          3m51s
sleep-b5xk8   1/1     Running             0          49s
sleep-bvrqk   0/1     Completed           0          3m52s
sleep-ckv9k   1/1     Running             0          47s
sleep-cqd4m   1/1     Running             0          56s
sleep-cvz5t   0/1     Completed           0          3m45s
sleep-dmz9l   1/1     Running             0          56s
sleep-drblk   0/1     ContainerCreating   0          54s
sleep-f6d7z   0/1     ContainerCreating   0          50s
sleep-f8rk5   1/1     Running             0          51s
sleep-ftjl5   0/1     Completed           0          3m51s
sleep-g2b74   0/1     Completed           0          3m44s
sleep-g2tsq   1/1     Running             0          53s
sleep-gcwd9   0/1     Completed           0          3m46s
sleep-gg829   1/1     Running             0          55s
sleep-ggl5t   0/1     ContainerCreating   0          47s
sleep-h67zm   1/1     Running             0          48s
sleep-hnk9z   0/1     ContainerCreating   0          48s
sleep-hxccd   0/1     Completed           0          3m49s
sleep-j8fvj   0/1     Completed           0          3m50s
sleep-jqbcm   1/1     Running             0          49s
sleep-kgj7g   0/1     Completed           0          3m49s
sleep-ks574   0/1     ContainerCreating   0          47s
sleep-lftkp   0/1     Completed           0          3m45s
sleep-lnhrp   1/1     Running             0          57s
sleep-lzvtm   0/1     ContainerCreating   0          52s
sleep-m6z69   1/1     Running             0          53s
sleep-m8dm7   0/1     Completed           0          3m47s
sleep-m96pd   1/1     Running             0          57s
sleep-n67vc   0/1     ContainerCreating   0          53s
sleep-n86xm   0/1     Completed           0          3m46s
sleep-nfbrx   0/1     Completed           0          3m49s
sleep-p2lcs   0/1     Completed           0          3m50s
sleep-pbq7t   0/1     ContainerCreating   0          49s
sleep-pdq7x   0/1     Completed           0          3m52s
sleep-pr76f   0/1     ContainerCreating   0          50s
sleep-qn5bw   0/1     Completed           0          3m49s
sleep-qnkcm   1/1     Running             0          56s
sleep-qwpk7   0/1     Completed           0          3m48s
sleep-qxrh5   0/1     Completed           0          3m49s
sleep-rrhff   1/1     Running             0          51s
sleep-rwcw5   0/1     Completed           0          3m45s
sleep-s49sw   0/1     Completed           0          3m53s
sleep-s6fnk   0/1     Completed           0          3m44s
sleep-sjfc5   0/1     ContainerCreating   0          49s
sleep-sp2v6   0/1     ContainerCreating   0          50s
sleep-t2dnm   0/1     Completed           0          3m51s
sleep-tqxlb   0/1     ContainerCreating   0          52s
sleep-txckv   0/1     Completed           0          3m50s
sleep-v5dw7   1/1     Running             0          55s
sleep-v9jp6   1/1     Running             0          54s
sleep-vqzqn   0/1     ContainerCreating   0          52s
sleep-wxfx7   0/1     Completed           0          3m48s
sleep-x2bhd   1/1     Running             0          56s
sleep-x2xlm   0/1     Completed           0          3m46s
sleep-xjjgf   0/1     ContainerCreating   0          54s
sleep-xpttr   1/1     Running             0          55s
sleep-zcnsp   0/1     Completed           0          3m46s
sleep-zpqhb   0/1     Completed           0          3m46s
sleep-zw8pg   1/1     Running             0          51s
sleep-zw99x   1/1     Running             0          55s

Output of top:

top - 05:20:02 up 25 min,  1 user,  load average: 38.83, 25.91, 11.65
Tasks: 1339 total,   2 running, 1337 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.1 us, 46.8 sy,  0.0 ni, 46.2 id,  0.0 wa,  0.3 hi,  0.5 si,  0.0 st
MiB Mem : 515928.8 total, 490083.3 free,  22615.3 used,   3230.1 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used. 490167.3 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                            
36172 root      20   0  940780  28564  17264 S 103.9   0.0   5:20.55 containerd-shim                                                                                                    
31110 root      20   0  940780  30008  16804 S 103.6   0.0   5:03.21 containerd-shim                                                                                                    
31213 root      20   0  940780  28360  17052 S 103.6   0.0   5:07.49 containerd-shim                                                                                                    
31310 root      20   0  940780  28456  17324 S 103.6   0.0   5:09.93 containerd-shim                                                                                                    
31658 root      20   0  940780  29704  16492 S 103.6   0.0   5:26.65 containerd-shim                                                                                                    
31807 root      20   0  940780  27744  16624 S 103.6   0.0   4:59.33 containerd-shim                                                                                                    
34011 root      20   0  940780  28656  17392 S 103.6   0.0   5:03.47 containerd-shim                                                                                                    
34373 root      20   0  940780  27924  16788 S 103.6   0.0   5:44.46 containerd-shim   
<truncated>
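The COMMAND column in `top` only shows the truncated name `containerd-shim`, so it is not obvious which pod each busy shim belongs to. A minimal way to map a hot shim PID back to its sandbox is to inspect its full command line; the PID below is just the first one from the `top` output, and the exact flag names depend on the containerd/shim version.

```bash
# Map a busy shim PID back to its sandbox (PID 36172 taken from the top output above).
pid=36172
tr '\0' '\n' < /proc/$pid/cmdline   # full argv; the -id flag carries the sandbox ID
ls -l /proc/$pid/cwd                # the shim's working directory points at the sandbox bundle path
```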

If I use kubectl exec on one of the "running" pods, it reports that the container has actually stopped:

$ kubectl get pods 
<truncated>
sleep-zw99x   1/1     Running     0          6m31s
$ kubectl exec -it sleep-zw99x sh
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "7bea40437d4b5ffe0f9f335b86704d799e983083c04f013a7af7fd3c220bf1a4": cannot enter container 0ec0acda746d094f5f9a210c745c768ec034399557a1db2f3d1e0ceea1e6858f, with err rpc error: code = FailedPrecondition desc = Cannot exec in stopped container 0ec0acda746d094f5f9a210c745c768ec034399557a1db2f3d1e0ceea1e6858f: unknown
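A rough way to cross-check this from the node itself, assuming `crictl` is installed and pointed at the containerd socket, is to ask the CRI directly what state the sandbox and container are in. The pod name is the example from above; `sleep` is the container name from the pod spec.

```bash
# Compare what kubelet reports with what containerd/CRI actually has.
crictl pods --name sleep-zw99x   # sandbox (pod) state as CRI sees it
crictl ps -a --name sleep        # containers named "sleep"; the stuck pod's container is expected to show Exited
```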

Pod Spec

We are using the following bash script to generate pods that sleep:

#!/bin/bash

template=$(cat <<EOF
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  generateName: sleep-
  labels:
    app: sleep
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  nodeSelector:
    customer: kata
  tolerations:
    - operator: Exists
  restartPolicy: Never
  containers:
  - name: sleep
    image: alpine:3.7
    command: ["sleep", "5"]
    resources:
      limits:
        cpu: "1000m"
        memory: "1024Mi"
---
EOF
)

rm -f /tmp/sleep-pod.yaml
for ((n=0;n<$1;n++))
do
  echo "$template" >> /tmp/sleep-pod.yaml
done
kubectl create -f /tmp/sleep-pod.yaml

Invoked with `./generate-pods.sh <number-of-pods>`, where `<number-of-pods>` is the number of pods you want to generate.
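For reference, a typical reproduction run with the script above (saved as `generate-pods.sh`) looks like this:

```bash
./generate-pods.sh 40                   # create 40 sleep pods
kubectl get pods -l app=sleep --watch   # watch them; each should reach Completed shortly after Running
```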

Additional Details

Containerd logs

Here are logs from containerd for one of the Pods:

May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: time="2019-05-14T05:34:00.638746279Z" level=info msg="StopPodSandbox for "7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb""
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.710 [INFO][34031] plugin.go 442: Extracted identifiers ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" Node="ip-10-149-77-186.us-west-2.compute.internal" Orchestrator="k8s" WorkloadEndpoint="ip--10--149--77--186.us--west--2.compute.internal-k8s-sleep--6vdl9-eth0"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--10--149--77--186.us--west--2.compute.internal-k8s-sleep--6vdl9-eth0", GenerateName:"sleep-", Namespace:"default", SelfLink:"", UID:"1a6d2437-7607-11e9-9548-063fa432485e", ResourceVersion:"2946938", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63693407649, loc:(*time.Location)(0x22df9a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"sleep", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-10-149-77-186.us-west-2.compute.internal", ContainerID:"", Pod:"sleep-6vdl9", Endpoint:"eth0", IPNetworks:[]string{"10.32.1.149/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ea66b2e0af", MAC:"", Ports:[]v3.EndpointPort(nil)}}
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] k8s.go 477: Releasing IP address(es) ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] utils.go 168: Calico CNI releasing IP address ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] utils.go 184: Using a dummy podCidr to release the IP ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" podCidr="0.0.0.0/0"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] utils.go 303: Calico CNI fetching podCidr from Kubernetes ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] utils.go 309: Fetched podCidr ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" podCidr="0.0.0.0/0"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.722 [INFO][34031] utils.go 311: Calico CNI passing podCidr to host-local IPAM: 0.0.0.0/0 ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.726 [INFO][34031] k8s.go 481: Cleaning up netns ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: 2019-05-14 05:34:00.726 [INFO][34031] k8s.go 493: Teardown processing complete. ContainerID="7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: time="2019-05-14T05:34:00.727426483Z" level=info msg="TearDown network for sandbox "7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" successfully"
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal kata[35217]: time="2019-05-14T05:34:00.728533596Z" level=debug msg="sending request" ID=7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb name=grpc.SignalProcessRequest req="container_id:\"7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb\" exec_id:\"7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb\" signal:9 " source=virtcontainers subsystem=kata_agent
May 14 05:34:00 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: time="2019-05-14T05:34:00.728533596Z" level=debug msg="sending request" ID=7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb name=grpc.SignalProcessRequest req="container_id:\"7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb\" exec_id:\"7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb\" signal:9 " source=virtcontainers subsystem=kata_agent
May 14 05:35:49 ip-10-149-77-186.us-west-2.compute.internal containerd[3295]: time="2019-05-14T05:35:49.486896760Z" level=error msg="StopPodSandbox for "7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" failed" error="failed to stop sandbox container "7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb" in '\x01' state: failed to kill sandbox container: all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing rpc error: code = DeadlineExceeded desc = timed out connecting to unix socket ////run/vc/vm/7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb/kata.sock": unavailable"
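The final error above is the shim timing out while dialing the sandbox's agent socket. A rough spot-check on the node is to see whether that socket path still exists and whether anything is listening on it; the sandbox ID below is the one from these log lines, and whether a listener is present depends on whether the sandbox's proxy/VM processes are still alive.

```bash
sb=7ddc9eed0ddbd77930d29b22e6836132a20c4a4c09868468e9b236fd708f73eb
ls -l /run/vc/vm/$sb/                 # should contain kata.sock for this sandbox
ss -xlp 2>/dev/null | grep "$sb" || echo "nothing is listening on the sandbox socket"
```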

kata-collect-data.sh details

# Meta details Running `kata-collect-data.sh` version `1.7.0-rc1 (commit bce1167c33e3a43cd194e5d06cc125cb053c27b5)` at `2019-05-14.05:05:27.976570695+0000`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = true Trace = false DisableGuestSeccomp = true DisableNewNetNs = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] Semver = "1.7.0-rc1" Commit = "bce1167c33e3a43cd194e5d06cc125cb053c27b5" OCI = "1.0.1-dev" [Runtime.Config] Path = "/etc/kata-containers/configuration.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/qemu-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = true UseVSock = false SharedFS = "virtio-9p" [Image] Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.7.0-rc1_agent_f983b3665f.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.28-39" Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount systemd.mask=systemd-random-seed.service agent.log=debug" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.7.0-rc1-c5c4bc32f3aafd7141f93fb10a2349734a1288a1" Path = "/opt/kata/libexec/kata-containers/kata-proxy" Debug = true [Shim] Type = "kataShim" Version = "kata-shim version 1.7.0-rc1-d2c94a0b680d5f2f8bf2120fa8050e600aa71e31" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = true [Agent] Type = "kata" Debug = true Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.19.23-coreos-r1" Architecture = "amd64" VMContainerCapable = true SupportVSocks = false [Host.Distro] Name = "Container Linux by CoreOS" Version = "2023.4.0" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz" [Netmon] Version = "kata-netmon version 1.7.0-rc1" Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = true Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /opt/kata/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. 
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd" # Default size of DAX cache in MiB virtio_fs_cache_size = 8192 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. 
# Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. 
# # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. 
# `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. 
Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd" # Default size of DAX cache in MiB virtio_fs_cache_size = 8192 # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. 
# enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false #enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation #enable_hugepages = true # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false #enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. 
Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/opt/kata/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/opt/kata/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. 
path = "/opt/kata/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Config file `/usr/share/defaults/kata-containers/configuration.toml` not found --- # KSM throttler ## version Output of "` --version`": ``` ./kata-collect-data.sh: line 176: --version: command not found ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-05-10T15:55:04.036142857+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "29350" packages: default: - "chrony" - "iptables-bin" - "libudev0-shim" - "systemd" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.7.0-rc1-f983b3665ff954864de23c0a81e15378ef300855" agent-is-init-daemon: "no" dax-nvdimm-header: "true" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs No recent runtime problems found in system journal. ## Proxy logs No recent proxy problems found in system journal. ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal. 
--- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Version: 18.06.1-ce API version: 1.38 Go version: go1.10.8 Git commit: e68fc7a Built: Tue Aug 21 17:16:31 2018 OS/Arch: linux/amd64 Experimental: false Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Output of "`docker info`": ``` Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Output of "`systemctl show docker`": ``` Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 MemoryCurrent=[not set] CPUUsageNSec=[not set] TasksCurrent=[not set] IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=73727 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=2063289 LimitNPROCSoft=2063289 LimitMEMLOCK=16777216 LimitMEMLOCKSoft=16777216 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=2063289 LimitSIGPENDINGSoft=2063289 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=inherit StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no 
PrivateUsers=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=control-group KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=docker.service Names=docker.service WantedBy=kitt-init.service ConsistsOf=docker.socket Before=kitt-init.service After=docker.socket TriggeredBy=docker.socket Description=docker.service LoadState=masked ActiveState=inactive SubState=dead FragmentPath=/dev/null UnitFileState=masked StateChangeTimestampMonotonic=0 InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=no CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 AssertTimestampMonotonic=0 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none CollectMode=inactive
```

No `kubectl`
No `crio`
Have `containerd`

## containerd

Output of "`containerd --version`":

```
containerd github.com/containerd/containerd v1.2.6 894b81a4b802e4eb2a91d1ce216b8817763c29fb
```

Output of "`systemctl show containerd`":

```
Type=simple Restart=always NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestamp=Tue 2019-05-14 04:55:40 UTC WatchdogTimestampMonotonic=90202290 PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=3295 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Tue 2019-05-14 04:55:40 UTC ExecMainStartTimestampMonotonic=90202246 ExecMainExitTimestampMonotonic=0 ExecMainPID=3295 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[Tue 2019-05-14 04:55:38 UTC] ; stop_time=[Tue 2019-05-14 04:55:38 UTC] ; pid=3183 ; code=exited ; status=0 } ExecStartPre={ path=/opt/bin/containerd-init.sh ; argv[]=/opt/bin/containerd-init.sh ; ignore_errors=no ; start_time=[Tue 2019-05-14 04:55:38 UTC] ; stop_time=[Tue 2019-05-14 04:55:40 UTC] ; pid=3185 ; code=exited ; status=0 } ExecStart={ path=/opt/containerd/bin/containerd ; argv[]=/opt/containerd/bin/containerd --log-level=debug --config=/etc/containerd/config.toml ; ignore_errors=no ; start_time=[Tue 2019-05-14 04:55:40 UTC] ; stop_time=[n/a] ; pid=3295 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=893468672 CPUUsageNSec=[not set] TasksCurrent=300 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=yes DelegateControllers=cpu cpuacct io blkio memory devices pids CPUAccounting=no CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no
IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=73727 IPAccounting=no UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=infinity LimitNPROCSoft=infinity LimitMEMLOCK=16777216 LimitMEMLOCKSoft=16777216 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=2063289 LimitSIGPENDINGSoft=2063289 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=containerd.service Names=containerd.service Requires=system.slice sysinit.target WantedBy=multi-user.target Conflicts=shutdown.target Before=shutdown.target multi-user.target After=sysinit.target containerd-devicemapper.service system.slice basic.target systemd-journald.socket kata-init.service Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/etc/systemd/system/containerd.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Tue 2019-05-14 04:55:40 UTC StateChangeTimestampMonotonic=90202291 InactiveExitTimestamp=Tue 2019-05-14 04:55:38 UTC InactiveExitTimestampMonotonic=88594288 ActiveEnterTimestamp=Tue 2019-05-14 04:55:40 UTC ActiveEnterTimestampMonotonic=90202291 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no
RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Tue 2019-05-14 04:55:38 UTC ConditionTimestampMonotonic=88593058 AssertTimestamp=Tue 2019-05-14 04:55:38 UTC AssertTimestampMonotonic=88593058 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none SuccessAction=none InvocationID=90bff5083cac46b2842b4d3eba0d81c7 CollectMode=inactive
```

Output of "`cat /etc/containerd/config.toml`":

```
[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
[plugins]
  [plugins.cri.containerd]
    snapshotter = "devicemapper-snapshotter"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.kata.v2"
  [plugins.cri]
    max_container_log_line_size = 262144
  [plugins.linux]
    shim = "/opt/containerd/bin/containerd-shim"
    runtime = "runc"
  [plugins.cri.registry]
    [plugins.cri.registry.mirrors]
      [plugins.cri.registry.mirrors."docker.io"]
        endpoint = []
[proxy_plugins]
  [proxy_plugins.devicemapper-snapshotter]
    type = "snapshot"
    address = "/var/run/containerd-devicemapper.sock"
```

---

# Packages

No `dpkg`
No `rpm`

---
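
For context on how pods end up on Kata with the `config.toml` above: the `untrusted_workload_runtime` entry means containerd's CRI plugin hands pods carrying the untrusted-workload annotation to the Kata shim v2. A minimal sketch of such a pod (as far as I can tell, `io.kubernetes.cri.untrusted-workload` is the annotation containerd 1.2's CRI plugin matches; the pod name, image and command are placeholders):

```
# Sketch: schedule a pod onto the Kata runtime via containerd's
# untrusted-workload annotation. Name, image and command are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kata-untrusted-test
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sleep", "5"]
EOF
```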

awprice commented 5 years ago

cc @egernst @amshinde @mcastelino @dadux

devimc commented 5 years ago

cc @lifupan @bergwolf

lifupan commented 5 years ago

I can reproduce this issue.

lifupan commented 5 years ago

Hi @awprice, after a long time digging, I found the root cause, which is as follows: in k8s, once all of the containers in a pod have terminated, the pod status is set to "Succeeded" or "Failed" according to the containers' exit status. In either case, kubelet considers the pod terminated and tries to clean up the pod's cgroup resources, which means it kills all of the processes in the pod's cgroups. As a result, the qemu process is killed by accident, kata shimv2 can no longer communicate with qemu to do the sandbox cleanup, and the shimv2 process is left behind.

In summary, there are two issues here: first, kubelet shouldn't kill the qemu process; second, if the qemu process is killed by accident, kata shimv2 should detect this and still do the required cleanup successfully, instead of returning an error and leaving a process behind.
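
A rough way to confirm this on an affected node (a sketch; the process names are the standard Kata shim v2 and QEMU ones, so adjust the patterns if your hypervisor binary differs):

```
# Sketch: after the pods report Completed, look for shim processes that have
# outlived their hypervisor. Orphaned containerd-shim-kata-v2 processes with no
# matching qemu process are consistent with kubelet having killed qemu.
pgrep -a containerd-shim-kata-v2
pgrep -a qemu
# Shims spinning on the CPU show up here too.
ps -eo pid,pcpu,args | grep '[c]ontainerd-shim-kata-v2'
```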

awprice commented 5 years ago

> Hi @awprice, after a long time digging, I found the root cause, which is as follows: in k8s, once all of the containers in a pod have terminated, the pod status is set to "Succeeded" or "Failed" according to the containers' exit status. In either case, kubelet considers the pod terminated and tries to clean up the pod's cgroup resources, which means it kills all of the processes in the pod's cgroups. As a result, the qemu process is killed by accident, kata shimv2 can no longer communicate with qemu to do the sandbox cleanup, and the shimv2 process is left behind.
>
> In summary, there are two issues here: first, kubelet shouldn't kill the qemu process; second, if the qemu process is killed by accident, kata shimv2 should detect this and still do the required cleanup successfully, instead of returning an error and leaving a process behind.

That's awesome news! Thanks for digging into this issue for us. Do you need assistance working on the PRs for it? Do we need to make sub-issues?

lifupan commented 5 years ago

> That's awesome news! Thanks for digging into this issue for us. Do you need assistance working on the PRs for it? Do we need to make sub-issues?

Hi @awprice, thanks. I have made progress on the patch and will send a PR once it passes the tests.

lifupan commented 5 years ago

Hi @awprice, can you try this PR https://github.com/kata-containers/runtime/pull/1723 and check whether it fixes your issue?

awprice commented 5 years ago

> Hi @awprice, can you try this PR #1723 and check whether it fixes your issue?

Unfortunately it doesn't seem to fix our issue :( We still have pods stuck in Running and lots of kata shimv2 processes consuming 100% of CPU.

lifupan commented 5 years ago

@awprice Can you try creating a smaller number of pods? How many CPU cores does your machine have?

awprice commented 5 years ago

@lifupan If I create 10 pods at a time it works fine, but with larger numbers (20+) we run into issues. We are using an i3.metal instance with 72 cores.

lifupan commented 5 years ago

@awprice Can you paste the containerd log?
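
(Since containerd runs as a systemd unit logging to the journal in this setup, something like the following should capture the log to attach; the time window is arbitrary:)

```
# Sketch: dump the recent containerd log to a file for attaching to the issue.
journalctl -u containerd.service --since "1 hour ago" --no-pager > containerd.log
```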

lifupan commented 5 years ago

@awprice BTW, please make sure you are using the latest containerd-shim-kata-v2, and it's best to run:

```
$ make clean
$ make
```

in your kata source directory.
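
A fuller sketch of that rebuild-and-verify cycle (the checkout path, the `install` target, and the idea of checking what is on `PATH` are assumptions about a typical kata-containers/runtime 1.x build; adjust for your layout):

```
# Sketch: rebuild the shim from source and confirm the freshly built binary is
# the one containerd will actually find. The GOPATH layout is an assumption.
cd "$GOPATH/src/github.com/kata-containers/runtime"
make clean
make
sudo make install
# Verify which containerd-shim-kata-v2 is on PATH and when it was built.
command -v containerd-shim-kata-v2
ls -l "$(command -v containerd-shim-kata-v2)"
```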

grahamwhaley commented 5 years ago

@lifupan @awprice - please check: this issue was meant to be closed by PR #1723, but I have a feeling the GitHub auto-close may have been pre-emptive? Also, I'd like to check whether this might be similar/related to #1375. /cc @zhiminghufighting

awprice commented 5 years ago

@lifupan Apologies for the confusion, I've just now retested the PR on master and it does indeed fix the issue! 😄

In my original testing I was using the Kubernetes Kata daemonset (https://github.com/kata-containers/packaging/tree/master/kata-deploy#kubernetes-quick-start) to deploy Kata to my node, and it looks like it was overriding the version I built with your PR.

Happy to keep this issue closed. Thanks for the awesome work!

lifupan commented 5 years ago

> @lifupan Apologies for the confusion, I've just now retested the PR on master and it does indeed fix the issue! 😄

@awprice No problem, and it's great to hear that it fixed your issue.

amshinde commented 5 years ago

@chavafg @GabyCT Can we have an integration test that checks for this? Create a large number of pods (around 100) and make sure that they are all deleted properly. If it consumes a lot of time, it may make sense to run this test at the end of every release rather than for every PR.
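
A rough shape for such a test (a sketch only; the pod count, the manifest name, and the node-side checks are placeholders, and the manifest is assumed to use `metadata.generateName` so repeated creates yield distinct pods):

```
#!/bin/bash
# Sketch of the proposed scaling test: create ~100 Kata pods, wait for them all
# to finish, delete them, then check nothing Kata-related was left behind.
N=${N:-100}
MANIFEST=${MANIFEST:-kata-sleep-pod.yaml}   # assumed to use metadata.generateName

for _ in $(seq 1 "$N"); do
  kubectl create -f "$MANIFEST" >/dev/null
done

# Wait until no pod in the namespace is still Pending/ContainerCreating/Running.
while kubectl get pods --no-headers 2>/dev/null | grep -qE 'Pending|ContainerCreating|Running'; do
  sleep 5
done

kubectl delete pods --all --wait=true

# On the Kata node itself: fail if any shim or qemu process survived cleanup.
if pgrep -f containerd-shim-kata-v2 >/dev/null || pgrep -f qemu >/dev/null; then
  echo "leftover Kata processes found" >&2
  exit 1
fi
```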

grahamwhaley commented 5 years ago

I'll note - launching a lot of Kata containers in parallel can result in some failures due to (I believe) internal k8s/docker-style timeouts, at least when launching as a k8s Deployment, for instance. So you might want a nice little loop that deploys them a few at a time and waits for them to come up. I'm staring at exactly such things right now while trying out k8s Kata scaling tests. OOI, @awprice - any hints on how you are deploying a bunch of Katas? Do you just fire them off all at once, or do you have some sort of 'slow burn' deployment? (/me looking for k8s hints ;-) )
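
Something like this is the kind of loop meant above (a sketch; the batch size, pause, and manifest name are made up, and the manifest is assumed to use `generateName`):

```
# Sketch of a 'slow burn' deployment: create pods in small batches with a pause
# between batches rather than firing them all off at once.
BATCH=5
TOTAL=40
created=0
while [ "$created" -lt "$TOTAL" ]; do
  for _ in $(seq 1 "$BATCH"); do
    kubectl create -f kata-sleep-pod.yaml >/dev/null
  done
  created=$((created + BATCH))
  # Crude pacing; a real script would poll pod phases instead of sleeping.
  sleep 10
done
```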

chavafg commented 5 years ago

@grahamwhaley will you submit a PR with the scaling test you are working on? I think that would be a very good base for later adding a job to run it.

awprice commented 5 years ago

@grahamwhaley We are currently using Kata for running untrusted batch workloads, i.e. jobs. We aren't using the Kubernetes Deployment/Job/CronJob objects, however; we create Pod objects directly for each workload.

We currently average around 3 pods per second scheduled to our cluster, so having Kata nodes that can handle a high churn of pods is desirable.

The script I included earlier in this issue was a good way of replicating this load on a single Kata node, and it's what allowed us to find this bug.