kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

Hugepages are not allocated when pod is deployed with k8s using kata-runtime #2109

Status: Closed (wParkhi closed this issue 3 years ago)

wParkhi commented 5 years ago

Description of problem

I am trying to allocate hugepages for a pod deployed in Kubernetes using kata-runtime.

The YAML file used to deploy the pod is shown below.

apiVersion: v1
kind: Pod
metadata:
  name: testpod-dpdk
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: dpdk-test
    image: parkhi/dpdk-event-pipeline
    imagePullPolicy: IfNotPresent
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 300000; done;" ]
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
      readOnly: False
    - mountPath: /var/proc
      name: memproc
      readOnly: False
    resources:
      requests:
        cpu: "60"
        memory: "1Gi"
        hugepages-1Gi: "1Gi"
        intel.com/intel_sriov_dpdk_A: "1"
      limits:
        cpu: "60"
        memory: "1Gi"
        hugepages-1Gi: "1Gi"
        intel.com/intel_sriov_dpdk_A: "1"
    securityContext:
      capabilities:
        add:
          ["IPC_LOCK"]
      runAsUser: 0
      privileged: false
  volumes:
  - name: hugepage
    hostPath:
      path: /dev/hugepages
      type: Directory
  - name: memproc
    hostPath:
      path: /proc
      type: Directory
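
The pod is created from this manifest in the usual way; the file name here is only illustrative:

[master node]
$ kubectl apply -f testpod-dpdk.yaml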

The current hugepage status on the host is:

[HOST device - worker node]
$ cat /proc/meminfo | grep Huge

AnonHugePages:    151552 kB
HugePages_Total:      16
HugePages_Free:       16
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
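
For reference, this reservation can be cross-checked against what the kubelet advertises for the node (an illustrative command, assuming the worker node is named vran as in the describe output further below):

[master node]
$ kubectl describe node vran | grep -i hugepages-1Gi

The hugepages-1Gi values under Capacity and Allocatable should reflect the 16 x 1Gi pages reserved on the host.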

The kata-containers configuration.toml contains:

[HOST device - worker node]
$ cat configuration.toml | grep huge
# Enable huge pages for VM RAM, default false
# being allocated using huge pages.
enable_hugepages = true
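
The grep above only matches lines containing "huge"; the related options that also affect how the VM memory is allocated can be checked in the same file (an illustrative command, with option names taken from the full configuration dump further below):

[HOST device - worker node]
$ grep -E '^(enable_hugepages|enable_mem_prealloc|default_memory)' configuration.toml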

After deploying the pod with this YAML, the pod describe output is:

[master node]
$ kubectl describe pods testpod-dpdk

Name:         testpod-dpdk
Namespace:    default
Priority:     0
Node:         vran/10.113.173.55
Start Time:   Mon, 07 Oct 2019 10:55:54 +0900
Labels:       <none>
Annotations:  io.kubernetes.cri.untrusted-workload: true
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "cbr0",
                    "ips": [
                        "10.244.13.2"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           10.244.13.2
Containers:
  dpdk-test:
    Container ID:  containerd://b2861aee23ad1e5077594bc3a907642e09afd2ac012fd581928f5a785131f5f7
    Image:         parkhi/dpdk-event-pipeline
    Image ID:      docker.io/parkhi/dpdk-event-pipeline@sha256:23c28171782f446f82a300029bbfcc71b049b5491f5020561d457254bdfac3d1
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      while true; do sleep 300000; done;
    State:          Running
      Started:      Mon, 07 Oct 2019 10:56:04 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                           60
      hugepages-1Gi:                 1Gi
      intel.com/intel_sriov_dpdk_A:  1
      memory:                        1Gi
    Requests:
      cpu:                           60
      hugepages-1Gi:                 1Gi
      intel.com/intel_sriov_dpdk_A:  1
      memory:                        1Gi
    Environment:                     <none>
    Mounts:
      /dev/hugepages from hugepage (rw)
      /var/proc from memproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vgb6d (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  hugepage:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/hugepages
    HostPathType:  Directory
  memproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  default-token-vgb6d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vgb6d
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  88s   default-scheduler  Successfully assigned default/testpod-dpdk to vran
  Normal  Pulled     1s    kubelet, vran      Container image "parkhi/dpdk-event-pipeline" already present on machine
  Normal  Created    1s    kubelet, vran      Created container dpdk-test
  Normal  Started    0s    kubelet, vran      Started container dpdk-test

From the describe output, there seems to be no problem with the deployment to Kubernetes.

However, I cannot find any hugepage allocation in /proc/meminfo when I exec into the deployed pod:

[master node]
$ kubectl exec -it testpod-dpdk -- /bin/bash

[pod]
root@testpod-dpdk:~/dpdk/pktgen-dpdk# cat /proc/meminfo | grep Huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB

As you can see from the YAML file above, /proc is mounted into the pod at /var/proc when the pod is deployed, so the guest's hugepage information can also be checked there:

[pod]
root@testpod-dpdk:/var/proc# cat meminfo | grep Huge 
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB

You can see that no hugepages are allocated in the deployed guest and pod, but on the host you can see that hugepages are allocated:

[HOST device - worker node]
$ cat /proc/meminfo | grep Huge
AnonHugePages:    321536 kB
HugePages_Total:      16
HugePages_Free:       13
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

Three 1Gi hugepages are consumed here: one for the hugepage requested by the pod, plus two for the 2048 MiB default memory (2 x 1Gi hugepages) allocated to QEMU.
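
One way to confirm this accounting on the host is to look at how the sandbox's QEMU process backs its memory. Assuming that enable_hugepages makes Kata pass a file-backed memory object under /dev/hugepages to QEMU, something like the following illustrative command should show it:

[HOST device - worker node]
$ ps -ef | grep [q]emu | grep -o 'mem-path=[^, ]*'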

What is missing for hugepages to be allocated through kata-runtime?

Expected result

[pod]
root@testpod-dpdk:~/dpdk/pktgen-dpdk# cat /proc/meminfo | grep Huge
HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       1048576 kB
Hugetlb:               0 kB

Actual result

[pod]
root@testpod-dpdk:~/dpdk/pktgen-dpdk# cat /proc/meminfo | grep Huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB

[kata.log]

kata-collect-data.sh details:

# Meta details Running `kata-collect-data.sh` version `1.9.0-alpha2 (commit 264d7563b3b8cfee2c22272030f7a81914208005)` at `2019-10-29.13:09:46.157783017+0900`. --- Runtime is `/bin/kata-runtime`. # `kata-env` Output of "`/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.23" [Runtime] Debug = true Trace = false DisableGuestSeccomp = true DisableNewNetNs = false SandboxCgroupOnly = false Path = "/usr/bin/kata-runtime" [Runtime.Version] Semver = "1.9.0-alpha2" Commit = "264d7563b3b8cfee2c22272030f7a81914208005" OCI = "1.0.1-dev" [Runtime.Config] Path = "/etc/kata-containers/configuration.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 4.1.0\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers" Path = "/usr/bin/qemu-vanilla-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" Msize9p = 8192 MemorySlots = 10 Debug = true UseVSock = false SharedFS = "virtio-9p" [Image] Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.9.0-alpha2_agent_cfec1b64f4.img" [Kernel] Path = "/usr/share/kata-containers/vmlinuz-4.19.73.51-54.1.container" Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket agent.log=debug agent.log=debug initcall_debug" [Initrd] Path = "" [Proxy] Type = "kataProxy" Version = "kata-proxy version 1.9.0-alpha2-85504b8" Path = "/usr/libexec/kata-containers/kata-proxy" Debug = true [Shim] Type = "kataShim" Version = "kata-shim version 1.9.0-alpha2-6bd5e6b" Path = "/usr/libexec/kata-containers/kata-shim" Debug = true [Agent] Type = "kata" Debug = true Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "3.10.0-1062.1.2.el7.x86_64" Architecture = "amd64" VMContainerCapable = true SupportVSocks = true [Host.Distro] Name = "CentOS Linux" Version = "7" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz" [Netmon] Version = "kata-netmon version 1.9.0-alpha2" Path = "/usr/libexec/kata-containers/kata-netmon" Debug = true Enable = false ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /usr/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/usr/bin/qemu-vanilla-system-x86_64" kernel = "/usr/share/kata-containers/vmlinuz.container" image = "/usr/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = " agent.log=debug initcall_debug" # Path to the firmware. 
# If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 80 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 80 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/usr/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Extra args for virtiofsd daemon # # Format example: # ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"] # # see `virtiofsd -h` for possible options. 
virtio_fs_extra_args = [] # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation # enable_hugepages = true # Enable file based guest memory support. The default is an empty string which # will disable this feature. In the case of virtio-fs, this is enabled # automatically and '/dev/shm' is used as the backing folder. # This option will be ignored if VM templating is enabled. #file_mem_backend = "" # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. 
Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. # The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/usr/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) enable_debug = true [shim.kata] path = "/usr/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. 
# (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" # Comma separated list of kernel modules and their parameters. # These modules will be loaded in the guest kernel using modprobe(8). # The following example can be used to load two kernel modules with parameters # - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"] # The first word is considered as the module name and the rest as its parameters. # Container will not be started when: # * A kernel module is specified and the modprobe command is not installed in the guest # or it fails loading the module. # * The module is not available in the guest or it doesn't met the guest kernel # requirements, like architecture and version. # kernel_modules=[] [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/usr/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged (Deprecated) # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # ***NOTE: This feature has been deprecated with plans to remove this # feature in the future. Please use other network models listed below. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). 
# (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. # This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # if enabled, the runtime will add all the kata processes inside one dedicated cgroup. # The container cgroups in the host are not created, just one single cgroup per sandbox. # The sandbox cgroup is not constrained by the runtime # The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox. # The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation. # See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType #sandbox_cgroup_only=true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` Output of "`cat "/usr/share/defaults/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/usr/bin/qemu-vanilla-system-x86_64" kernel = "/usr/share/kata-containers/vmlinuz.container" image = "/usr/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. 
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 80 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 80 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. # This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. #memory_slots = 10 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-9p (default) # - virtio-fs shared_fs = "virtio-9p" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/usr/bin/virtiofsd-x86_64" # Default size of DAX cache in MiB virtio_fs_cache_size = 1024 # Extra args for virtiofsd daemon # # Format example: # ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"] # # see `virtiofsd -h` for possible options. virtio_fs_extra_args = [] # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. 
# # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). # # - always # Metadata, data, and pathname lookup are cached in guest and never expire. virtio_fs_cache = "always" # Block storage driver to be used for the hypervisor in case the container # rootfs is backed by a block device. This is virtio-scsi, virtio-blk # or nvdimm. block_device_driver = "virtio-scsi" # Specifies cache-related options will be set to block devices or not. # Default false #block_device_cache_set = true # Specifies cache-related options for block devices. # Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. # Default false #block_device_cache_direct = true # Specifies cache-related options for block devices. # Denotes whether flush requests for the device are ignored. # Default false #block_device_cache_noflush = true # Enable iothreads (data-plane) to be used. This causes IO to be # handled in a separate IO thread. This is currently only implemented # for SCSI. # enable_iothreads = false # Enable pre allocation of VM RAM, default false # Enabling this will result in lower container density # as all of the memory will be allocated and locked # This is useful when you want to reserve all the memory # upfront or in the cases where you want memory latencies # to be very predictable # Default false enable_mem_prealloc = true # Enable huge pages for VM RAM, default false # Enabling this will result in the VM memory # being allocated using huge pages. # This is useful when you want to use vhost-user network # stacks within the container. This will automatically # result in memory pre allocation # enable_hugepages = true # Enable file based guest memory support. The default is an empty string which # will disable this feature. In the case of virtio-fs, this is enabled # automatically and '/dev/shm' is used as the backing folder. # This option will be ignored if VM templating is enabled. #file_mem_backend = "" # Enable swap of vm memory. Default false. # The behaviour is undefined if mem_prealloc is also set to true #enable_swap = true # This option changes the default hypervisor and kernel parameters # to enable debug output where available. This extra output is added # to the proxy logs, but only when proxy debug is also enabled. # # Default false enable_debug = true # Disable the customizations done in the runtime when it detects # that it is running on top a VMM. This will result in the runtime # behaving as it would when running on bare metal. # #disable_nesting_checks = true # This is the msize used for 9p shares. It is the number of bytes # used for 9p packet payload. #msize_9p = 8192 # If true and vsocks are supported, use vsocks to communicate directly # with the agent and no proxy is started, otherwise use unix # sockets and start a proxy to communicate with the agent. # Default false #use_vsock = true # VFIO devices are hotplugged on a bridge by default. # Enable hotplugging on root bus. This may be required for devices with # a large PCI bar, as this is a current limitation with hotplugging on # a bridge. This value is valid for "pc" machine type. # Default false #hotplug_vfio_on_root_bus = true # If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics. # Default false #disable_vhost_net = true # # Default entropy source. 
# The path to a host source of entropy (including a real hardware RNG) # /dev/urandom and /dev/random are two main options. # Be aware that /dev/random is a blocking source of entropy. If the host # runs out of entropy, the VMs boot time will increase leading to get startup # timeouts. # The source of entropy /dev/urandom is non-blocking and provides a # generally acceptable source of entropy. It should work well for pretty much # all practical purposes. #entropy_source= "/dev/urandom" # Path to OCI hook binaries in the *guest rootfs*. # This does not affect host-side hooks which must instead be added to # the OCI spec passed to the runtime. # # You can create a rootfs with hooks by customizing the osbuilder scripts: # https://github.com/kata-containers/osbuilder # # Hooks must be stored in a subdirectory of guest_hook_path according to their # hook type, i.e. "guest_hook_path/{prestart,postart,poststop}". # The agent will scan these directories for executable files and add them, in # lexicographical order, to the lifecycle of the guest container. # Hooks are executed in the runtime namespace of the guest. See the official documentation: # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks # Warnings will be logged if any error is encountered will scanning for hooks, # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" [factory] # VM templating support. Once enabled, new VMs are created from template # using vm cloning. They will share the same initial kernel, initramfs and # agent memory by mapping it readonly. It helps speeding up new container # creation and saves a lot of memory if there are many kata containers running # on the same host. # # When disabled, new VMs are created from scratch. # # Note: Requires "initrd=" to be set ("image=" is not supported). # # Default false #enable_template = true # Specifies the path of template. # # Default "/run/vc/vm/template" #template_path = "/run/vc/vm/template" # The number of caches of VMCache: # unspecified or == 0 --> VMCache is disabled # > 0 --> will be set to the specified number # # VMCache is a function that creates VMs as caches before using it. # It helps speed up new container creation. # The function consists of a server and some clients communicating # through Unix socket. The protocol is gRPC in protocols/cache/cache.proto. # The VMCache server will create some VMs and cache them by factory cache. # It will convert the VM to gRPC format and transport it when gets # requestion from clients. # Factory grpccache is the VMCache client. It will request gRPC format # VM and convert it back to a VM. If VMCache function is enabled, # kata-runtime will request VM from factory grpccache when it creates # a new sandbox. # # Default 0 #vm_cache_number = 0 # Specify the address of the Unix socket that is used by VMCache. # # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" [proxy.kata] path = "/usr/libexec/kata-containers/kata-proxy" # If enabled, proxy messages will be sent to the system log # (default: disabled) #enable_debug = true [shim.kata] path = "/usr/libexec/kata-containers/kata-shim" # If enabled, shim messages will be sent to the system log # (default: disabled) #enable_debug = true # If enabled, the shim will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # # Note: By default, the shim runs in a separate network namespace. 
Therefore, # to allow it to send trace details to the Jaeger agent running on the host, # it is necessary to set 'disable_new_netns=true' so that it runs in the host # network namespace. # # (default: disabled) #enable_tracing = true [agent.kata] # If enabled, make the agent display debug-level messages. # (default: disabled) #enable_debug = true # Enable agent tracing. # # If enabled, the default trace mode is "dynamic" and the # default trace type is "isolated". The trace mode and type are set # explicity with the `trace_type=` and `trace_mode=` options. # # Notes: # # - Tracing is ONLY enabled when `enable_tracing` is set: explicitly # setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing` # will NOT activate agent tracing. # # - See https://github.com/kata-containers/agent/blob/master/TRACING.md for # full details. # # (default: disabled) #enable_tracing = true # #trace_mode = "dynamic" #trace_type = "isolated" # Comma separated list of kernel modules and their parameters. # These modules will be loaded in the guest kernel using modprobe(8). # The following example can be used to load two kernel modules with parameters # - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"] # The first word is considered as the module name and the rest as its parameters. # Container will not be started when: # * A kernel module is specified and the modprobe command is not installed in the guest # or it fails loading the module. # * The module is not available in the guest or it doesn't met the guest kernel # requirements, like architecture and version. # kernel_modules=[] [netmon] # If enabled, the network monitoring process gets started when the # sandbox is created. This allows for the detection of some additional # network being added to the existing network namespace, after the # sandbox has been created. # (default: disabled) #enable_netmon = true # Specify the path to the netmon binary. path = "/usr/libexec/kata-containers/kata-netmon" # If enabled, netmon messages will be sent to the system log # (default: disabled) #enable_debug = true [runtime] # If enabled, the runtime will log additional debug messages to the # system log # (default: disabled) #enable_debug = true # # Internetworking model # Determines how the VM should be connected to the # the container network interface # Options: # # - bridged (Deprecated) # Uses a linux bridge to interconnect the container interface to # the VM. Works for most cases except macvlan and ipvlan. # ***NOTE: This feature has been deprecated with plans to remove this # feature in the future. Please use other network models listed below. # # - macvtap # Used when the Container network interface can be bridged using # macvtap. # # - none # Used when customize network. Only creates a tap device. No veth pair. # # - tcfilter # Uses tc filter rules to redirect traffic from the network interface # provided by plugin to a tap interface connected to the VM. # internetworking_model="tcfilter" # disable guest seccomp # Determines whether container seccomp profiles are passed to the virtual # machine and applied by the kata agent. If set to true, seccomp is not applied # within the guest # (default: true) disable_guest_seccomp=true # If enabled, the runtime will create opentracing.io traces and spans. # (See https://www.jaegertracing.io/docs/getting-started). # (default: disabled) #enable_tracing = true # If enabled, the runtime will not create a network namespace for shim and hypervisor processes. 
# This option may have some potential impacts to your host. It should only be used when you know what you're doing. # `disable_new_netns` conflicts with `enable_netmon` # `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge # (like OVS) directly. # If you are using docker, `disable_new_netns` only works with `docker run --net=none` # (default: false) #disable_new_netns = true # if enabled, the runtime will add all the kata processes inside one dedicated cgroup. # The container cgroups in the host are not created, just one single cgroup per sandbox. # The sandbox cgroup is not constrained by the runtime # The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox. # The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation. # See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType #sandbox_cgroup_only=true # Enabled experimental feature list, format: ["a", "b"]. # Experimental features are features not stable enough for production, # They may break compatibility, and are prepared for a big version bump. # Supported experimental features: # 1. "newstore": new persist storage driver which breaks backward compatibility, # expected to move out of experimental in 2.0.0. # (default: []) experimental=[] ``` --- # KSM throttler ## version Output of "`/usr/lib/systemd/system/kata-ksm-throttler.service --version`": ``` ./usr/bin/kata-collect-data.sh: line 178: /usr/lib/systemd/system/kata-ksm-throttler.service: 허가 거부 ``` Output of "`/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version`": ``` kata-ksm-throttler version 1.9.0-alpha2-7254a7e ``` ## systemd service # Image details ```yaml --- osbuilder: url: "https://github.com/kata-containers/osbuilder" version: "unknown" rootfs-creation-time: "2019-09-18T15:34:48.349388249+0000Z" description: "osbuilder rootfs" file-format-version: "0.0.2" architecture: "x86_64" base-distro: name: "Clear" version: "31040" packages: default: - "chrony" - "iptables-bin" - "kmod-bin" - "libudev0-shim" - "systemd" - "util-linux-bin" extra: agent: url: "https://github.com/kata-containers/agent" name: "kata-agent" version: "1.9.0-alpha2-cfec1b64f4d32aeac92d70f87afb72d2b194a45d" agent-is-init-daemon: "no" ``` --- # Initrd details No initrd --- # Logfiles ## Runtime logs No recent runtime problems found in system journal. ## Proxy logs No recent proxy problems found in system journal. ## Shim logs No recent shim problems found in system journal. ## Throttler logs No recent throttler problems found in system journal. --- # Container manager details Have `docker` ## Docker Output of "`docker version`": ``` Client: Docker Engine - Community Version: 19.03.2 API version: 1.40 Go version: go1.12.8 Git commit: 6a30dfc Built: Thu Aug 29 05:28:55 2019 OS/Arch: linux/amd64 Experimental: false Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Output of "`docker info`": ``` Client: Debug Mode: false Server: ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? 
errors pretty printing info ``` Output of "`systemctl show docker`": ``` Type=notify Restart=always NotifyAccess=main RestartUSec=2s TimeoutStartUSec=0 TimeoutStopUSec=0 WatchdogUSec=0 WatchdogTimestampMonotonic=0 StartLimitInterval=60000000 StartLimitBurst=3 StartLimitAction=none FailureAction=none PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=0 ControlPID=0 FileDescriptorStoreMax=0 StatusErrno=0 Result=success ExecMainStartTimestampMonotonic=0 ExecMainExitTimestampMonotonic=0 ExecMainPID=0 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice MemoryCurrent=18446744073709551615 TasksCurrent=18446744073709551615 Delegate=yes CPUAccounting=no CPUShares=18446744073709551615 StartupCPUShares=18446744073709551615 CPUQuotaPerSecUSec=infinity BlockIOAccounting=no BlockIOWeight=18446744073709551615 StartupBlockIOWeight=18446744073709551615 MemoryAccounting=no MemoryLimit=18446744073709551615 DevicePolicy=auto TasksAccounting=no TasksMax=18446744073709551615 UMask=0022 LimitCPU=18446744073709551615 LimitFSIZE=18446744073709551615 LimitDATA=18446744073709551615 LimitSTACK=18446744073709551615 LimitCORE=18446744073709551615 LimitRSS=18446744073709551615 LimitNOFILE=18446744073709551615 LimitAS=18446744073709551615 LimitNPROC=18446744073709551615 LimitMEMLOCK=65536 LimitLOCKS=18446744073709551615 LimitSIGPENDING=697748 LimitMSGQUEUE=819200 LimitNICE=0 LimitRTPRIO=0 LimitRTTIME=18446744073709551615 OOMScoreAdjust=0 Nice=0 IOScheduling=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SecureBits=0 CapabilityBoundingSet=18446744073709551615 AmbientCapabilities=0 MountFlags=0 PrivateTmp=no PrivateNetwork=no PrivateDevices=no ProtectHome=no ProtectSystem=no SameProcessGroup=no IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 RuntimeDirectoryMode=0755 KillMode=process KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=docker.service Names=docker.service Requires=system.slice docker.socket basic.target Wants=network-online.target BindsTo=containerd.service ConsistsOf=docker.socket Conflicts=shutdown.target Before=shutdown.target After=containerd.service firewalld.service docker.socket network-online.target basic.target system.slice systemd-journald.socket TriggeredBy=docker.socket Documentation=https://docs.docker.com Description=Docker Application Container Engine LoadState=loaded ActiveState=inactive SubState=dead FragmentPath=/usr/lib/systemd/system/docker.service UnitFileState=disabled UnitFilePreset=disabled InactiveExitTimestampMonotonic=0 ActiveEnterTimestampMonotonic=0 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no IgnoreOnSnapshot=no NeedDaemonReload=no JobTimeoutUSec=0 JobTimeoutAction=none ConditionResult=no AssertResult=no ConditionTimestampMonotonic=0 
AssertTimestampMonotonic=0 Transient=no ``` Have `kubectl` ## Kubernetes Output of "`kubectl version`": ``` Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? ``` Output of "`kubectl config view`": ``` apiVersion: v1 clusters: [] contexts: [] current-context: "" kind: Config preferences: {} users: [] ``` Output of "`systemctl show kubelet`": ``` Type=simple Restart=always NotifyAccess=none RestartUSec=10s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s WatchdogUSec=0 WatchdogTimestamp=수 2019-10-23 23:10:16 KST WatchdogTimestampMonotonic=638397340 StartLimitInterval=0 StartLimitBurst=5 StartLimitAction=none FailureAction=none PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=8925 ControlPID=0 FileDescriptorStoreMax=0 StatusErrno=0 Result=success ExecMainStartTimestamp=수 2019-10-23 23:10:16 KST ExecMainStartTimestampMonotonic=638397234 ExecMainExitTimestampMonotonic=0 ExecMainPID=8925 ExecMainCode=0 ExecMainStatus=0 ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_RUNTIME_ARGS ; ignore_errors=no ; start_time=[수 2019-10-23 23:10:16 KST] ; stop_time=[n/a] ; pid=8925 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/kubelet.service MemoryCurrent=81170432 TasksCurrent=71 Delegate=no CPUAccounting=no CPUShares=18446744073709551615 StartupCPUShares=18446744073709551615 CPUQuotaPerSecUSec=infinity BlockIOAccounting=no BlockIOWeight=18446744073709551615 StartupBlockIOWeight=18446744073709551615 MemoryAccounting=no MemoryLimit=18446744073709551615 DevicePolicy=auto TasksAccounting=no TasksMax=18446744073709551615 Environment=KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml KUBELET_RUNTIME_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock KUBELET_EXTRA_ARGS=--fail-swap-on=false --cgroup-driver=systemd EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes) EnvironmentFile=/etc/default/kubelet (ignore_errors=yes) UMask=0022 LimitCPU=18446744073709551615 LimitFSIZE=18446744073709551615 LimitDATA=18446744073709551615 LimitSTACK=18446744073709551615 LimitCORE=18446744073709551615 LimitRSS=18446744073709551615 LimitNOFILE=4096 LimitAS=18446744073709551615 LimitNPROC=697748 LimitMEMLOCK=65536 LimitLOCKS=18446744073709551615 LimitSIGPENDING=697748 LimitMSGQUEUE=819200 LimitNICE=0 LimitRTPRIO=0 LimitRTTIME=18446744073709551615 OOMScoreAdjust=0 Nice=0 IOScheduling=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SecureBits=0 CapabilityBoundingSet=18446744073709551615 AmbientCapabilities=0 MountFlags=0 PrivateTmp=no PrivateNetwork=no PrivateDevices=no ProtectHome=no ProtectSystem=no SameProcessGroup=no IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 RuntimeDirectoryMode=0755 KillMode=control-group 
KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=kubelet.service Names=kubelet.service Requires=system.slice basic.target WantedBy=multi-user.target Conflicts=shutdown.target Before=shutdown.target multi-user.target After=system.slice systemd-journald.socket basic.target Documentation=https://kubernetes.io/docs/ Description=kubelet: The Kubernetes Node Agent LoadState=loaded ActiveState=active SubState=running FragmentPath=/usr/lib/systemd/system/kubelet.service DropInPaths=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf UnitFileState=enabled UnitFilePreset=disabled InactiveExitTimestamp=수 2019-10-23 23:10:16 KST InactiveExitTimestampMonotonic=638397390 ActiveEnterTimestamp=수 2019-10-23 23:10:16 KST ActiveEnterTimestampMonotonic=638397390 ActiveExitTimestamp=수 2019-10-23 23:10:06 KST ActiveExitTimestampMonotonic=628129744 InactiveEnterTimestamp=수 2019-10-23 23:10:16 KST InactiveEnterTimestampMonotonic=638369304 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no IgnoreOnSnapshot=no NeedDaemonReload=no JobTimeoutUSec=0 JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=수 2019-10-23 23:10:16 KST ConditionTimestampMonotonic=638378493 AssertTimestamp=수 2019-10-23 23:10:16 KST AssertTimestampMonotonic=638378494 Transient=no
```

No `crio`

Have `containerd`

## containerd

Output of "`containerd --version`":

```
containerd containerd.io 1.2.6 894b81a4b802e4eb2a91d1ce216b8817763c29fb
```

Output of "`systemctl show containerd`":

```
Type=simple Restart=no NotifyAccess=none RestartUSec=100ms TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s WatchdogUSec=0 WatchdogTimestamp=수 2019-10-23 23:10:10 KST WatchdogTimestampMonotonic=632209541 StartLimitInterval=10000000 StartLimitBurst=5 StartLimitAction=none FailureAction=none PermissionsStartOnly=no RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=8849 ControlPID=0 FileDescriptorStoreMax=0 StatusErrno=0 Result=success ExecMainStartTimestamp=수 2019-10-23 23:10:10 KST ExecMainStartTimestampMonotonic=632209475 ExecMainExitTimestampMonotonic=0 ExecMainPID=8849 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[수 2019-10-23 23:10:10 KST] ; stop_time=[수 2019-10-23 23:10:10 KST] ; pid=8846 ; code=exited ; status=0 } ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[수 2019-10-23 23:10:10 KST] ; stop_time=[n/a] ; pid=8849 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/containerd.service MemoryCurrent=3808153600 TasksCurrent=318 Delegate=yes CPUAccounting=no CPUShares=18446744073709551615 StartupCPUShares=18446744073709551615 CPUQuotaPerSecUSec=infinity BlockIOAccounting=no BlockIOWeight=18446744073709551615 StartupBlockIOWeight=18446744073709551615 MemoryAccounting=no MemoryLimit=18446744073709551615 DevicePolicy=auto TasksAccounting=no TasksMax=18446744073709551615 UMask=0022 LimitCPU=18446744073709551615 LimitFSIZE=18446744073709551615 LimitDATA=18446744073709551615 LimitSTACK=18446744073709551615 LimitCORE=18446744073709551615 LimitRSS=18446744073709551615 LimitNOFILE=1048576 LimitAS=18446744073709551615 LimitNPROC=18446744073709551615 LimitMEMLOCK=65536 LimitLOCKS=18446744073709551615 LimitSIGPENDING=697748 LimitMSGQUEUE=819200 LimitNICE=0 LimitRTPRIO=0 LimitRTTIME=18446744073709551615 OOMScoreAdjust=0 Nice=0 IOScheduling=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SecureBits=0 CapabilityBoundingSet=18446744073709551615 AmbientCapabilities=0 MountFlags=0 PrivateTmp=no PrivateNetwork=no PrivateDevices=no ProtectHome=no ProtectSystem=no SameProcessGroup=no IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 RuntimeDirectoryMode=0755 KillMode=process KillSignal=15 SendSIGKILL=yes SendSIGHUP=no Id=containerd.service Names=containerd.service Requires=system.slice basic.target Conflicts=shutdown.target Before=shutdown.target After=system.slice systemd-journald.socket network.target basic.target Documentation=https://containerd.io Description=containerd container runtime LoadState=loaded ActiveState=active SubState=running FragmentPath=/usr/lib/systemd/system/containerd.service UnitFileState=disabled UnitFilePreset=disabled InactiveExitTimestamp=수 2019-10-23 23:10:10 KST InactiveExitTimestampMonotonic=632176285 ActiveEnterTimestamp=수 2019-10-23 23:10:10 KST ActiveEnterTimestampMonotonic=632209583 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no IgnoreOnSnapshot=no NeedDaemonReload=no JobTimeoutUSec=0 JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=수 2019-10-23 23:10:10 KST ConditionTimestampMonotonic=632161615 AssertTimestamp=수 2019-10-23 23:10:10 KST AssertTimestampMonotonic=632161616 Transient=no
```

Output of "`cat /etc/containerd/config.toml`":

```
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins.cgroups]
    no_prometheus = false
  [plugins.cri]
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    enable_selinux = false
    sandbox_image = "k8s.gcr.io/pause:3.1"
    stats_collect_period = 10
    systemd_cgroup = true
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = false
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = "io.containerd.kata.v2"
        runtime_engine = ""
        runtime_root = ""
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."insecure.registry.io"]
          endpoint = ["https://10.113.174.79:5000"]
    [plugins.cri.x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins.diff-service]
    default = ["walking"]
  [plugins.linux]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins.opt]
    path = "/opt/containerd"
  [plugins.restart]
    interval = "10s"
  [plugins.scheduler]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
```

---

# Packages

No `dpkg`

Have `rpm`

Output of "`rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":

```
qemu-user-2.0.0-1.el7.6.x86_64
kata-ksm-throttler-1.9.0~alpha2-42.1.x86_64
qemu-system-unicore32-2.0.0-1.el7.6.x86_64
qemu-system-s390x-2.0.0-1.el7.6.x86_64
qemu-system-arm-2.0.0-1.el7.6.x86_64
qemu-2.0.0-1.el7.6.x86_64
qemu-system-xtensa-2.0.0-1.el7.6.x86_64
qemu-system-microblaze-2.0.0-1.el7.6.x86_64
qemu-system-x86-2.0.0-1.el7.6.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.el7_7.1.x86_64
qemu-lite-bin-2.11.0+git.87517afd72-41.1.x86_64
qemu-lite-2.11.0+git.87517afd72-41.1.x86_64
kata-containers-image-1.9.0~alpha2-37.1.x86_64
kata-linux-container-4.19.73.51-54.1.x86_64
qemu-system-sh4-2.0.0-1.el7.6.x86_64
qemu-system-lm32-2.0.0-1.el7.6.x86_64
qemu-lite-data-2.11.0+git.87517afd72-41.1.x86_64
qemu-kvm-common-1.5.3-167.el7_7.1.x86_64
kata-shim-1.9.0~alpha2-36.1.x86_64
qemu-system-mips-2.0.0-1.el7.6.x86_64
qemu-system-m68k-2.0.0-1.el7.6.x86_64
qemu-kvm-1.5.3-167.el7_7.1.x86_64
qemu-vanilla-data-4.1.0+git.9e06029aea-41.1.x86_64
qemu-common-2.0.0-1.el7.6.x86_64
qemu-guest-agent-2.12.0-3.el7.x86_64
qemu-img-1.5.3-167.el7_7.1.x86_64
qemu-system-moxie-2.0.0-1.el7.6.x86_64
qemu-vanilla-4.1.0+git.9e06029aea-41.1.x86_64
kata-runtime-1.9.0~alpha2-59.1.x86_64
qemu-system-alpha-2.0.0-1.el7.6.x86_64
kata-proxy-bin-1.9.0~alpha2-38.1.x86_64
kata-shim-bin-1.9.0~alpha2-36.1.x86_64
qemu-system-cris-2.0.0-1.el7.6.x86_64
ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch
kata-proxy-1.9.0~alpha2-38.1.x86_64
qemu-system-or32-2.0.0-1.el7.6.x86_64
qemu-vanilla-bin-4.1.0+git.9e06029aea-41.1.x86_64
```

---

wParkhi commented 4 years ago

Please review this issue.

jodh-intel commented 4 years ago

Thanks for raising @wParkhi.

Please:

  1. Enable full debug.
  2. Re-run the commands.
  3. Run `sudo kata-collect-data.sh > /tmp/kata.log` (a minimal sketch of steps 1 and 3 follows this list).
  4. Review the contents of kata.log.
  5. Paste the contents of kata.log directly into this issue (as requested in the issue template).
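A minimal sketch of steps 1 and 3, assuming a packaged install with the configuration at `/usr/share/defaults/kata-containers/configuration.toml` (adjust the path to your setup):

```
# Enable full debug by uncommenting every enable_debug option in the Kata config
# (path is an assumption; yours may be /etc/kata-containers/configuration.toml).
sudo sed -i -e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' \
    /usr/share/defaults/kata-containers/configuration.toml

# Reproduce the problem, then collect the data.
sudo kata-collect-data.sh > /tmp/kata.log
```
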
amshinde commented 4 years ago

@wParkhi This is a missing feature in Kata right now. See this: https://github.com/kata-containers/runtime/issues/1548

wParkhi commented 4 years ago

@jodh-intel
I attached my kata.log file. Thanks :)

ChaJiWon commented 4 years ago

How is this issue going?

wParkhi commented 4 years ago

@jodh-intel How is it going?

wansuyoo commented 4 years ago

@jodh-intel

I've tried to allocate hugepages using the same pod spec described above in this issue, but hugepages are still not allocated inside the Kata container. More details of what I tried are below.
[Host machine] The worker node's memory info regarding hugepages:

$ cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:      32
HugePages_Free:       32
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB


[Host machine] The worker node's Kata configuration.toml.
I added hugepage-related kernel parameters to kernel_params in the toml file (these are passed to the guest kernel at boot) and enabled the hugepages option (a quick way to verify these parameters reach the guest is sketched after the config below).

$ cat configuration.toml | grep huge
kernel_params = "transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=1"
# Enable huge pages for VM RAM, default false
# being allocated using huge pages.
enable_hugepages = true
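
A quick way to confirm that kernel_params actually reach the guest kernel is to read the guest's own command line from inside the container (inside a Kata container, /proc belongs to the guest); a sketch, with the container id placeholder as above:

```
# Shows the guest kernel command line; the hugepage parameters should appear here.
crictl exec -it ${CONTAINER_ID} cat /proc/cmdline | tr ' ' '\n' | grep -i huge
```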


[Actual Result] The pod deployed successfully.

$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
bind-devs-dpdk   1/1     Running   0          2m22s


But even though the guest kernel is configured for 1G hugepages, the total hugepage count inside the container is still zero.

$ crictl exec -it ${CONTAINER_ID} /bin/bash
root@bind-devs-dpdk:~/dpdk/pktgen-dpdk# cat /proc/meminfo | grep Huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:               0 kB


The guest kernel dmesg shows an error about the failed hugepage allocation (a quick check for this is sketched after the dmesg output below).
In the first line of the dmesg below, the 1G hugepage allocation seems to fail because the KVM guest has only 2G of memory by default, so there is not enough memory at boot time to allocate a 1G hugepage.
I have no idea about the second line; I guess the error occurred when hot-plugging the hugepage memory requested in the pod spec.

root@bind-devs-dpdk:~/dpdk/pktgen-dpdk# dmesg | grep huge
[    0.080376] HugeTLB: allocating 1 of page size 1.00 GiB failed.  Only allocated 0 hugepages.
[    0.338225] systemd[80]: dev-hugepages.mount: Failed to connect stdout to the journal socket, ignoring: No such file or directory
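
If the failure really is the guest not having enough RAM at boot to carve out a 1G hugepage, one way to check and work around it is to give the VM more default memory than the hugepage pool it must reserve. A sketch, assuming the stock Kata 1.x configuration.toml location and its `default_memory` option (the 4096 value is illustrative, not a project recommendation):

```
# Inspect how much RAM the guest is given by default (value is in MiB, default 2048).
grep '^default_memory' /usr/share/defaults/kata-containers/configuration.toml

# Raise it so a 1 GiB boot-time hugepage pool still leaves room for the guest OS and agent.
sudo sed -i 's/^default_memory *=.*/default_memory = 4096/' \
    /usr/share/defaults/kata-containers/configuration.toml
```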


It looks like the host machine allocated 3G of hugepages: 2G for the VM and 1G for the container.

$ cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:      32
HugePages_Free:       29
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB


I tried this on kata-runtime version 1.10.
If there are any updates on this issue, please let me know.


[Expected Result]

$ crictl exec -it ${CONTAINER_ID} /bin/bash
root@bind-devs-dpdk:~/dpdk/pktgen-dpdk# cat /proc/meminfo | grep Huge
HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:               0 kB

amshinde commented 4 years ago

@wansuyoo Taking a look at this.

wansuyoo commented 4 years ago

@amshinde Is there any further information?

amshinde commented 4 years ago

@wansuyoo I thought I had commented on this yesterday, but I don't see the update here. Anyway, I tried your setup and have been looking at identifying the gaps that need to be fixed with respect to supporting hugetlb cgroups as well. Handling hugetlb cgroups is going to require some additional implementation in Kata.

But we can start with just making sure hugepages work with Kata. So, I tried a setup similar to yours but just using 2MB huge pages. I was able to see hugepages assigned to Kata. Here is the pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: testpod-kata-huge
spec:
  runtimeClassName: kata
  containers:
  - name: dpdk-test
    image: debian
    imagePullPolicy: IfNotPresent
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 300000; done;" ]
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
      readOnly: False
    resources:
      requests:
        memory: "512Mi"
        hugepages-2Mi: "512Mi"
      limits:
        memory: "512Mi"
        hugepages-2Mi: "512Mi"
    securityContext:
      capabilities:
        add:
          ["IPC_LOCK"]
      runAsUser: 0
      privileged: false
  volumes:
  - name: hugepage
    hostPath:
      path: /dev/hugepages
      type: Directory

Note that you should not need to pass the host-side /proc to Kata; it would not work anyway, since a Kata container runs inside a VM with its own proc filesystem. Also note that instead of the untrusted-workload annotation, I have used the [RuntimeClass](https://github.com/kata-containers/documentation/blob/3ed59ee50eea5bf4163d760a7a01221e2b4d343d/how-to/containerd-kata.md#kubernetes-runtimeclass) feature, which is meant to replace the annotations. You can use either.
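
For reference, a minimal sketch of the RuntimeClass route (the handler name `kata` is an assumption; it must match a runtime entry such as `[plugins.cri.containerd.runtimes.kata]` in the containerd config, which the config shown earlier does not yet define):

```
# Register a RuntimeClass once; pods then select it with runtimeClassName: kata.
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF
```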

Changes made to Kata configuration:

cat /usr/share/defaults/kata-containers/configuration.toml | grep huge
kernel_params = "default_hugepagesz=2M hugepagesz=2M hugepages=256"
# Enable huge pages for VM RAM, default false
# being allocated using huge pages.
enable_hugepages = true

With this:

kubectl exec -it  testpod-kata-huge bash
root@testpod-kata-huge:/# mount | grep hugepages
hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,noexec,relatime,pagesize=2M)
root@testpod-kata-huge:/# cat /proc/meminfo | grep Huge
HugePages_Total:     256
HugePages_Free:      256
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:          524288 kB

As you can see, 256 pages of 2MB size have been allocated inside the Kata guest.
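
Since `enable_hugepages = true` backs the VM RAM with host hugepages (visible further down in the qemu command line's `mem-path=/dev/hugepages`), the host also needs a large enough pool of 2M pages pre-allocated. A quick host-side check and allocation sketch (the page count is illustrative):

```
# On the worker node: check the current 2M hugepage pool.
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# Grow the 2M pool to 1024 pages (2 GiB); persist this via sysctl.d or the
# kernel command line for production use.
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```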

Can you try this again and let me know if you are still seeing issues? In that case, please paste the output of the kata-collect-data.sh script present in the runtime repository.

Resource Accounting:

One thing to note: in Kata today, all of the qemu and guest memory is allocated with huge pages as well, as seen here:

root       6682  0.5  0.3 2186204 56404 ?       Sl   09:09   0:03 /usr/bin/qemu-vanilla-system-x86_64 -name sandbox-5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1 -uuid 9e226ad9-4a81-40bf-bd90-1420d4414bec -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host,pmu=off -qmp unix:/run/vc/vm/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1/qmp.sock,server,nowait -m 1024M,slots=10,maxmem=17041M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/kata-containers/kata-containers-image_clearlinux_1.10.0-alpha1_agent_c87f497312.img,size=134217728 -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng,rng=rng0,romfile= -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1/kata.sock,server,nowait -device virtio-9p-pci,disable-modern=true,fsdev=extra-9p-kataShared,mount_tag=kataShared,romfile= -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1,security_model=none -netdev tap,id=network-0,vhost=on,vhostfds=3,fds=4 -device driver=virtio-net-pci,netdev=network-0,mac=d2:11:e7:1f:40:81,disable-modern=true,mq=on,vectors=4,romfile= -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -object memory-backend-file,id=dimm1,size=1024M,mem-path=/dev/hugepages -numa node,memdev=dimm1 -kernel /usr/share/kata-containers/vmlinuz-4.19.75.55-44.container -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 debug systemd.show_status=true systemd.log_level=debug panic=1 nr_cpus=4 agent.use_vsock=false systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket agent.log=debug default_hugepagesz=2M hugepagesz=2M hugepages=256 -pidfile /run/vc/vm/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1/pid -D /run/vc/vm/5b65581ded297d10b40076aa128eb5df542980627e26e47facd8bb092f8f6cf1/qemu.log -smp 1,cores=1,threads=1,sockets=4,maxcpus=4

Note the option `-object memory-backend-file,id=dimm1,size=2048M` passed to qemu. So today, if you want your app to use some amount of hugepage memory, say 1G, part of it will be used by qemu and the guest. As a temporary workaround, you can add the required memory overhead to the hugepage requests in the pod yaml, e.g. assign (1024 + 256) MB of hugepage memory. I have been looking at ways to solve this resource accounting issue for huge pages.
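
Until that accounting is solved, the actual per-sandbox hugepage overhead can be measured empirically on the host; a rough sketch (the yaml filename is hypothetical, matching the pod above):

```
# Host hugepage usage before the pod exists.
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# Start the pod and give the sandbox time to boot.
kubectl apply -f testpod-kata-huge.yaml
sleep 30

# The drop in HugePages_Free covers qemu + the guest kernel + the container's own pool.
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```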

One of the things I have considered is using the PodOverhead feature that we added to Kubernetes to account for the memory and CPU overhead of Kata guest resources. However, if we add resource overhead for huge pages in k8s, we are not going to account correctly for Kata's regular memory. This will become even more apparent when passing a hugepages-backed emptyDir to the container (k8s creates a separate volume backed by hugepages in this case). I am looking at ways of fixing this by not backing the entire qemu memory with hugepages, in order to solve the resource accounting issue. This is described in the issue here: https://github.com/kata-containers/runtime/issues/1548. I shall post my findings on resource accounting as I make more progress.
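
For context, PodOverhead attaches a fixed per-pod resource overhead to a RuntimeClass. Extending the RuntimeClass sketch above, it looks roughly like this (alpha API at the time, feature gate required; the values are purely illustrative, not measured Kata overheads):

```
# RuntimeClass with PodOverhead; the scheduler adds podFixed to the pod's requests.
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:
    memory: "160Mi"
    cpu: "250m"
EOF
```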