$ mount | grep devtmpfs
udev on /dev type devtmpfs (rw,nosuid,relatime,size=4074044k,nr_inodes=1018511,mode=755)
udev on /run/kata-containers/shared/sandboxes/26e37bd224ce6ded8559fd245382cc09810d516730ceb0d9239efbd85e0b9849/26e37bd224ce6ded8559fd245382cc09810d516730ceb0d9239efbd85e0b9849/rootfs/dev type devtmpfs (rw,nosuid,relatime,size=4074044k,nr_inodes=1018511,mode=755)
These mount points increase with the number of docker cp operations performed, as the sketch below shows. To reproduce this, I used the configuration detailed in the sections that follow.
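The growth can be observed with a minimal sequence such as the one below (a sketch: the busybox image, the container name "leak-test", and the copied file are arbitrary choices, and Kata is assumed to be configured as Docker's default runtime):

$ docker run -d --name leak-test busybox sleep 3600
$ mount | grep devtmpfs | wc -l   # note the baseline count
$ docker cp /etc/hostname leak-test:/tmp/a
$ docker cp /etc/hostname leak-test:/tmp/b
$ mount | grep devtmpfs | wc -l   # the count has grown by the number of copies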
Meta details
Running kata-collect-data.sh version 1.3.0-rc1 (commit 0d99a4f49f60d936fa15e5a0842cfd4d03a24e8f) at 2018-09-27.19:59:08.823534274+0000.
Runtime is /usr/local/bin/kata-runtime.
kata-env
Output of "/usr/local/bin/kata-runtime kata-env":
[Meta]
Version = "1.0.18"
[Runtime]
Debug = true
Path = "/usr/local/bin/kata-runtime"
[Runtime.Version]
Semver = "1.3.0-rc1"
Commit = "0d99a4f49f60d936fa15e5a0842cfd4d03a24e8f"
OCI = "1.0.1"
[Runtime.Config]
Path = "/usr/share/defaults/kata-containers/configuration.toml"
[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
Path = "/usr/bin/qemu-lite-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
Msize9p = 8192
MemorySlots = 10
Debug = true
UseVSock = false
[Image]
Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.3.0-rc1_agent_1ee972176ae.img"
[Kernel]
Path = "/usr/share/kata-containers/vmlinuz-4.14.67-12"
Parameters = "agent.log=debug"
[Initrd]
Path = ""
[Proxy]
Type = "kataProxy"
Version = "kata-proxy version 1.3.0-6ddb006ad3f709cab018af9dc0bf9e756c3ce2cd"
Path = "/usr/libexec/kata-containers/kata-proxy"
Debug = true
[Shim]
Type = "kataShim"
Version = "kata-shim version 1.3.0-rc1-9b2891cfb153967fa4a65e44b2928255c889f643"
Path = "/usr/libexec/kata-containers/kata-shim"
Debug = true
[Agent]
Type = "kata"
[Host]
Kernel = "4.15.0-1023-azure"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = false
[Host.Distro]
Name = "Ubuntu"
Version = "16.04"
[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz"
[Netmon]
Version = "kata-netmon version 1.3.0-rc1"
Path = "/usr/libexec/kata-containers/kata-netmon"
Debug = true
Enable = false
Runtime config files
Runtime default config files
Runtime config file contents
Config file /etc/kata-containers/configuration.toml not found
Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
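# For example (a sketch, assuming the runtime logs to the system journal under
# the "kata-runtime" syslog identifier):
#   journalctl -t kata-runtime | grep default-kernel-parameters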
kernel_params = " agent.log=debug"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
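# For illustration, on a hypothetical host with 4 physical cores:
#   default_vcpus = 0  --> 1 vCPU
#   default_vcpus = -1 --> 4 vCPUs
#   default_vcpus = 2  --> 2 vCPUs
#   default_vcpus = 8  --> 4 vCPUs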
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores exceeds it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
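# For illustration, on the same hypothetical 4-core host:
#   default_maxvcpus = 0 --> 4, default_maxvcpus = 2 --> 2, default_maxvcpus = 8 --> 4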
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
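# For illustration: default_bridges = 0 --> 1, default_bridges = 3 --> 3,
# default_bridges = 7 --> 5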
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
#default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor;
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or
# virtio-blk.
block_device_driver = "virtio-scsi"
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre-allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre-allocation
#enable_hugepages = true
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
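# Whether the host supports vsocks can be checked by looking for the
# vhost-vsock device (a sketch; requires the vhost_vsock kernel module):
#   ls /dev/vhost-vsock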
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# If the host doesn't support vhost_net, set to true; vhost fds will then not be created for NICs.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG).
# /dev/urandom and /dev/random are the two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, VM boot times will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
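# A host hardware RNG could also be used if one is exposed (hypothetical path):
#entropy_source= "/dev/hwrng"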
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory when many Kata containers are running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Default false
#enable_template = true
[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true
[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true
[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks added to the existing network namespace after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to
# the container network interface
# Options:
#
# - bridged
# Uses a linux bridge to interconnect the container interface to
# the VM. Works for most cases except macvlan and ipvlan.
#
# - macvtap
# Used when the container network interface can be bridged using
# macvtap.
internetworking_model="macvtap"
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
KSM throttler
version
Output of "--version":
/usr/local/bin/kata-collect-data.sh: line 168: --version: command not found
Have kubectl
Kubernetes
Output of "kubectl version":
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Packages
Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":
ii kata-containers-image 1.3.0~rc1-34 amd64 Kata containers image
ii qemu-lite 2.11.0+git.f886228056-50 amd64 linux kernel optimised for container-like workloads.
Have rpm
Output of "rpm -qa|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":