kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

Running centos6-based images leads to errors. #1916

Closed: clarklee92 closed this issue 3 years ago

clarklee92 commented 5 years ago

Description of problem

Hello. Recently we ran into some problems using Kata. To my surprise, they are related to the VM image. The issue is easy to reproduce: just use the official centos images from Docker Hub.

  1. using the default Clear Linux VM image: pods based on centos6 cannot execute most commands and exit with error code 139. Pods based on centos7 start normally, but yum install hangs forever at the installation step.

  2. using a custom CentOS 7 VM image: pods based on centos6 cannot execute most commands and exit with error code 139. Pods based on centos7 work fine.

Expected result

Commands exit with their expected results.

Actual result

Most commands exit with code 139 (128 + 11, i.e. SIGSEGV, so the process is segfaulting). Shells such as sh/bash fail with 139, but sleep or ls work fine.
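
A minimal reproduction sketch of the above, assuming Docker is configured with kata-runtime as an additional runtime (the runtime name is illustrative and depends on the install):

  # Official images from Docker Hub; a shell in centos:6 segfaults, centos:7 does not.
  docker run --rm --runtime=kata-runtime centos:6 bash -c 'echo ok'
  echo $?   # prints 139 (128 + SIGSEGV)
  docker run --rm --runtime=kata-runtime centos:7 bash -c 'echo ok'
  echo $?   # prints 0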

clarklee92 commented 5 years ago

Meta details

Running kata-collect-data.sh version 1.6.2 (commit 2cbbadb93b2a41103450992ff5e8fc2775ca8edb) at 2019-07-29.10:25:54.898648949+0800.


Runtime is /usr/bin/kata-runtime.

kata-env

Output of "/usr/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.20"

[Runtime]
  Debug = true
  Trace = false
  DisableGuestSeccomp = true
  DisableNewNetNs = false
  Path = "/opt/kata/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.6.2"
    Commit = "2cbbadb93b2a41103450992ff5e8fc2775ca8edb"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/opt/kata/bin/qemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = false
  UseVSock = false

[Image]
  Path = "/opt/kata/share/kata-containers/kata-containers-centos7.img"

[Kernel]
  Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.28-33"
  Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket systemd.mask=systemd-journald.service systemd.mask=systemd-journald.socket systemd.mask=systemd-journal-flush.service systemd.mask=systemd-udevd.service systemd.mask=systemd-udevd.socket systemd.mask=systemd-udev-trigger.service systemd.mask=systemd-timesyncd.service systemd.mask=systemd-update-utmp.service systemd.mask=systemd-tmpfiles-setup.service systemd.mask=systemd-tmpfiles-cleanup.service systemd.mask=systemd-tmpfiles-cleanup.timer systemd.mask=tmp.mount"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.6.2-12b81180dff2ccea3e2835c25a7c3c21c347c65b"
  Path = "/opt/kata/libexec/kata-containers/kata-proxy"
  Debug = true

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.6.2-665783af3122439e58311d672761730f12ca9162"
  Path = "/opt/kata/libexec/kata-containers/kata-shim"
  Debug = true

[Agent]
  Type = "kata"

[Host]
  Kernel = "3.10.0-514.el7.x86_64"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = false
  [Host.Distro]
    Name = "CentOS Linux"
    Version = "7"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz"

[Netmon]
  Version = "kata-netmon version 1.6.2"
  Path = "/opt/kata/libexec/kata-containers/kata-netmon"
  Debug = true
  Enable = false

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found

Output of "cat "/opt/kata/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty.
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 16
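# A worked example of the mapping above, assuming a 32-core host
# (illustrative): default_vcpus = 0 gives 1 vCPU, -1 gives 32,
# 16 gives 16, and 64 is capped at 32.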

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 131072
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10

# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
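# A minimal sketch, assuming a 16384 MiB block device is used as the rootfs
# via nvdimm (values illustrative):
# block_device_driver = "nvdimm"
# memory_offset = 16384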

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies whether cache-related options will be set on block devices.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but they will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through a Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert a VM to gRPC format and transport it when it gets
# a request from a client.
# Factory grpccache is the VMCache client.  It will request a gRPC-format
# VM and convert it back to a VM.  If the VMCache function is enabled,
# kata-runtime will request a VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used for custom networking. Only creates a tap device; no veth pair is created.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impact on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
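# A minimal sketch of the docker invocation this implies when
# disable_new_netns is enabled (illustrative runtime name):
# docker run --runtime=kata-runtime --net=none centos:7 true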

Config file /usr/share/defaults/kata-containers/configuration.toml not found


KSM throttler

version

Output of "--version":

kata-collect-data.sh: line 175: --version: command not found

systemd service

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2019-07-26T10:18:47.504408743+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Centos"
  version: "7"
  packages:
    default:
      - "chrony"
      - "iptables"
      - "systemd"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.7.0-rc1-f983b3665ff954864de23c0a81e15378ef300855"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2019-07-29T10:13:57.831679833+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=e924b267ef4e8758fa2cc9ea73420c77eb8a18532617bf3980d6ceb14d45bc7d endpoint="&{{{2d5b8933-9cd0-4281-b8e7-426743b0b32f br1_kata {tap1_kata 5a:6b:d8:12:ae:b2 []} [0xc000010a40 0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8] [0xc000010ac0 0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38]} {eth1 0e:b6:33:a7:7e:ec []} 4} {{{4 1500 0 eth1 5a:6b:d8:12:ae:b2 up|broadcast|multicast 69699 2 0 <nil>  0xc0005363c0 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.242/25 eth1 fe80::586b:d8ff:fe12:aeb2/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.242 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=75535 source=virtcontainers subsystem=network
time="2019-07-29T10:13:58.546013324+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=e924b267ef4e8758fa2cc9ea73420c77eb8a18532617bf3980d6ceb14d45bc7d endpoint="&{{{2d5b8933-9cd0-4281-b8e7-426743b0b32f br1_kata {tap1_kata 5a:6b:d8:12:ae:b2 []} [0xc000010a40 0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8] [0xc000010ac0 0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38]} {eth1 0e:b6:33:a7:7e:ec []} 4} {{{4 1500 0 eth1 5a:6b:d8:12:ae:b2 up|broadcast|multicast 69699 2 0 <nil>  0xc0005383c0 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.242/25 eth1 fe80::586b:d8ff:fe12:aeb2/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.242 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=75617 source=virtcontainers subsystem=network
time="2019-07-29T10:13:58.871859117+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc0004e6a38 0xc0004e6a40 0xc0004e6a48 0xc0004e6a50 0xc0004e6a58 0xc0004e6a60 0xc0004e6a68 0xc0004e6a70 0xc0004e6a78 0xc0004e6a80 0xc0004e6a88 0xc0004e6a90 0xc0004e6a98 0xc0004e6aa0 0xc0004e6aa8 0xc0004e6ab0] [0xc0004e6ab8 0xc0004e6ac0 0xc0004e6ac8 0xc0004e6ad0 0xc0004e6ad8 0xc0004e6ae0 0xc0004e6ae8 0xc0004e6af0 0xc0004e6af8 0xc0004e6b00 0xc0004e6b08 0xc0004e6b10 0xc0004e6b18 0xc0004e6b20 0xc0004e6b28 0xc0004e6b30]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc0001d4300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=75639 source=virtcontainers subsystem=network
time="2019-07-29T10:13:59.468234766+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8 0xc000010ac0] [0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38 0xc000010b40]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc000536300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=75646 source=virtcontainers subsystem=network
time="2019-07-29T10:13:59.868813436+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010d58 0xc000010d60 0xc000010d68 0xc000010d70 0xc000010d78 0xc000010d80 0xc000010d88 0xc000010d90 0xc000010d98 0xc000010da0 0xc000010da8 0xc000010db0 0xc000010db8 0xc000010dc0 0xc000010dc8 0xc000010dd0] [0xc000010dd8 0xc000010de0 0xc000010de8 0xc000010df0 0xc000010df8 0xc000010e00 0xc000010e08 0xc000010e10 0xc000010e18 0xc000010e20 0xc000010e28 0xc000010e30 0xc000010e38 0xc000010e40 0xc000010e48 0xc000010e50]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc000536780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=network
time="2019-07-29T10:13:59.976953955+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="Proxy is not running: no such process" name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf sandboxid=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=sandbox
time="2019-07-29T10:14:00.102559511+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock): dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qmp
time="2019-07-29T10:14:00.160452348+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:14:00.235747271+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf dir=/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="lstat /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf: no such file or directory" name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:14:00.35248247+08:00" level=error msg="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=75646 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=runtime
time="2019-07-29T10:15:22.70412583+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0000aac80 0xc0000aac88 0xc0000aac90 0xc0000aac98 0xc0000aaca0 0xc0000aaca8 0xc0000aacb0 0xc0000aacb8 0xc0000aacc0 0xc0000aacc8 0xc0000aacd0 0xc0000aacd8 0xc0000aace0 0xc0000aace8 0xc0000aacf0 0xc0000aacf8] [0xc0000aad00 0xc0000aad08 0xc0000aad10 0xc0000aad18 0xc0000aad20 0xc0000aad28 0xc0000aad30 0xc0000aad38 0xc0000aad40 0xc0000aad48 0xc0000aad50 0xc0000aad58 0xc0000aad60 0xc0000aad68 0xc0000aad70 0xc0000aad78]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000554300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=77224 source=virtcontainers subsystem=network
time="2019-07-29T10:15:23.296535142+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0000aac80 0xc0000aac88 0xc0000aac90 0xc0000aac98 0xc0000aaca0 0xc0000aaca8 0xc0000aacb0 0xc0000aacb8 0xc0000aacc0 0xc0000aacc8 0xc0000aacd0 0xc0000aacd8 0xc0000aace0 0xc0000aace8 0xc0000aacf0 0xc0000aacf8] [0xc0000aad00 0xc0000aad08 0xc0000aad10 0xc0000aad18 0xc0000aad20 0xc0000aad28 0xc0000aad30 0xc0000aad38 0xc0000aad40 0xc0000aad48 0xc0000aad50 0xc0000aad58 0xc0000aad60 0xc0000aad68 0xc0000aad70 0xc0000aad78]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000532300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=77245 source=virtcontainers subsystem=network
time="2019-07-29T10:15:23.638789766+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0000aaf60 0xc0000aaf68 0xc0000aaf70 0xc0000aaf78 0xc0000aaf80 0xc0000aaf88 0xc0000aaf90 0xc0000aaf98 0xc0000aafa0 0xc0000aafa8 0xc0000aafb0 0xc0000aafb8 0xc0000aafc0 0xc0000aafc8 0xc0000aafd0 0xc0000aafd8] [0xc0000aafe0 0xc0000aafe8 0xc0000aaff0 0xc0000aaff8 0xc0000ab000 0xc0000ab008 0xc0000ab010 0xc0000ab018 0xc0000ab020 0xc0000ab028 0xc0000ab030 0xc0000ab038 0xc0000ab040 0xc0000ab048 0xc0000ab050 0xc0000ab058]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000532780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=network
time="2019-07-29T10:15:23.746968265+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="Proxy is not running: no such process" name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 sandboxid=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=sandbox
time="2019-07-29T10:15:23.847413376+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock): dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qmp
time="2019-07-29T10:15:23.880495286+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:15:23.93945155+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 dir=/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="lstat /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25: no such file or directory" name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:15:24.072476663+08:00" level=error msg="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=77245 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=runtime
time="2019-07-29T10:19:01.756199462+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010a40 0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8] [0xc000010ac0 0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc000536300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=81797 source=virtcontainers subsystem=network
time="2019-07-29T10:19:02.308050207+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc0004f6a40 0xc0004f6a48 0xc0004f6a50 0xc0004f6a58 0xc0004f6a60 0xc0004f6a68 0xc0004f6a70 0xc0004f6a78 0xc0004f6a80 0xc0004f6a88 0xc0004f6a90 0xc0004f6a98 0xc0004f6aa0 0xc0004f6aa8 0xc0004f6ab0 0xc0004f6ab8] [0xc0004f6ac0 0xc0004f6ac8 0xc0004f6ad0 0xc0004f6ad8 0xc0004f6ae0 0xc0004f6ae8 0xc0004f6af0 0xc0004f6af8 0xc0004f6b00 0xc0004f6b08 0xc0004f6b10 0xc0004f6b18 0xc0004f6b20 0xc0004f6b28 0xc0004f6b30 0xc0004f6b38]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc00051a300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=81805 source=virtcontainers subsystem=network
time="2019-07-29T10:19:02.750271861+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc0004f6d50 0xc0004f6d58 0xc0004f6d60 0xc0004f6d68 0xc0004f6d70 0xc0004f6d78 0xc0004f6d80 0xc0004f6d88 0xc0004f6d90 0xc0004f6d98 0xc0004f6da0 0xc0004f6da8 0xc0004f6db0 0xc0004f6db8 0xc0004f6dc0 0xc0004f6dc8] [0xc0004f6dd0 0xc0004f6dd8 0xc0004f6de0 0xc0004f6de8 0xc0004f6df0 0xc0004f6df8 0xc0004f6e00 0xc0004f6e08 0xc0004f6e10 0xc0004f6e18 0xc0004f6e20 0xc0004f6e28 0xc0004f6e30 0xc0004f6e38 0xc0004f6e40 0xc0004f6e48]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc00051a780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=network
time="2019-07-29T10:19:02.876097606+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="Proxy is not running: no such process" name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf sandboxid=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=sandbox
time="2019-07-29T10:19:03.017356287+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock): dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qmp
time="2019-07-29T10:19:03.058863757+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:19:03.092137797+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf dir=/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="lstat /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf: no such file or directory" name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:19:03.208968118+08:00" level=error msg="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=81805 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=runtime
time="2019-07-29T10:20:24.726980141+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc000010a40 0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8] [0xc000010ac0 0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000536300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=83425 source=virtcontainers subsystem=network
time="2019-07-29T10:20:25.327752223+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8 0xc000010ac0] [0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38 0xc000010b40]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000534300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=83432 source=virtcontainers subsystem=network
time="2019-07-29T10:20:25.678490481+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc000010d28 0xc000010d30 0xc000010d38 0xc000010d40 0xc000010d48 0xc000010d50 0xc000010d58 0xc000010d60 0xc000010d68 0xc000010d70 0xc000010d78 0xc000010d80 0xc000010d88 0xc000010d90 0xc000010d98 0xc000010da0] [0xc000010da8 0xc000010db0 0xc000010db8 0xc000010dc0 0xc000010dc8 0xc000010dd0 0xc000010dd8 0xc000010de0 0xc000010de8 0xc000010df0 0xc000010df8 0xc000010e00 0xc000010e08 0xc000010e10 0xc000010e18 0xc000010e20]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000534780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=network
time="2019-07-29T10:20:25.78663595+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="Proxy is not running: no such process" name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 sandboxid=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=sandbox
time="2019-07-29T10:20:25.886849432+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock): dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qmp
time="2019-07-29T10:20:25.953417936+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:20:25.995400753+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 dir=/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="lstat /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25: no such file or directory" name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:20:26.120536018+08:00" level=error msg="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=83432 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=runtime
time="2019-07-29T10:24:03.670930659+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010a40 0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8] [0xc000010ac0 0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc000534300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=87977 source=virtcontainers subsystem=network
time="2019-07-29T10:24:04.230074153+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010a48 0xc000010a50 0xc000010a58 0xc000010a60 0xc000010a68 0xc000010a70 0xc000010a78 0xc000010a80 0xc000010a88 0xc000010a90 0xc000010a98 0xc000010aa0 0xc000010aa8 0xc000010ab0 0xc000010ab8 0xc000010ac0] [0xc000010ac8 0xc000010ad0 0xc000010ad8 0xc000010ae0 0xc000010ae8 0xc000010af0 0xc000010af8 0xc000010b00 0xc000010b08 0xc000010b10 0xc000010b18 0xc000010b20 0xc000010b28 0xc000010b30 0xc000010b38 0xc000010b40]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc00052c300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=87984 source=virtcontainers subsystem=network
time="2019-07-29T10:24:04.655522688+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf endpoint="&{{{5b6c5fe7-07ce-410f-989b-c948f3df766a br1_kata {tap1_kata de:b8:03:ab:c1:af []} [0xc000010d58 0xc000010d60 0xc000010d68 0xc000010d70 0xc000010d78 0xc000010d80 0xc000010d88 0xc000010d90 0xc000010d98 0xc000010da0 0xc000010da8 0xc000010db0 0xc000010db8 0xc000010dc0 0xc000010dc8 0xc000010dd0] [0xc000010dd8 0xc000010de0 0xc000010de8 0xc000010df0 0xc000010df8 0xc000010e00 0xc000010e08 0xc000010e10 0xc000010e18 0xc000010e20 0xc000010e28 0xc000010e30 0xc000010e38 0xc000010e40 0xc000010e48 0xc000010e50]} {eth1 a2:09:d1:7b:9c:ab []} 4} {{{4 1500 0 eth1 de:b8:03:ab:c1:af up|broadcast|multicast 69699 2 0 <nil>  0xc00052c780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.224/25 eth1 fe80::dcb8:3ff:feab:c1af/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.224 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=network
time="2019-07-29T10:24:04.780342985+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="Proxy is not running: no such process" name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf sandboxid=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=sandbox
time="2019-07-29T10:24:04.930605313+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock): dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qmp
time="2019-07-29T10:24:04.980557434+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:24:05.022364973+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf dir=/run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf error="lstat /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf: no such file or directory" name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=virtcontainers subsystem=qemu
time="2019-07-29T10:24:05.147522684+08:00" level=error msg="dial unix /run/vc/vm/dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf name=kata-runtime pid=87984 sandbox=dcd89a770934f214bbfb0fc337f8327fbda1debe1decf1721f224bddd307fcbf source=runtime
time="2019-07-29T10:25:26.757498183+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=state container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0004e0a38 0xc0004e0a40 0xc0004e0a48 0xc0004e0a50 0xc0004e0a58 0xc0004e0a60 0xc0004e0a68 0xc0004e0a70 0xc0004e0a78 0xc0004e0a80 0xc0004e0a88 0xc0004e0a90 0xc0004e0a98 0xc0004e0aa0 0xc0004e0aa8 0xc0004e0ab0] [0xc0004e0ab8 0xc0004e0ac0 0xc0004e0ac8 0xc0004e0ad0 0xc0004e0ad8 0xc0004e0ae0 0xc0004e0ae8 0xc0004e0af0 0xc0004e0af8 0xc0004e0b00 0xc0004e0b08 0xc0004e0b10 0xc0004e0b18 0xc0004e0b20 0xc0004e0b28 0xc0004e0b30]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc0001c2300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=89659 source=virtcontainers subsystem=network
time="2019-07-29T10:25:27.333246638+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0000aac88 0xc0000aac90 0xc0000aac98 0xc0000aaca0 0xc0000aaca8 0xc0000aacb0 0xc0000aacb8 0xc0000aacc0 0xc0000aacc8 0xc0000aacd0 0xc0000aacd8 0xc0000aace0 0xc0000aace8 0xc0000aacf0 0xc0000aacf8 0xc0000aad00] [0xc0000aad08 0xc0000aad10 0xc0000aad18 0xc0000aad20 0xc0000aad28 0xc0000aad30 0xc0000aad38 0xc0000aad40 0xc0000aad48 0xc0000aad50 0xc0000aad58 0xc0000aad60 0xc0000aad68 0xc0000aad70 0xc0000aad78 0xc0000aad80]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000536300 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=89666 source=virtcontainers subsystem=network
time="2019-07-29T10:25:27.760073627+08:00" level=info msg="endpoint unmarshalled" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 endpoint="&{{{42ccee8b-344e-47b8-8f46-cf223d3b16f6 br1_kata {tap1_kata ca:e2:3b:0e:29:ad []} [0xc0000aaf68 0xc0000aaf70 0xc0000aaf78 0xc0000aaf80 0xc0000aaf88 0xc0000aaf90 0xc0000aaf98 0xc0000aafa0 0xc0000aafa8 0xc0000aafb0 0xc0000aafb8 0xc0000aafc0 0xc0000aafc8 0xc0000aafd0 0xc0000aafd8 0xc0000aafe0] [0xc0000aafe8 0xc0000aaff0 0xc0000aaff8 0xc0000ab000 0xc0000ab008 0xc0000ab010 0xc0000ab018 0xc0000ab020 0xc0000ab028 0xc0000ab030 0xc0000ab038 0xc0000ab040 0xc0000ab048 0xc0000ab050 0xc0000ab058 0xc0000ab060]} {eth1 f6:3f:49:ce:b6:e7 []} 4} {{{4 1500 0 eth1 ca:e2:3b:0e:29:ad up|broadcast|multicast 69699 2 0 <nil>  0xc000536780 0 <nil> ether <nil> unknown 0 0 0} macvlan} [220.194.64.219/25 eth1 fe80::c8e2:3bff:fe0e:29ad/64] [{Ifindex: 4 Dst: <nil> Src: <nil> Gw: 220.194.64.129 Flags: [] Table: 254} {Ifindex: 4 Dst: 220.194.64.128/25 Src: 220.194.64.219 Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: fe80::/64 Src: <nil> Gw: <nil> Flags: [] Table: 254} {Ifindex: 4 Dst: <nil> Src: <nil> Gw: fe80::3a0e:4dff:fed2:2eaf Flags: [] Table: 254}] {[]  [] []}} macvlan }" endpoint-type=macvlan name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=network
time="2019-07-29T10:25:27.868322339+08:00" level=warning msg="Agent did not stop sandbox" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="Proxy is not running: no such process" name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 sandboxid=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=sandbox
time="2019-07-29T10:25:27.977076319+08:00" level=warning msg="Unable to connect to unix socket (/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock): dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qmp
time="2019-07-29T10:25:28.018625578+08:00" level=error msg="Failed to connect to QEMU instance" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:25:28.060402469+08:00" level=warning msg="failed to resolve vm path" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 dir=/run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 error="lstat /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25: no such file or directory" name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=virtcontainers subsystem=qemu
time="2019-07-29T10:25:28.227289336+08:00" level=error msg="dial unix /run/vc/vm/33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25/qmp.sock: connect: no such file or directory" arch=amd64 command=kill container=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 name=kata-runtime pid=89666 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2019-07-27T00:35:53.055536596+08:00" level=info msg="time=\"2019-07-26T16:35:52.911834881Z\" level=info msg=\"ignoring unexpected signal\" name=kata-agent pid=169 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 signal=\"child exited\" source=agent\n" name=kata-proxy pid=220106 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=agent
time="2019-07-27T00:35:53.055738886+08:00" level=info msg="time=\"2019-07-26T16:35:52.912153579Z\" level=info msg=\"ignoring unexpected signal\" name=kata-agent pid=169 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 signal=\"child exited\" source=agent\n" name=kata-proxy pid=220106 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=agent
time="2019-07-27T00:36:48.71831361+08:00" level=fatal msg="channel error" error="session shutdown" name=kata-proxy pid=220106 sandbox=33b530cde3a4704e5c797e3944e98f54d5666ba65b420954db8e5fa048236a25 source=proxy
time="2019-07-29T10:12:57.847498201+08:00" level=info msg="time=\"2019-07-29T02:12:57.806756039Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent
time="2019-07-29T10:12:57.847638143+08:00" level=info msg="time=\"2019-07-29T02:12:57.807023118Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent
time="2019-07-29T10:13:03.319133744+08:00" level=info msg="time=\"2019-07-29T02:13:03.278481995Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent
time="2019-07-29T10:13:03.319250143+08:00" level=info msg="time=\"2019-07-29T02:13:03.278673665Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent
time="2019-07-29T10:13:55.292546536+08:00" level=info msg="time=\"2019-07-29T02:13:55.251747186Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent
time="2019-07-29T10:13:55.395270488+08:00" level=info msg="time=\"2019-07-29T02:13:55.354536278Z\" level=info msg=\"ignoring unexpected signal\" debug_console=false name=kata-agent pid=190 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce signal=\"child exited\" source=agent\n" name=kata-proxy pid=74010 sandbox=402e2000f225b1ab28373e71115f588014db925afbaa71a1201958a00d15cfce source=agent

Shim logs

Recent shim problems found in system journal:

time="2019-07-27T00:35:53.037701196+08:00" level=info msg="copy stdout failed" container=7c45f14645f28c294e8f231da47d9bdf28b386c1608ed2b1a5f33f164d0ac84b error="rpc error: code = Unknown desc = EOF" exec-id=7c45f14645f28c294e8f231da47d9bdf28b386c1608ed2b1a5f33f164d0ac84b name=kata-shim pid=1 source=shim
time="2019-07-27T00:35:53.037712462+08:00" level=info msg="copy stderr failed" container=7c45f14645f28c294e8f231da47d9bdf28b386c1608ed2b1a5f33f164d0ac84b error="rpc error: code = Unknown desc = EOF" exec-id=7c45f14645f28c294e8f231da47d9bdf28b386c1608ed2b1a5f33f164d0ac84b name=kata-shim pid=1 source=shim
time="2019-07-27T00:35:53.051936679+08:00" level=info msg="copy stdout failed" container=7c45f14645f28c294e8f231da47d9bdf28b386c1608ed2b1a5f33f164d0ac84b error="rpc error: code = Unknown desc = read /dev/ptmx: input/output error" exec-id=68eb6311-633f-47a7-9804-0235e4cb8c70 name=kata-shim pid=9 source=shim
time="2019-07-29T10:13:55.393821959+08:00" level=info msg="copy stdout failed" container=e924b267ef4e8758fa2cc9ea73420c77eb8a18532617bf3980d6ceb14d45bc7d error="rpc error: code = Unknown desc = read /dev/ptmx: input/output error" exec-id=e8f65c96-6399-4efd-8331-04df6eb29959 name=kata-shim pid=9 source=shim

Throttler logs

No recent throttler problems found in system journal.


Container manager details

No docker
Have kubectl

Kubernetes

Output of "kubectl version":

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Output of "kubectl config view":

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://220.194.64.132:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Output of "systemctl show kubelet":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestamp=Fri 2019-07-12 11:15:47 CST
WatchdogTimestampMonotonic=234075122187
StartLimitInterval=0
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=43447
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Fri 2019-07-12 11:15:47 CST
ExecMainStartTimestampMonotonic=234075122115
ExecMainExitTimestampMonotonic=0
ExecMainPID=43447
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[Fri 2019-07-12 11:15:47 CST] ; stop_time=[n/a] ; pid=43447 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=101855232
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
Environment=KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf\x20--kubeconfig=/etc/kubernetes/kubelet.conf KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFile=/etc/sysconfig/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=4096
LimitAS=18446744073709551615
LimitNPROC=770566
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=770566
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=system.slice systemd-journald.socket basic.target
Documentation=https://kubernetes.io/docs/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/etc/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Fri 2019-07-12 11:15:47 CST
InactiveExitTimestampMonotonic=234075122216
ActiveEnterTimestamp=Fri 2019-07-12 11:15:47 CST
ActiveEnterTimestampMonotonic=234075122216
ActiveExitTimestamp=Fri 2019-07-12 11:15:43 CST
ActiveExitTimestampMonotonic=234070821201
InactiveEnterTimestamp=Fri 2019-07-12 11:15:43 CST
InactiveEnterTimestampMonotonic=234070841948
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2019-07-12 11:15:47 CST
ConditionTimestampMonotonic=234075118217
AssertTimestamp=Fri 2019-07-12 11:15:47 CST
AssertTimestampMonotonic=234075118218
Transient=no

No crio
Have containerd

containerd

Output of "containerd --version":

containerd github.com/containerd/containerd v1.2.6 894b81a4b802e4eb2a91d1ce216b8817763c29fb

Output of "systemctl show containerd":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestamp=Thu 2019-07-11 23:04:54 CST
WatchdogTimestampMonotonic=190222470548
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=28387
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Thu 2019-07-11 23:04:54 CST
ExecMainStartTimestampMonotonic=190222470491
ExecMainExitTimestampMonotonic=0
ExecMainPID=28387
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStart={ path=/usr/local/bin/containerd ; argv[]=/usr/local/bin/containerd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=7974514688
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=1048576
LimitAS=18446744073709551615
LimitNPROC=18446744073709551615
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=770566
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=-999
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=basic.target
Wants=system.slice
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=network.target system.slice systemd-journald.socket basic.target
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/etc/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Thu 2019-07-11 23:04:54 CST
InactiveExitTimestampMonotonic=190222455938
ActiveEnterTimestamp=Thu 2019-07-11 23:04:54 CST
ActiveEnterTimestampMonotonic=190222470606
ActiveExitTimestamp=Thu 2019-07-11 23:04:54 CST
ActiveExitTimestampMonotonic=190222438078
InactiveEnterTimestamp=Thu 2019-07-11 23:04:54 CST
InactiveEnterTimestampMonotonic=190222454471
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2019-07-11 23:04:54 CST
ConditionTimestampMonotonic=190222455135
AssertTimestamp=Thu 2019-07-11 23:04:54 CST
AssertTimestampMonotonic=190222455136
Transient=no

Output of "cat /etc/containerd/config.toml":

root = "/data/containerd/lib"
state = "/data/containerd/run"
[plugins]
  [plugins.cri]
    sandbox_image = "hub.baidubce.com/edge/pause:3.1"
    [plugins.cri.containerd]
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/opt/kata/bin/kata-runtime"
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://mirror.baidubce.com"]
        [plugins.cri.registry.mirrors."hub.baidubce.com"]
          endpoint = ["http://hub.baidubce.com"]

Packages

No dpkg
Have rpm

Output of "rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":


jodh-intel commented 5 years ago

Hi @clarklee92 - thanks for raising. I can recreate this:

$ sudo apt -y install bash-static
$ mkdir /tmp/shared
$ cp /bin/bash-static /tmp/shared
$ sudo docker run -ti --runtime kata-runtime -v /tmp/shared:/shared centos:6 sh  
$ echo $?
139
$ sudo docker run -ti --runtime kata-runtime -v /tmp/shared:/shared centos:6 bash
$ echo $?                                                                        
139
$ sudo docker run -ti --runtime kata-runtime -v /tmp/shared:/shared centos:6 dash
# bash
Segmentation fault
# /shared/bash-static
[root@ce174b28c495 /]# exit
exit
# exit

If you enable proxy debug and try to run the CentOS 6 version of /bin/bash, you'll find something like the following in the proxy logs:

bash[102] vsyscall attempted with vsyscall=none ip:ffffffffff600400 cs:33 sp:7ffce1477248 ax:ffffffffff600400 si:7ffce1477f75 di:0
bash[102]: segfault at ffffffffff600400 ip ffffffffff600400 sp 00007ffce1477248 error 15
Code: Bad RIP value.

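For reference, the proxy debug that surfaces these guest messages is toggled in the runtime configuration; a minimal sketch, assuming the stock Kata 1.x configuration.toml layout:

[proxy.kata]
# forward guest kernel/agent output to the host journal
enable_debug = true
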
I think this is due to the current guest kernel config, specifically the CONFIG_LEGACY_VSYSCALL_* settings.
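
Judging by the "vsyscall=none" message above, the guest kernel was presumably built with something like the following (a sketch inferred from the log, not the verbatim Kata kernel config):

# CONFIG_LEGACY_VSYSCALL_NATIVE is not set
# CONFIG_LEGACY_VSYSCALL_EMULATE is not set
CONFIG_LEGACY_VSYSCALL_NONE=y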

@grahamwhaley and @devimc may have further info on this.

grahamwhaley commented 5 years ago

I suspect this is indeed the missing vsyscall support on the old distro/libraries - it is documented in the configuration file at https://github.com/kata-containers/runtime/blob/master/cli/config/configuration-qemu.toml.in#L20-L30 as:

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS@"

@clarklee92 - please try adding that kernel command line option to your kata config file and see if that helps.
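
For example, with the kata-static layout this is a one-line edit (a sketch; adjust the path to wherever your configuration-qemu.toml actually lives):

# /opt/kata/share/defaults/kata-containers/configuration-qemu.toml
[hypervisor.qemu]
kernel_params = "vsyscall=emulate"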

clarklee92 commented 5 years ago

Thank you! I just added kernel_params = "vsyscall=emulate" to the config, without recompiling the kernel. It seems OK now, but I am a little worried about the performance since emulation is used. We will test and report back with more info.

zhiminghufighting commented 5 years ago

@jodh-intel a centos7 container image works fine with the same host and Kata config. Does centos7 not need to invoke vsyscall?

jodh-intel commented 5 years ago

@zhiminghufighting - The comment that @grahamwhaley pasted shows the key factor is whether the version of glibc is newer than 2.15...
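
A quick way to check which side of that line an image falls on (a sketch; centos:6 ships glibc 2.12, centos:7 ships glibc 2.17):

$ docker run --rm centos:6 rpm -q glibc
$ docker run --rm centos:7 rpm -q glibc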

clarklee92 commented 5 years ago

"yum install will hang forever at installation step": this still exists. Now I am running a centos6 container, and the problem differs depending on the VM image:

Warning: RPMDB altered outside of yum.
  Installing : rdma-6.9_4.1-3.el6.noarch

hanging there forever.

It is not related to any specific package; I have tried many. @jodh-intel please take a look, many thanks.

grahamwhaley commented 5 years ago

Let's try to pick some setup details out - @clarklee92, please correct me if I get any of these wrong:

What      Version                                             Notes
stack     k8s v1.13.4                                         kata-deploy install? (/opt/kata)
runtime   1.6.2                                               1.6.7 is released
qemu      qemu 2.11.2                                         kata-static
rootfs    centos7
agent     1.7.0-rc1-f983b3665ff954864de23c0a81e15378ef300855  Not v1.6.2?
proxy     1.6.2
shim      1.6.2
kernel    4.19.28-33
storage   virtio-scsi

I guess the thing that jumps out at me straight away is that you have kata v1.6.2 static installed, but have the agent v1.7.0 in your image. Maybe you checked out the osbuilder repository from the HEAD branch, and not the v1.6.2 branch. I don't know how well we fork/branch osbuilder, or if the agent version difference will matter.
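
If that is what happened, rebuilding the guest image from the matching release should line the agent version up (a sketch, assuming osbuilder carries release tags in lockstep with the runtime):

$ git clone https://github.com/kata-containers/osbuilder
$ cd osbuilder
$ git checkout 1.6.2    # hypothetical tag matching the installed runtime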

Also, v1.6.7 of Kata is available (or even v1.8 etc. if you were to upgrade). I've not looked at what was fixed between v1.6.2 and v1.6.7.

clarklee92 commented 5 years ago

I guess the thing that jumps out at me straight away is that you have kata v1.6.2 static installed, but have the agent v1.7.0 in your image. Maybe you checked out the osbuilder repository from the HEAD branch, and not the v1.6.2 branch. I don't know how well we fork/branch osbuilder, or if the agent version difference will matter.

Also, v1.6.7 of Kata is available (or even v1.8 etc. if you were to upgrade). I've not looked at what was fixed between v1.6.2 and v1.6.7.

Yep, I knew I was using kata-agent 1.7.0 in my own build image, but kata-containers-image_clearlinux_1.6.2_agent.img raised the same problem. Storage is now overlay (9pfs).

devimc commented 5 years ago

@clarklee92

"yum install will hang forever at installation step": this still exists.

yes, this is a known issue https://github.com/kata-containers/osbuilder/issues/237; the solution is to use kata-agent as init: kernel_params = "init=/usr/bin/kata-agent". You will get a better boot time, a smaller footprint, and yum working again :smile:

clarklee92 commented 5 years ago

@clarklee92

"yum install will hang forever at installation step": this still exists.

yes, this is a known issue kata-containers/osbuilder#237; the solution is to use kata-agent as init: kernel_params = "init=/usr/bin/kata-agent". You will get a better boot time, a smaller footprint, and yum working again 😄

I've tried the kata-1.8.0 static release with kata-containers-image_clearlinux_1.8.0_agent.img and added kernel_params = "vsyscall=emulate init=/usr/bin/kata-agent" to the config.

The problem turned from hanging to:

Installing : librdmacm-1.0.21-0.el6.x86_64                                                                                                                                                                        1/2
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 444, in callback
    self._instCloseFile(  bytes, total, h )
  File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 526, in _instCloseFile
    self.base.history.trans_data_pid_end(pid, state)
  File "/usr/lib/python2.6/site-packages/yum/history.py", line 868, in trans_data_pid_end
    """, ('TRUE', self._tid, pid, state))
  File "/usr/lib/python2.6/site-packages/yum/sqlutils.py", line 168, in executeSQLQmark
    return cursor.execute(query, params)
sqlite3.OperationalError: disk I/O error
error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x24829e0>> failed, aborting!

😬

devimc commented 5 years ago

@clarklee92 this is a problem with 9p; to fix it you need to use virtio-fs or switch to devicemapper

clarklee92 commented 5 years ago

@clarklee92 this is a problem with 9p; to fix it you need to use virtio-fs or switch to devicemapper

Oh, no. So we finally come back to this issue?
In fact, 9p is a temporary option for us, as we thought a k8s block device could be used. virtio-fs (especially its quota support) is not good enough in my opinion. devicemapper is hard to operate and maintain (containerd supports it only from 1.3.0); the alternative is to develop your own raw driver. Still, I think devicemapper is Kata's future if it is well reconstructed. (purely personal view)😊

clarklee92 commented 5 years ago

@clarklee92 this is a problem with 9p; to fix it you need to use virtio-fs or switch to devicemapper

Today I tried 1.8 with nemu + virtio-fs; the centos6 container started up normally. Unfortunately, the yum install error message changed, as below:

Running Transaction
  Installing : rdma-6.9_4.1-3.el6.noarch                                                                                                                                                                                                                                 1/4
  Installing : libibverbs-1.1.8-4.el6.x86_64                                                                                                                                                                                                                             2/4
  Installing : librdmacm-1.0.21-0.el6.x86_64                                                                                                                                                                                                                             3/4
  Installing : fio-2.0.13-2.el6.x86_64                                                                                                                                                                                                                                   4/4
  Verifying  : rdma-6.9_4.1-3.el6.noarch                                                                                                                                                                                                                                 1/4
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 298, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 227, in main
    return_code = base.doTransaction()
  File "/usr/share/yum-cli/cli.py", line 588, in doTransaction
    resultobject = self.runTransaction(cb=cb)
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 1630, in runTransaction
    self.verifyTransaction(resultobject, vTcb)
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 1680, in verifyTransaction
    po.yumdb_info.checksum_type = str(csum[0])
  File "/usr/lib/python2.6/site-packages/yum/rpmsack.py", line 1857, in __setattr__
    self._write(attr, value)
  File "/usr/lib/python2.6/site-packages/yum/rpmsack.py", line 1793, in _write
    misc.unlink_f(fn + '.tmp')
  File "/usr/lib/python2.6/site-packages/yum/misc.py", line 912, in unlink_f
    os.unlink(filename)
OSError: [Errno 5] Input/output error: '/var/lib/yum/yumdb/r/fd63d4cb559a10ba076a3091dfb77ed852b2db99-rdma-6.9_4.1-3.el6-noarch/checksum_type.tmp'

grahamwhaley commented 5 years ago

Hi @clarklee92 - can you show us what the output of mount looks like inside your container? That unlink-of-a-tmp-file type of error is classically what we saw with 9p, and I am wondering what sort of filesystem your /tmp path is on... just in case it is still a 9p mount, for instance. One way to check whether this is the likely problem is to put the TMP dir on a RAMFS. In the iperf3 test we do that by setting TMPDIR to point at /dev/shm (a quick hack): https://github.com/kata-containers/tests/blob/master/metrics/network/network-metrics-iperf3.sh#L38-L41

Previously we have also used a ramfs remounted over /tmp as a test/workaround - you will need to add the SYS_ADMIN capability to your container to be able to do the mount, I believe.
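
Both workarounds from inside the container, as a sketch (the remount needs the SYS_ADMIN capability mentioned above):

$ export TMPDIR=/dev/shm     # quick hack: use the existing tmpfs
$ mount -t ramfs ramfs /tmp  # needs CAP_SYS_ADMIN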

clarklee92 commented 5 years ago

I have noticed that the output of df -lh and mount differs between centos6 and centos7 kata containers:

df -lh

Filesystem      Size  Used Avail Use% Mounted on
kataShared      7.1T  6.8G  6.8T   1% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
shm              64M     0   64M   0% /dev/shm
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware

mount

kataShared on / type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (ro,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
kataShared on /etc/hosts type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)
kataShared on /dev/termination-log type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)
kataShared on /etc/hostname type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)
kataShared on /etc/resolv.conf type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
kataShared on /run/secrets/kubernetes.io/serviceaccount type 9p (ro,relatime,dirsync,mmap,access=client,trans=virtio)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)

I also logged into the VM to get this:

Filesystem          Size      Used    Avail   Use%  Mounted on
/dev/root           1.5G      595M     822M    42%   /
devtmpfs             16G         0      16G     0%   /dev
tmpfs                16G         0      16G     0%   /dev/shm
tmpfs                16G       36k      16G     1%   /run
tmpfs                16G         0      16G     0%   /sys/fs/cgroup
kataShared           95G      4.2G      90G     5%   /run/kata-containers/shared/containers
shm                  64M         0      64M     0%   /run/kata-containers/sandbox/shm

df -lh

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.1T  6.8G  6.8T   1% /
tmpfs            64M     0   64M   0% /dev/shm

mount

/dev/sda1 on / type ext4 (rw)
devpts on /dev/pts type devpts (rw)
tmpfs on /dev/shm type tmpfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)

Only two lines - is this the key point?

grahamwhaley commented 5 years ago

My key points were:

1) to see the mount output inside the failing container, to try and rule out 9p, and 2) to try replacing /tmp with a ramfs, to confirm whether this was a 9p issue.

So we could try to rule out 9p as the problem.

With regard to (1), in your centos7 mount list I see:

kataShared on / type 9p (rw,nodev,relatime,dirsync,mmap,access=client,trans=virtio)

and no separate mount point for /tmp. That means that, in that container, your /tmp is on a 9p filesystem, and this can cause known issues like the ones you are seeing.

Your mount output for centos6 - the one that only shows 5 lines - was that taken inside the container? We need to ask @devimc and @amshinde whether that is what we would expect to see. The good thing is that it shows no 9p mount points.

devimc commented 5 years ago

Today I tried 1.8 with nemu + virtio-fs; the centos6 container started up normally. Unfortunately, the yum install error message changed, as below:

cc @ganeshmaharaj

Your mount output for centos6 - the one that only shows 5 lines - was that taken inside the container? We need to ask @devimc and @amshinde whether that is what we would expect to see. The good thing is that it shows no 9p mount points.

that's weird, because /etc/resolv.conf, /etc/hostname and /etc/hosts are shared through 9p, so they should be there. I ran a centos 6 container and I can see them:

# df -lh
Filesystem      Size  Used Avail Use% Mounted on
kataShared       69G   29G   38G  43% /
tmpfs            64M     0   64M   0% /dev
tmpfs           997M     0  997M   0% /sys/fs/cgroup
kataShared       69G   29G   38G  43% /etc/resolv.conf
kataShared       69G   29G   38G  43% /etc/hostname
kataShared       69G   29G   38G  43% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           997M     0  997M   0% /proc/acpi
tmpfs            64M     0   64M   0% /proc/keys
tmpfs            64M     0   64M   0% /proc/timer_list
tmpfs            64M     0   64M   0% /proc/sched_debug
tmpfs           997M     0  997M   0% /proc/scsi
tmpfs           997M     0  997M   0% /sys/firmware

@clarklee92 are you using a custom kernel? Or maybe we are seeing different output because of the kata version :thinking:

clarklee92 commented 5 years ago

@clarklee92 are you using a custom kernel? Or maybe we are seeing different output because of the kata version 🤔

No, I just downloaded it from https://github.com/kata-containers/runtime/releases/download/1.8.0/kata-static-1.8.0-x86_64.tar.xz, untarred it to /opt/kata, and configured the containerd untrusted workload path to /opt/kata/bin/kata-nemu, with overlay as the default snapshotter, following https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-virtio-fs-with-kata.md. Everything is OK and the pod starts up successfully. I've made sure virtio-fs and nemu are in use via ps aux.

My host is bare metal running centos7 / kernel 3.10. I don't understand what leads to that.

stefanha commented 5 years ago

@clarklee92 Please retry without overlayfs. virtio-fs is currently not compatible with overlayfs and this may solve the issue.

clarklee92 commented 5 years ago

@clarklee92 Please retry without overlayfs. virtio-fs is currently not compatible with overlayfs and this may solve the issue.

Sorry, which containerd snapshotter should I choose?

stefanha commented 5 years ago

@clarklee92 It depends on your environment. "devmapper" should work well but may require device-mapper setup. "native" is less efficient but should work in all cases.
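
For containerd that is a one-line change in the CRI plugin section of /etc/containerd/config.toml (a sketch against the 1.2.x-style config shown earlier; "native" needs no extra setup):

[plugins.cri.containerd]
  snapshotter = "native"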

ganeshmaharaj commented 5 years ago

@clarklee92

"yum install will hang forever at installation step": this still exists.

yes, this is a known issue kata-containers/osbuilder#237; the solution is to use kata-agent as init: kernel_params = "init=/usr/bin/kata-agent". You will get a better boot time, a smaller footprint, and yum working again 😄

I've tried the kata-1.8.0 static release with kata-containers-image_clearlinux_1.8.0_agent.img and added kernel_params = "vsyscall=emulate init=/usr/bin/kata-agent" to the config.

The problem turned from hanging to:

Installing : librdmacm-1.0.21-0.el6.x86_64                                                                                                                                                                        1/2
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 444, in callback
    self._instCloseFile(  bytes, total, h )
  File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 526, in _instCloseFile
    self.base.history.trans_data_pid_end(pid, state)
  File "/usr/lib/python2.6/site-packages/yum/history.py", line 868, in trans_data_pid_end
    """, ('TRUE', self._tid, pid, state))
  File "/usr/lib/python2.6/site-packages/yum/sqlutils.py", line 168, in executeSQLQmark
    return cursor.execute(query, params)
sqlite3.OperationalError: disk I/O error
error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x24829e0>> failed, aborting!

😬

@clarklee92 I just tested this with the latest kata 1.9.0-alpha0 and saw that crash once, but the app did get installed. Since then I have tried removing and re-installing it, and every time I get this message.

sh-4.1# yum remove vim
Loaded plugins: fastestmirror, ovl
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_10.2 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================================================================================================================================
 Package                                                           Arch                                                        Version                                                                    Repository                                                     Size
==============================================================================================================================================================================================================================================================================
Removing:
 vim-enhanced                                                      x86_64                                                      2:7.4.629-5.el6_10.2                                                       @updates                                                      2.2 M

Transaction Summary
==============================================================================================================================================================================================================================================================================
Remove        1 Package(s)

Installed size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing    : 2:vim-enhanced-7.4.629-5.el6_10.2.x86_64                                                                                                                                                                                                                   1/1
  Verifying  : 2:vim-enhanced-7.4.629-5.el6_10.2.x86_64                                                                                                                                                                                                                   1/1

Removed:
  vim-enhanced.x86_64 2:7.4.629-5.el6_10.2

Complete!
sh-4.1# yum install vim
Loaded plugins: fastestmirror, ovl
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirrors.usc.edu
 * extras: mirrors.usc.edu
 * updates: mirrors.usc.edu
Resolving Dependencies
--> Running transaction check
---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_10.2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================================================================================================================================
 Package                                                           Arch                                                        Version                                                                     Repository                                                    Size
==============================================================================================================================================================================================================================================================================
Installing:
 vim-enhanced                                                      x86_64                                                      2:7.4.629-5.el6_10.2                                                        updates                                                      1.0 M

Transaction Summary
==============================================================================================================================================================================================================================================================================
Install       1 Package(s)

Total download size: 1.0 M
Installed size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
vim-enhanced-7.4.629-5.el6_10.2.x86_64.rpm                                                                                                                                                                                                             | 1.0 MB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : 2:vim-enhanced-7.4.629-5.el6_10.2.x86_64                                                                                                                                                                                                                   1/1
  Verifying  : 2:vim-enhanced-7.4.629-5.el6_10.2.x86_64                                                                                                                                                                                                                   1/1

Installed:
  vim-enhanced.x86_64 2:7.4.629-5.el6_10.2

Complete!
sh-4.1#

The big thing there is the Warning: RPMDB altered outside of yum. message. This is indeed odd, as there is no other process running in the container that should modify the contents of the rootfs. The only thing I can think of is the virtio-fs daemon on the host syncing with the cached data in the guest.

ganeshmaharaj commented 5 years ago

@clarklee92 Please retry without overlayfs. virtio-fs is currently not compatible with overlayfs and this may solve the issue.

Sorry, which containerd snapshotter should I choose?

You can use the devmapper snapshotter that comes with containerd (https://github.com/containerd/containerd/tree/master/snapshots/devmapper), or the equivalent devicemapper storage for cri-o, depending on your choice of runtime.
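
A minimal devmapper configuration might look like this (a sketch; pool_name must reference a thin-pool you have already created, and the snapshotter ships with containerd 1.3+):

[plugins.devmapper]
  pool_name = "containerd-pool"
  base_image_size = "10GB"
[plugins.cri.containerd]
  snapshotter = "devmapper"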

skaegi commented 4 years ago

Some more recent data using virtiofs... I'm using the 1.11.0 kata-static release for the kata-runtime and kata-agent.

With the clear containers image, kernel 5.6.15, and qemu 5.0.0 with virtio-fs, yum install was hanging, but with the kernel_params = "init=/usr/bin/kata-agent" suggestion from @devimc everything works.