clearcontainers / runtime

OCI (Open Containers Initiative) compatible runtime using Virtual Machines
Apache License 2.0

docker: Error response from daemon: oci runtime error: Unable to launch qemu: exit status 1. #842

Closed: sidealice closed this issue 6 years ago.

sidealice commented 6 years ago

Description of problem

When I run "sudo docker run -ti busybox sh", I get the following error:

Actual result

docker: Error response from daemon: oci runtime error: Unable to launch qemu: exit status 1.


The hardware passes both cc-runtime cc-check and ./clear-linux-check-config.sh container:

Output of "cc-runtime cc-check":

INFO[0000] CPU property found description="Intel Architecture CPU" name=GenuineIntel source=runtime type=attribute
INFO[0000] CPU property found description="Virtualization support" name=vmx source=runtime type=flag
INFO[0000] CPU property found description="64Bit CPU" name=lm source=runtime type=flag
INFO[0000] CPU property found description=SSE4.1 name=sse4_1 source=runtime type=flag
INFO[0000] kernel property found description="Host kernel accelerator for virtio network" name=vhost_net source=runtime type=module
INFO[0000] kernel property found description="Kernel-based Virtual Machine" name=kvm source=runtime type=module
INFO[0000] kernel property found description="Intel KVM" name=kvm_intel source=runtime type=module
INFO[0000] Kernel property value correct description="Intel KVM" name=kvm_intel source=runtime type=module
INFO[0000] Kernel property value correct description="Intel KVM" name=kvm_intel source=runtime type=module
INFO[0000] kernel property found description="Host kernel accelerator for virtio" name=vhost source=runtime type=module
INFO[0000] System is capable of running Intel® Clear Containers source=runtime

Output of "./clear-linux-check-config.sh container":

SUCCESS: Intel CPU
SUCCESS: 64-bit CPU (lm)
SUCCESS: Streaming SIMD Extensions v4.1 (sse4_1)
SUCCESS: Virtualisation support (vmx)
SUCCESS: Kernel module kvm
SUCCESS: Kernel module kvm_intel
SUCCESS: Nested KVM support
SUCCESS: Unrestricted guest KVM support
SUCCESS: Kernel module vhost
SUCCESS: Kernel module vhost_net

CPU info: Intel Xeon E5620 × 2

Thanks for the help

sidealice commented 6 years ago

Docker version 17.09.0-ce, build afdb6d4

sidealice commented 6 years ago

Ubuntu 16.04.3 LTS

jodh-intel commented 6 years ago

Hi @sidealice - please could you paste the output of sudo cc-collect-data.sh as a comment on this issue (after checking it doesn't contain anything you consider confidential or sensitive).
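
For reference, here is a minimal way to capture the report for review before posting. This is just a sketch using plain shell redirection; the cc-collect-data.log filename is only an example:

# Run the collection script and save the full report to a file
# so it can be checked for sensitive data before pasting here.
sudo cc-collect-data.sh > cc-collect-data.log 2>&1

# Review the saved report before posting it.
less cc-collect-data.log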

sidealice commented 6 years ago

Meta details

Running cc-collect-data.sh version 3.0.9 (commit c3e0aca) at 2017-12-04.20:36:41.032068522.


Runtime is /usr/bin/cc-runtime.

cc-env

Output of "/usr/bin/cc-runtime cc-env":

[Meta]
  Version = "1.0.6"

[Runtime]
  [Runtime.Version]
    Semver = "3.0.9"
    Commit = "c3e0aca"
    OCI = "1.0.0-dev"
  [Runtime.Config]
    Path = "/usr/share/defaults/clear-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.7.1(2.7.1+git.d4a337fe91-9.cc), Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"

[Image]
  Path = "/usr/share/clear-containers/clear-19350-containers.img"

[Kernel]
  Path = "/usr/share/clear-containers/vmlinuz-4.9.60-80.container"
  Parameters = ""

[Proxy]
  Type = "ccProxy"
  Version = "Version: 3.0.9+git.4d8aed1"
  URL = "unix:///var/run/clear-containers/proxy.sock"

[Shim]
  Type = "ccShim"
  Version = "shim version: 3.0.9 (commit: 838039b)"
  Path = "/usr/libexec/clear-containers/cc-shim"

[Agent]
  Type = "hyperstart"
  Version = "<<unknown>>"

[Host]
  Kernel = "4.4.0-34-generic"
  CCCapable = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "16.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz"

Runtime config files

Runtime default config files

/usr/share/defaults/clear-containers/configuration.toml

Runtime config file contents

Config file /etc/clear-containers/configuration.toml not found.

Output of "cat "/usr/share/defaults/clear-containers/configuration.toml"":

# XXX: Warning: this file is auto-generated from file "config/configuration.toml.in".

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/clear-containers/vmlinuz.container"
image = "/usr/share/clear-containers/clear-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc
kernel_params = ""

# Path to the firmware.
# Leave this option empty if you want qemu to use the default firmware.
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per POD/VM:
# unspecified or 0 --> will be set to 1
# < 0              --> will be set to the actual number of physical cores
# > 0 <= 255       --> will be set to the specified number
# > 255            --> will be set to 255
default_vcpus = -1

# Default memory size in MiB for POD/VM.
# If unspecified, it will be set to 2048 MiB.
#default_memory = 2048
disable_block_device_use = false

# Enable preallocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory preallocation.
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# Debug changes the default hypervisor and kernel parameters to
# enable debug output where available.
# Default false.
# These logs can be obtained from the cc-proxy logs when the
# proxy is set to run in debug mode:
# /usr/libexec/clear-containers/cc-proxy -log debug
# or by stopping the cc-proxy service and running cc-proxy
# explicitly with the same command line.
#
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

[proxy.cc]
url = "unix:///var/run/clear-containers/proxy.sock"

[shim.cc]
path = "/usr/libexec/clear-containers/cc-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
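
Side note: the runtime prefers /etc/clear-containers/configuration.toml over the packaged default shown above (hence the "not found" message earlier in this report), so the commented-out debug options can be enabled in a local copy rather than in the packaged file. A sketch, assuming the standard paths from this report:

# Copy the packaged default config to the local override path.
sudo mkdir -p /etc/clear-containers
sudo cp /usr/share/defaults/clear-containers/configuration.toml \
    /etc/clear-containers/configuration.toml

# Uncomment every "#enable_debug = true" line in the copy.
sudo sed -i 's/^#enable_debug = true/enable_debug = true/' \
    /etc/clear-containers/configuration.toml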

Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2017-12-04T02:34:08+08:00" level=info msg="launching qemu with: [-name pod-db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84 -uuid 8ac137f6-5913-4744-a6d4-8363a6246aa2 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84/8ac137f6-5913-474,server,nowait -qmp unix:/run/virtcontainers/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84/8ac137f6-5913-474,server,nowait -m 2048M,slots=2,maxmem=17055M -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/aufs/mnt/38c1620f25a008ff111a40086c246071c59b629bc3fc351a9c983d15bed66ad0,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-19350-containers.img,size=235929600 -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/run/virtcontainers/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84/hyper.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/run/virtcontainers/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84/tty.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84,security_model=none -netdev tap,id=network-0,vhost=on,fds=3:4:5:6:7:8:9:10 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinuz-4.9.60-80.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug iommu=off cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false init=/usr/lib/systemd/systemd systemd.unit=clear-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket ip=::::::db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84::off::]" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:08+08:00" level=error msg="Unable to launch qemu: exit status 1" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:08+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:08+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=runtime
time="2017-12-04T02:34:08+08:00" level=error msg="unknown containerID: db3ab33e7b3b25481da8169a6cc2df8e9cd78548e70c4263a22d0e2232452b84" source=runtime
time="2017-12-04T02:34:08+08:00" level=error msg="Can not move from stopped to stopped" source=runtime
time="2017-12-04T02:34:37+08:00" level=info msg="launching qemu with: [-name pod-0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2 -uuid 91ce7c81-468c-4b60-b87f-1116535b024b -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2/91ce7c81-468c-4b6,server,nowait -qmp unix:/run/virtcontainers/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2/91ce7c81-468c-4b6,server,nowait -m 2048M,slots=2,maxmem=17055M -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/aufs/mnt/2e21f528f9198f06c5cf33978fa40409a7004de5830a2804f60fdc76fd13f9f4,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-19350-containers.img,size=235929600 -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/run/virtcontainers/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2/hyper.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/run/virtcontainers/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2/tty.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2,security_model=none -netdev tap,id=network-0,vhost=on,fds=3:4:5:6:7:8:9:10 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinuz-4.9.60-80.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug iommu=off cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false init=/usr/lib/systemd/systemd systemd.unit=clear-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket ip=::::::0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2::off::]" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:37+08:00" level=error msg="Unable to launch qemu: exit status 1" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:37+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=virtcontainers subsystem=qmp
time="2017-12-04T02:34:37+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=runtime
time="2017-12-04T02:34:37+08:00" level=error msg="unknown containerID: 0dc86374592aa8dbae45a59cfa63444b0b08a63500726d56cb5a75c907d26dd2" source=runtime
time="2017-12-04T02:34:37+08:00" level=error msg="Can not move from stopped to stopped" source=runtime
time="2017-12-04T02:35:03+08:00" level=info msg="launching qemu with: [-name pod-12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1 -uuid 914d4ea7-d42e-4066-b806-dd960ea620fd -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1/914d4ea7-d42e-406,server,nowait -qmp unix:/run/virtcontainers/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1/914d4ea7-d42e-406,server,nowait -m 2048M,slots=2,maxmem=17055M -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/aufs/mnt/e480318aa4330fe982b24b4e9e7e130e6bab2f1a95f15456aae93a9f7f392cb2,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-19350-containers.img,size=235929600 -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/run/virtcontainers/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1/hyper.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/run/virtcontainers/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1/tty.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1,security_model=none -netdev tap,id=network-0,vhost=on,fds=3:4:5:6:7:8:9:10 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinuz-4.9.60-80.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug iommu=off cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false init=/usr/lib/systemd/systemd systemd.unit=clear-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket ip=::::::12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1::off::]" source=virtcontainers subsystem=qmp
time="2017-12-04T02:35:03+08:00" level=error msg="Unable to launch qemu: exit status 1" source=virtcontainers subsystem=qmp
time="2017-12-04T02:35:03+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=virtcontainers subsystem=qmp
time="2017-12-04T02:35:03+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=runtime
time="2017-12-04T02:35:04+08:00" level=error msg="unknown containerID: 12118d198bb6ec7102ac8d68ae17781560ca1c69b160431edcb8293cd89c6ba1" source=runtime
time="2017-12-04T02:35:04+08:00" level=error msg="Can not move from stopped to stopped" source=runtime
time="2017-12-04T02:43:49+08:00" level=info msg="launching qemu with: [-name pod-583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c -uuid 592b9048-b149-419a-bca5-f0a8e09d95d1 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c/592b9048-b149-419,server,nowait -qmp unix:/run/virtcontainers/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c/592b9048-b149-419,server,nowait -m 2048M,slots=2,maxmem=17055M -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/aufs/mnt/0aa80cb816346cd524f2e11559894f55d2b08d8a9a7fecdf835924f7a1bbe833,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-19350-containers.img,size=235929600 -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/run/virtcontainers/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c/hyper.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/run/virtcontainers/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c/tty.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c,security_model=none -netdev tap,id=network-0,vhost=on,fds=3:4:5:6:7:8:9:10 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinuz-4.9.60-80.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug iommu=off cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false init=/usr/lib/systemd/systemd systemd.unit=clear-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket ip=::::::583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c::off::]" source=virtcontainers subsystem=qmp
time="2017-12-04T02:43:49+08:00" level=error msg="Unable to launch qemu: exit status 1" source=virtcontainers subsystem=qmp
time="2017-12-04T02:43:49+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=virtcontainers subsystem=qmp
time="2017-12-04T02:43:49+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=runtime
time="2017-12-04T02:43:49+08:00" level=error msg="unknown containerID: 583e8b8375ab90a6e203f3c442a1085ea259c4ad1f14edd3e560f4c1110b092c" source=runtime
time="2017-12-04T02:43:49+08:00" level=error msg="Can not move from stopped to stopped" source=runtime
time="2017-12-04T02:46:05+08:00" level=info msg="launching qemu with: [-name pod-e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1 -uuid 7d63c0a1-3999-4edd-aef1-77adcb64c166 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host -qmp unix:/run/virtcontainers/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1/7d63c0a1-3999-4ed,server,nowait -qmp unix:/run/virtcontainers/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1/7d63c0a1-3999-4ed,server,nowait -m 2048M,slots=2,maxmem=17055M -smp 24,cores=24,threads=1,sockets=1 -device virtio-9p-pci,fsdev=ctr-9p-0,mount_tag=ctr-rootfs-0 -fsdev local,id=ctr-9p-0,path=/var/lib/docker/aufs/mnt/eebe76537c107cb6136f14cb8e2f9d2eb5f1ac67b4b74f838e2d72179e084d6b,security_model=none -device virtio-serial-pci,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/virtcontainers/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/usr/share/clear-containers/clear-19350-containers.img,size=235929600 -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on -device virtserialport,chardev=charch0,id=channel0,name=sh.hyper.channel.0 -chardev socket,id=charch0,path=/run/virtcontainers/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1/hyper.sock,server,nowait -device virtserialport,chardev=charch1,id=channel1,name=sh.hyper.channel.1 -chardev socket,id=charch1,path=/run/virtcontainers/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1/tty.sock,server,nowait -device virtio-9p-pci,fsdev=extra-9p-hyperShared,mount_tag=hyperShared -fsdev local,id=extra-9p-hyperShared,path=/tmp/hyper/shared/pods/e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1,security_model=none -netdev tap,id=network-0,vhost=on,fds=3:4:5:6:7:8:9:10 -device driver=virtio-net-pci,netdev=network-0,mac=02:42:ac:11:00:02,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/clear-containers/vmlinuz-4.9.60-80.container -append root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k panic=1 console=hvc0 console=hvc1 initcall_debug iommu=off cryptomgr.notests net.ifnames=0 quiet systemd.show_status=false init=/usr/lib/systemd/systemd systemd.unit=clear-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket ip=::::::e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1::off::]" source=virtcontainers subsystem=qmp
time="2017-12-04T02:46:05+08:00" level=error msg="Unable to launch qemu: exit status 1" source=virtcontainers subsystem=qmp
time="2017-12-04T02:46:05+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=virtcontainers subsystem=qmp
time="2017-12-04T02:46:05+08:00" level=error msg="ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy\nfailed to initialize KVM: Device or resource busy\n" source=runtime
time="2017-12-04T02:46:06+08:00" level=error msg="unknown containerID: e680d3cb352e2bfbc69b364e4e29050d863ba0bbb236779f77e5900f860cafa1" source=runtime
time="2017-12-04T02:46:06+08:00" level=error msg="Can not move from stopped to stopped" source=runtime

Proxy logs

No recent proxy problems found in system journal.

Shim logs

No recent shim problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker info":

Containers: 5
 Running: 0
 Paused: 0
 Stopped: 5
Images: 8
Server Version: 17.09.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 18
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: cc-runtime runc
Default Runtime: cc-runtime
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: c3e0aca (expected: 3f2f8b84a77f73d38244dd690525642a72156c64)
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-34-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 15.66GiB
Name: local
ID: GY6Y:WUG6:KEUO:J23J:HCXM:ZKM3:6NN7:E37B:RLC5:GTHA:FTFP:IBN6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 18
 Goroutines: 26
 System Time: 2017-12-04T20:36:41.13946819+08:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

No kubectl


Packages

Have dpkg

Output of "dpkg -l|egrep "(cc-proxy|cc-runtime|cc-shim|clear-containers-image|linux-container|qemu-lite|qemu-system-x86|cc-oci-runtime)"":

ii  cc-proxy                                                    3.0.9+git.4d8aed1-14                                     amd64        
ii  cc-runtime                                                  3.0.9+git.c3e0aca-14                                     amd64        
ii  cc-runtime-bin                                              3.0.9+git.c3e0aca-14                                     amd64        
ii  cc-runtime-config                                           3.0.9+git.c3e0aca-14                                     amd64        
ii  cc-shim                                                     3.0.9+git.838039b-14                                     amd64        
ii  clear-containers-image                                      19350-40                                                 amd64        Clear containers image
ii  linux-container                                             4.9.60-80                                                amd64        linux kernel optimised for container-like workloads.
ii  qemu-lite                                                   2.7.1+git.d4a337fe91-9                                   amd64        linux kernel optimised for container-like workloads.
ii  qemu-system-x86                                             1:2.5+dfsg-5ubuntu10.16                                  amd64        QEMU full system emulation binaries (x86)

Have rpm

Output of "rpm -qa|egrep "(cc-proxy|cc-runtime|cc-shim|clear-containers-image|linux-container|qemu-lite|qemu-system-x86|cc-oci-runtime)"":


sboeuf commented 6 years ago

Hi @sidealice, it looks like /dev/kvm is already in use on your system. Clear Containers relies on /dev/kvm (via QEMU) for virtualization, so make sure VirtualBox or any other hypervisor is not using it when you start Clear Containers.
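
One way to confirm the conflict (a diagnostic sketch; lsof and lsmod are standard Linux tools, not part of Clear Containers):

# List any processes currently holding /dev/kvm open
# (for example, another QEMU instance).
sudo lsof /dev/kvm

# VirtualBox claims the virtualization hardware through its own
# vboxdrv kernel module rather than /dev/kvm, so also check
# which of these modules are loaded.
lsmod | grep -e vboxdrv -e kvm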

sidealice commented 6 years ago

Thanks for your help, that solved my problem!

It works after I shut down the Vagrant (VirtualBox) VM.
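
For anyone hitting the same conflict, the workaround as a sketch (assuming the VM is managed by Vagrant from the current project directory):

# Stop the Vagrant-managed VirtualBox VM so it releases the
# virtualization hardware, then retry the container.
vagrant halt

sudo docker run -ti busybox sh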

jodh-intel commented 6 years ago

Hi @sidealice - we've added a new feature to cc-check so that if you run sudo cc-runtime cc-check, it will now state "another hypervisor running" for the scenario you reported. See #849 for a full example.