kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

memory hotplug failed on SUSE #1228

Closed: yuntongjin closed this issue 3 years ago

yuntongjin commented 5 years ago

Description of problem

Set default_memory = 16384

The Kata runtime fails to create the container. From the runtime log:

msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=

msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container

Set default_memory = 16384 in /usr/share/defaults/kata-containers/configuration.toml, then create a container with the Kata runtime.

Expected result

The container is created successfully with 16384 MiB of memory available to the sandbox.

Actual result

Container creation fails with the memory hotplug errors shown above ("Unable to hotplug 16384 MiB memory ..." and "no free slots available").


The output of the kata-collect-data.sh script follows.

Meta details

Running kata-collect-data.sh version 1.4.0 (commit 21f0059) at 2019-01-14.14:32:37.443546011+0800.


Runtime is /usr/bin/kata-runtime.

kata-env

Output of "/usr/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.19"

[Runtime]
  Debug = true
  DisableNewNetNs = false
  Path = "/usr/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.4.0"
    Commit = "21f0059"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.11.0\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = true
  UseVSock = false

[Image]
  Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.4.0_agent_0ff30063f7e.img"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-4.14.67.17-11.1.container"
  Parameters = "agent.log=debug initcall_debug agent.log=debug initcall_debug"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.4.0-e1856c2"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = true

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.4.0-b02868b"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = true

[Agent]
  Type = "kata"

[Host]
  Kernel = "4.4.103-6.38-default"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = false
  [Host.Distro]
    Name = "SLES"
    Version = "12.3"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU E5-2658 v4 @ 2.30GHz"

[Netmon]
  Version = "kata-netmon version 1.4.0"
  Path = "/usr/libexec/kata-containers/kata-netmon"
  Debug = true
  Enable = false

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found

Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = " agent.log=debug initcall_debug agent.log=debug initcall_debug"

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 16384
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or 
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Default false
#enable_template = true

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used when customize network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="macvtap"

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

KSM throttler

version

Output of "/usr/lib64/kata-ksm-throttler/kata-ksm-throttler --version":

kata-ksm-throttler version 1.4.0-1212de2

Output of "/usr/lib/systemd/system/kata-ksm-throttler.service --version":

kata-collect-data.sh: line 168: /usr/lib/systemd/system/kata-ksm-throttler.service: Permission denied

systemd service

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2018-11-23T07:21:30.662044613+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "26450"
  packages:
    default:
      - "iptables-bin"
      - "libudev0-shim"
      - "systemd"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.4.0-0ff30063f7e71eb0f48d60c21156cd18b8a58024"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2019-01-14T13:51:08.563734423+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=b1f78429ab392861433146f477b47d87d7fbd96355f860c4928a5e1d74532ab2 error="QMP command failed" name=kata-runtime pid=47236 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T13:51:08.723811023+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=b1f78429ab392861433146f477b47d87d7fbd96355f860c4928a5e1d74532ab2 name=kata-runtime pid=47236 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T13:54:19.686995276+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=4ab7bc86fc60548b1547d408a5f827eb8742a6580fba3bedae4355a552f3bfac name=kata-runtime pid=49571 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T13:55:08.723654305+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=c4a7793b1df12f4f98610ca0ed0c1cc0e5f928dd4fe395e4707b4d769a28e262 name=kata-runtime pid=50198 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T13:56:18.563725841+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=1a55cabfd2644bba02a3dd4f68cc001349731719f35dd56236bb66df60f0dae2 name=kata-runtime pid=51072 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T13:56:18.563825096+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=1a55cabfd2644bba02a3dd4f68cc001349731719f35dd56236bb66df60f0dae2 name=kata-runtime pid=51072 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T13:56:18.564696594+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=1a55cabfd2644bba02a3dd4f68cc001349731719f35dd56236bb66df60f0dae2 error="QMP command failed" name=kata-runtime pid=51072 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T13:56:18.691763287+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=1a55cabfd2644bba02a3dd4f68cc001349731719f35dd56236bb66df60f0dae2 name=kata-runtime pid=51072 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T13:59:33.664382531+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=a4a78e39d6b81cae4df7933481ae509af3b8bbb8968a66fe5b1687c6d454e23f name=kata-runtime pid=53951 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:00:09.688298079+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=9c1796f71d055cffa103155fcadd026d507cc7066acdc2bb8405740bf6aee330 name=kata-runtime pid=54783 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:01:32.562218019+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=72b9561cff287f0a32b145ba64670b9ecf6200c7882c6d217ff0861679fd3016 name=kata-runtime pid=56367 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:01:32.562317334+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=72b9561cff287f0a32b145ba64670b9ecf6200c7882c6d217ff0861679fd3016 name=kata-runtime pid=56367 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:01:32.563129722+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=72b9561cff287f0a32b145ba64670b9ecf6200c7882c6d217ff0861679fd3016 error="QMP command failed" name=kata-runtime pid=56367 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:01:32.731809929+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=72b9561cff287f0a32b145ba64670b9ecf6200c7882c6d217ff0861679fd3016 name=kata-runtime pid=56367 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:04:45.713430536+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=03bc26d4090f31b9134cc75d1e6220740520115a4a9603c34ffafdc643bbfd49 name=kata-runtime pid=1963 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:05:18.695312918+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=82e29edbcc781709e86e9f31d840756171e46fa942841e4f7f327764836481c0 name=kata-runtime pid=2467 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:06:35.560102511+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=b408f554991f9cac6991d7b7fc12ad7416d52b246c20af3cbea3c14aee46c3c0 name=kata-runtime pid=3413 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:06:35.560161692+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=b408f554991f9cac6991d7b7fc12ad7416d52b246c20af3cbea3c14aee46c3c0 name=kata-runtime pid=3413 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:06:35.562238037+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=b408f554991f9cac6991d7b7fc12ad7416d52b246c20af3cbea3c14aee46c3c0 error="QMP command failed" name=kata-runtime pid=3413 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:06:35.767285748+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=b408f554991f9cac6991d7b7fc12ad7416d52b246c20af3cbea3c14aee46c3c0 name=kata-runtime pid=3413 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:09:54.714439352+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=165fc66244de879c8b1f07f6693ec984ee883be74c2de52e9a83f754fa212b29 name=kata-runtime pid=5664 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:10:24.736450177+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=f9315f74e06d9866caddcffcedf51cf5ebec4b9fff6c401d53ffe6a31a999a1c name=kata-runtime pid=6285 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:11:41.561934346+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=7ff6bc9e648b7841dc3b3d5347cf36a4754b8440325ef5d98c242bc640f507eb name=kata-runtime pid=7165 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:11:41.562006167+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=7ff6bc9e648b7841dc3b3d5347cf36a4754b8440325ef5d98c242bc640f507eb name=kata-runtime pid=7165 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:11:41.562825438+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=7ff6bc9e648b7841dc3b3d5347cf36a4754b8440325ef5d98c242bc640f507eb error="QMP command failed" name=kata-runtime pid=7165 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:11:41.730790612+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=7ff6bc9e648b7841dc3b3d5347cf36a4754b8440325ef5d98c242bc640f507eb name=kata-runtime pid=7165 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:15:04.682407424+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=66136efacccc47939dbbbd0a2971ff08f0d1d9d265827e27b5b4f44366d01e9f name=kata-runtime pid=11274 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:15:33.679344413+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=7a82907711fe2e1b1f0fae5ad9502a4b053505d72965ede090ed52c85a1a3ebc name=kata-runtime pid=11479 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:16:45.566584548+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=aeae48e5e893c2d50c621dbb718a1ac1b94b8228a7be8f6ff435007ee876ab40 name=kata-runtime pid=12417 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:16:45.566684278+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=aeae48e5e893c2d50c621dbb718a1ac1b94b8228a7be8f6ff435007ee876ab40 name=kata-runtime pid=12417 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:16:45.567536692+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=aeae48e5e893c2d50c621dbb718a1ac1b94b8228a7be8f6ff435007ee876ab40 error="QMP command failed" name=kata-runtime pid=12417 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:16:45.705797187+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=aeae48e5e893c2d50c621dbb718a1ac1b94b8228a7be8f6ff435007ee876ab40 name=kata-runtime pid=12417 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:20:08.788450871+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=2aa0622bd1cbae52cce06a4180ab62ff8f42d37f8e32217481f32310141d9d40 name=kata-runtime pid=14990 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:20:36.754372291+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=1a713f920ae1be931051492afea9054bb226e6696cb6bdc1d3740eb2e598d4d0 name=kata-runtime pid=15359 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:21:59.565382991+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=22b332da23bcf176f8eb7b6a68f741cfe2cb9d6148096234567a847cc6a932df name=kata-runtime pid=16217 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:21:59.565475992+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=22b332da23bcf176f8eb7b6a68f741cfe2cb9d6148096234567a847cc6a932df name=kata-runtime pid=16217 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:21:59.566313676+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=22b332da23bcf176f8eb7b6a68f741cfe2cb9d6148096234567a847cc6a932df error="QMP command failed" name=kata-runtime pid=16217 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:21:59.751721002+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=22b332da23bcf176f8eb7b6a68f741cfe2cb9d6148096234567a847cc6a932df name=kata-runtime pid=16217 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:25:10.659401513+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=6227af959fb574cbf62dbf29b63d7496ad9d05bc7d23e5a61f963897591d53a7 name=kata-runtime pid=19136 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:25:44.711387723+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=015fcd0c3583e697cf30aee86096a58d43fc71efbed6439286d5e11290d1921e name=kata-runtime pid=19984 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:27:02.563872899+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=f7a4af927e40d14fdd4b451f1234f2d8e04cd44fdc308372f92d9265f022e42e name=kata-runtime pid=21975 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:27:02.563973634+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=f7a4af927e40d14fdd4b451f1234f2d8e04cd44fdc308372f92d9265f022e42e name=kata-runtime pid=21975 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:27:02.56481543+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=f7a4af927e40d14fdd4b451f1234f2d8e04cd44fdc308372f92d9265f022e42e error="QMP command failed" name=kata-runtime pid=21975 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:27:02.673841363+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=f7a4af927e40d14fdd4b451f1234f2d8e04cd44fdc308372f92d9265f022e42e name=kata-runtime pid=21975 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime
time="2019-01-14T14:30:17.714497969+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=d52cdd089dd7e2b91cbb2b4d4341a6c46dfd50aac493855a6def6efedfaaf616 name=kata-runtime pid=24604 sandbox=fa16d7139b430aa16cb228f2b9c7ff8ebef0f47ed93b8130daeb805fa552d6f1 source=runtime
time="2019-01-14T14:30:53.759372579+08:00" level=error msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB" arch=amd64 command=create container=7d248e767f5597461fd163bab0d97cd1776e04a0ecfd7f6940d1d4055d1eb0b3 name=kata-runtime pid=25073 sandbox=65ee0b13190bffcf3e313eda491d153e44b4ab3c5fec2e7ff5183452a142f5e1 source=runtime
time="2019-01-14T14:32:14.566042234+08:00" level=info msg="{\"error\": {\"class\": \"GenericError\", \"desc\": \"no free slots available\"}}" arch=amd64 command=create container=2943d0eb3508a26fafff282cf5ec27dc75edf2340a6a53e22c55f6729f212dde name=kata-runtime pid=26123 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:32:14.566141726+08:00" level=error msg="Unable to hotplug memory device: QMP command failed" arch=amd64 command=create container=2943d0eb3508a26fafff282cf5ec27dc75edf2340a6a53e22c55f6729f212dde name=kata-runtime pid=26123 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qmp
time="2019-01-14T14:32:14.566957709+08:00" level=error msg="hotplug memory" arch=amd64 command=create container=2943d0eb3508a26fafff282cf5ec27dc75edf2340a6a53e22c55f6729f212dde error="QMP command failed" name=kata-runtime pid=26123 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=virtcontainers subsystem=qemu
time="2019-01-14T14:32:14.733876921+08:00" level=error msg="QMP command failed" arch=amd64 command=create container=2943d0eb3508a26fafff282cf5ec27dc75edf2340a6a53e22c55f6729f212dde name=kata-runtime pid=26123 sandbox=005f172716152f819cc9ea463fb5a6f672232dc030a2117937564277dc75aeb4 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2018-12-25T11:22:43.215982421+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/34cb0b4f5c07106b0f6cfca70cc4644c8927c7a04da4dbbdd9d31cd8475c5df3/kata.sock: use of closed network connection" name=kata-proxy pid=37205 sandbox=34cb0b4f5c07106b0f6cfca70cc4644c8927c7a04da4dbbdd9d31cd8475c5df3 source=proxy
time="2018-12-25T11:22:47.241907687+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/0440d9a9f05fc4be338fcbeeafebeb4980632a98eb10a3770c502e123c29594a/kata.sock: use of closed network connection" name=kata-proxy pid=44811 sandbox=0440d9a9f05fc4be338fcbeeafebeb4980632a98eb10a3770c502e123c29594a source=proxy
time="2018-12-25T11:22:55.04942771+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a1ee54d5bfe0ef07fa36034c81ac6e5e1fdf2e0a1bb6bc64c138d0705adb4180/kata.sock: use of closed network connection" name=kata-proxy pid=44822 sandbox=a1ee54d5bfe0ef07fa36034c81ac6e5e1fdf2e0a1bb6bc64c138d0705adb4180 source=proxy
time="2018-12-25T11:23:03.165171255+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/4c84b4552ebe54a1c4a28638bbee507550d5b1c6f85f215137a6c1a27c3a62cd/kata.sock: use of closed network connection" name=kata-proxy pid=45271 sandbox=4c84b4552ebe54a1c4a28638bbee507550d5b1c6f85f215137a6c1a27c3a62cd source=proxy
time="2018-12-25T11:23:13.253156672+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/39187bc76f4bfb1574a79f5917616e0c55f38e6f9b10d270b81c297b96a5977d/proxy.sock: use of closed network connection" name=kata-proxy pid=45472 sandbox=39187bc76f4bfb1574a79f5917616e0c55f38e6f9b10d270b81c297b96a5977d source=proxy
time="2018-12-25T11:23:17.365396037+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/ae8736dc6addecaa9289a5bff6eb8f4bee58b01c55b3700d6867b29705240423/kata.sock: use of closed network connection" name=kata-proxy pid=45665 sandbox=ae8736dc6addecaa9289a5bff6eb8f4bee58b01c55b3700d6867b29705240423 source=proxy
time="2018-12-25T11:23:23.16511346+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/864685fe26838939494966dfb4bdece902ac48a566dfffc561e4264113b3b296/kata.sock: use of closed network connection" name=kata-proxy pid=45874 sandbox=864685fe26838939494966dfb4bdece902ac48a566dfffc561e4264113b3b296 source=proxy
time="2018-12-25T11:23:31.213514062+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/709f721c47e0056df7f7135a474bb286bbee34253139a66e4681f1995d0052b7/proxy.sock: use of closed network connection" name=kata-proxy pid=46190 sandbox=709f721c47e0056df7f7135a474bb286bbee34253139a66e4681f1995d0052b7 source=proxy
time="2018-12-25T11:23:35.286957437+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/9674c4eb06154eff1a038c2d04bec0534e02e334b4807382600b87d2ea897e75/proxy.sock: use of closed network connection" name=kata-proxy pid=46421 sandbox=9674c4eb06154eff1a038c2d04bec0534e02e334b4807382600b87d2ea897e75 source=proxy
time="2018-12-25T11:23:43.167333662+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/db54eb8821f3d8b546ed6c9c02bed4e83fb65dc4c14051c704c755c636a48acb/kata.sock: use of closed network connection" name=kata-proxy pid=46674 sandbox=db54eb8821f3d8b546ed6c9c02bed4e83fb65dc4c14051c704c755c636a48acb source=proxy
time="2018-12-25T11:23:47.086546498+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/d281bc1fcbe429fb85d85e5c9f4392b249659c48b7d295b386b85d96ed4f8fdf/proxy.sock: use of closed network connection" name=kata-proxy pid=47439 sandbox=d281bc1fcbe429fb85d85e5c9f4392b249659c48b7d295b386b85d96ed4f8fdf source=proxy
time="2018-12-25T11:23:53.067515513+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f67cf5ace3e86e102021ae54791279fee9ead9ef7d570908397819dcc8545aa8/kata.sock: use of closed network connection" name=kata-proxy pid=47410 sandbox=f67cf5ace3e86e102021ae54791279fee9ead9ef7d570908397819dcc8545aa8 source=proxy
time="2018-12-25T11:24:01.143639552+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/2577d41b4502c7c3c06aec6fa27333b804f8ad360659970599860220e340a22a/proxy.sock: use of closed network connection" name=kata-proxy pid=47980 sandbox=2577d41b4502c7c3c06aec6fa27333b804f8ad360659970599860220e340a22a source=proxy
time="2018-12-25T11:24:13.178578229+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/6ddcb613cdf21d3fda56a1839e2659a0151b6b93252f35a3cff9f0b5b246fa66/proxy.sock: use of closed network connection" name=kata-proxy pid=48241 sandbox=6ddcb613cdf21d3fda56a1839e2659a0151b6b93252f35a3cff9f0b5b246fa66 source=proxy
time="2018-12-25T11:26:14.351805888+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/d956450e9901225f89f011c90a22dec60aacfd0551b412f8e405c68f13277e0d/proxy.sock: use of closed network connection" name=kata-proxy pid=25229 sandbox=d956450e9901225f89f011c90a22dec60aacfd0551b412f8e405c68f13277e0d source=proxy
time="2018-12-25T12:41:13.031024013+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f4acb721b13076e0c00d79f35071962381d729b64c694f02259731dbd91cd210/kata.sock: use of closed network connection" name=kata-proxy pid=27875 sandbox=f4acb721b13076e0c00d79f35071962381d729b64c694f02259731dbd91cd210 source=proxy
time="2019-01-03T09:55:57.171182302+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/047dc95bb198f5d81133f609d5c8e73b49999da85a0163e9d6cbbd11735ce688/kata.sock: use of closed network connection" name=kata-proxy pid=14114 sandbox=047dc95bb198f5d81133f609d5c8e73b49999da85a0163e9d6cbbd11735ce688 source=proxy
time="2019-01-03T09:56:07.078636526+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/c1228f7e81aea20e3b09c99d091a61d44055a8706050c91f04e7e77c52daa019/proxy.sock: use of closed network connection" name=kata-proxy pid=53426 sandbox=c1228f7e81aea20e3b09c99d091a61d44055a8706050c91f04e7e77c52daa019 source=proxy
time="2019-01-03T10:26:10.302743111+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/bafec41a30414deac5628eebbca4a78a6f2275de635c01007a964437c99fc0de/proxy.sock: use of closed network connection" name=kata-proxy pid=56270 sandbox=bafec41a30414deac5628eebbca4a78a6f2275de635c01007a964437c99fc0de source=proxy
time="2019-01-03T11:18:14.877101747+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/5098fe8207c4cfb2112a57421713c318ab22b76d0f133ae8423e78560e924e6f/kata.sock: use of closed network connection" name=kata-proxy pid=55887 sandbox=5098fe8207c4cfb2112a57421713c318ab22b76d0f133ae8423e78560e924e6f source=proxy
time="2019-01-03T11:18:38.552573976+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a69f43b1dfa8055fdc11d093787ea3f7f95cc76f94043dfb6e40efce553641e1/kata.sock: use of closed network connection" name=kata-proxy pid=6966 sandbox=a69f43b1dfa8055fdc11d093787ea3f7f95cc76f94043dfb6e40efce553641e1 source=proxy
time="2019-01-03T11:19:12.516898856+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/88150c8932d4f316769775d0e422e5265dac3693330f2cdac425f157c6bf8129/proxy.sock: use of closed network connection" name=kata-proxy pid=45980 sandbox=88150c8932d4f316769775d0e422e5265dac3693330f2cdac425f157c6bf8129 source=proxy
time="2019-01-03T11:19:13.433499579+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/db232bae11ce21b1a05ce0fb1991bc82da3865d3dc557f9b994824647f1c5ab3/kata.sock: use of closed network connection" name=kata-proxy pid=46385 sandbox=db232bae11ce21b1a05ce0fb1991bc82da3865d3dc557f9b994824647f1c5ab3 source=proxy
time="2019-01-03T11:19:14.500796925+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/d41f17e923039973ef8da5672e380befa98b25fc4f65b8b52ba14ccd2a69f13c/kata.sock: use of closed network connection" name=kata-proxy pid=47248 sandbox=d41f17e923039973ef8da5672e380befa98b25fc4f65b8b52ba14ccd2a69f13c source=proxy
time="2019-01-03T11:19:14.947969721+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/f0e0202d900a2e3c4bc47a8a7161990c8700653e2977b315096dd7f59cd4f44a/proxy.sock: use of closed network connection" name=kata-proxy pid=51286 sandbox=f0e0202d900a2e3c4bc47a8a7161990c8700653e2977b315096dd7f59cd4f44a source=proxy
time="2019-01-03T11:19:14.948297338+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/b51346b99b940d4d316f821805a6b105cd925baef3dee64dbaf07239e4eec73f/kata.sock: use of closed network connection" name=kata-proxy pid=46676 sandbox=b51346b99b940d4d316f821805a6b105cd925baef3dee64dbaf07239e4eec73f source=proxy
time="2019-01-03T11:19:14.96380586+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/cf296407383ead4367b66309f266a8187079625516b99afb5bda9df438e1b625/proxy.sock: use of closed network connection" name=kata-proxy pid=46968 sandbox=cf296407383ead4367b66309f266a8187079625516b99afb5bda9df438e1b625 source=proxy
time="2019-01-03T11:21:01.768075724+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a62e81a118a827de94821e546ba1c528a8eda498c1fb120972062336c5e2c1cc/kata.sock: use of closed network connection" name=kata-proxy pid=54246 sandbox=a62e81a118a827de94821e546ba1c528a8eda498c1fb120972062336c5e2c1cc source=proxy
time="2019-01-03T11:21:02.390482173+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/e7bc0ff98b49d99f379c929c96ee6991a928048143e3edb748b1b092362b7ecf/proxy.sock: use of closed network connection" name=kata-proxy pid=54472 sandbox=e7bc0ff98b49d99f379c929c96ee6991a928048143e3edb748b1b092362b7ecf source=proxy
time="2019-01-03T11:21:02.567263674+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/e0327e0ab2a3b747801d99830695fd365d508ab30ab24d209c9bd85658b1e343/proxy.sock: use of closed network connection" name=kata-proxy pid=54727 sandbox=e0327e0ab2a3b747801d99830695fd365d508ab30ab24d209c9bd85658b1e343 source=proxy
time="2019-01-03T11:21:03.142807559+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/01f74a47d6f12a39c921981a182e6ca81e3231c05f2dbbc9f5c360900403655e/proxy.sock: use of closed network connection" name=kata-proxy pid=54960 sandbox=01f74a47d6f12a39c921981a182e6ca81e3231c05f2dbbc9f5c360900403655e source=proxy
time="2019-01-03T11:21:03.799095331+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/4276767ffce67d198e6d1290d9c1501f3460b08aa6758668d186dafd0092d7a7/kata.sock: use of closed network connection" name=kata-proxy pid=55493 sandbox=4276767ffce67d198e6d1290d9c1501f3460b08aa6758668d186dafd0092d7a7 source=proxy
time="2019-01-03T11:21:03.820432194+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/a28a8c26983738fa2820a318fe0c1ed6e8bc1f4280fa64cd3110a1562799fd62/kata.sock: use of closed network connection" name=kata-proxy pid=55222 sandbox=a28a8c26983738fa2820a318fe0c1ed6e8bc1f4280fa64cd3110a1562799fd62 source=proxy
time="2019-01-03T11:21:03.87652263+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/379a0699518da22f17d4a3df2bb83680700184ee12eee3fe2e3c275fb1a71341/proxy.sock: use of closed network connection" name=kata-proxy pid=55709 sandbox=379a0699518da22f17d4a3df2bb83680700184ee12eee3fe2e3c275fb1a71341 source=proxy
time="2019-01-03T11:21:47.621195447+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/3bdc04d37cc2fb38523c009177d5da7d3f309f3b68ad4a3793fd3120aefae6d0/proxy.sock: use of closed network connection" name=kata-proxy pid=57046 sandbox=3bdc04d37cc2fb38523c009177d5da7d3f309f3b68ad4a3793fd3120aefae6d0 source=proxy
time="2019-01-03T11:21:47.621218745+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/3c22058ab1f7e9d118abd727c14bfc19126e2ddb4df85356e68c693e793553dc/kata.sock: use of closed network connection" name=kata-proxy pid=57301 sandbox=3c22058ab1f7e9d118abd727c14bfc19126e2ddb4df85356e68c693e793553dc source=proxy
time="2019-01-03T11:21:48.131989112+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/f31cececd11f524d70d069dd4b434b7531ed220be7292681c260fba09ebfe21c/kata.sock: use of closed network connection" name=kata-proxy pid=2557 sandbox=f31cececd11f524d70d069dd4b434b7531ed220be7292681c260fba09ebfe21c source=proxy
time="2019-01-03T11:21:48.175206505+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/274cef733db9c078974a5122c3820475ac74ca191a907b64c4386162195cc340/kata.sock: use of closed network connection" name=kata-proxy pid=2817 sandbox=274cef733db9c078974a5122c3820475ac74ca191a907b64c4386162195cc340 source=proxy
time="2019-01-03T11:21:48.56149678+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/628417ef10be9f89d8c751c4b0f40050f6268b6b5212e2e693197535e9bca6b3/proxy.sock: use of closed network connection" name=kata-proxy pid=3068 sandbox=628417ef10be9f89d8c751c4b0f40050f6268b6b5212e2e693197535e9bca6b3 source=proxy
time="2019-01-03T11:21:48.940009123+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/971ef1944b2eaecb6c502a0856de4bc19568bf68306bf89bab78ed99c0663e72/proxy.sock: use of closed network connection" name=kata-proxy pid=3726 sandbox=971ef1944b2eaecb6c502a0856de4bc19568bf68306bf89bab78ed99c0663e72 source=proxy
time="2019-01-03T11:21:49.232221903+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/c33d8d39eb38737172b34143b4d9fed9b5a84aa8aea4f32a967cc96c1400ae2b/proxy.sock: use of closed network connection" name=kata-proxy pid=3500 sandbox=c33d8d39eb38737172b34143b4d9fed9b5a84aa8aea4f32a967cc96c1400ae2b source=proxy
time="2019-01-03T11:26:24.460379057+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/c4bccee5fbe24a5718afd1dae9ffd8a63ccd8553df0f7d5034b5efb22bc1410a/proxy.sock: use of closed network connection" name=kata-proxy pid=4253 sandbox=c4bccee5fbe24a5718afd1dae9ffd8a63ccd8553df0f7d5034b5efb22bc1410a source=proxy
time="2019-01-03T11:26:24.846706379+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/19b66f785a5dee0961e89991f660ffb69d2bd42288e46469c90d9d41c05f56e8/kata.sock: use of closed network connection" name=kata-proxy pid=4968 sandbox=19b66f785a5dee0961e89991f660ffb69d2bd42288e46469c90d9d41c05f56e8 source=proxy
time="2019-01-03T11:26:26.042100432+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/e74e5ed2f215f36223e2aafe189806812a7d2ecc89cc945a00c4498a333d1394/proxy.sock: use of closed network connection" name=kata-proxy pid=6465 sandbox=e74e5ed2f215f36223e2aafe189806812a7d2ecc89cc945a00c4498a333d1394 source=proxy
time="2019-01-03T11:26:26.054706075+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/e1fd51a3d82150cbd89f85491301a8ab179f5bf3f15c096f1456113cb88f9284/kata.sock: use of closed network connection" name=kata-proxy pid=6214 sandbox=e1fd51a3d82150cbd89f85491301a8ab179f5bf3f15c096f1456113cb88f9284 source=proxy
time="2019-01-03T11:26:26.100399768+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/45d2de3b612e22b0fd1e32857cc58901803abef9a8eab112db48a6393c443f41/kata.sock: use of closed network connection" name=kata-proxy pid=6754 sandbox=45d2de3b612e22b0fd1e32857cc58901803abef9a8eab112db48a6393c443f41 source=proxy
time="2019-01-03T11:26:26.829621744+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/26f6a7b880889a5bd641cb6651550b50e18a090313fa8b119e8fb7d802b865d4/proxy.sock: use of closed network connection" name=kata-proxy pid=7282 sandbox=26f6a7b880889a5bd641cb6651550b50e18a090313fa8b119e8fb7d802b865d4 source=proxy
time="2019-01-03T11:26:26.829715333+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/26f6a7b880889a5bd641cb6651550b50e18a090313fa8b119e8fb7d802b865d4/kata.sock: use of closed network connection" name=kata-proxy pid=7282 sandbox=26f6a7b880889a5bd641cb6651550b50e18a090313fa8b119e8fb7d802b865d4 source=proxy
time="2019-01-03T11:26:27.8660207+08:00" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/d74d2a10d0b62bc1624d57c1c4b8728f810653f6ebf4550c326143ceb3242f32/kata.sock: use of closed network connection" name=kata-proxy pid=7550 sandbox=d74d2a10d0b62bc1624d57c1c4b8728f810653f6ebf4550c326143ceb3242f32 source=proxy
time="2019-01-11T16:33:06.854545347+08:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/da731faaafac6c7d9075e2e705789660cf78cd2b6694f56ee6ae977ceac03070/proxy.sock: use of closed network connection" name=kata-proxy pid=32175 sandbox=da731faaafac6c7d9075e2e705789660cf78cd2b6694f56ee6ae977ceac03070 source=proxy

Shim logs

No recent shim problems found in system journal.

Throttler logs

No recent throttler problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:      17.04.0-ce
 API version:  1.28
 Go version:   go1.7.5
 Git commit:   78d1802
 Built:        Tue May 30 18:21:18 2017
 OS/Arch:      linux/amd64
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "docker info":

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "systemctl show docker":

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=10min
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestampMonotonic=0
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Mon 2018-12-17 14:40:20 CST
ExecMainStartTimestampMonotonic=3139553112
ExecMainExitTimestamp=Mon 2018-12-17 22:07:00 CST
ExecMainExitTimestampMonotonic=29939263627
ExecMainPID=25896
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --containerd /run/containerd/containerd.sock --add-runtime oci=/usr/bin/docker-runc $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPost={ path=/usr/lib/docker/docker_service_helper.sh ; argv[]=/usr/lib/docker/docker_service_helper.sh wait ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=18446744073709551615
CPUUsageNSec=18446744073709551615
TasksCurrent=18446744073709551615
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=yes
TasksMax=18446744073709551615
EnvironmentFile=/etc/sysconfig/docker (ignore_errors=no)
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=18446744073709551615
LimitAS=18446744073709551615
LimitNPROC=18446744073709551615
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=510994
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=system.slice sysinit.target containerd.socket containerd.service
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=system.slice sysinit.target systemd-journald.socket containerd.socket network.target basic.target containerd.service
Documentation=http://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/usr/lib/systemd/system/docker.service
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Mon 2018-12-17 14:40:20 CST
InactiveExitTimestampMonotonic=3139553138
ActiveEnterTimestamp=Mon 2018-12-17 14:40:21 CST
ActiveEnterTimestampMonotonic=3140033640
ActiveExitTimestamp=Mon 2018-12-17 22:07:00 CST
ActiveExitTimestampMonotonic=29939255674
InactiveEnterTimestamp=Mon 2018-12-17 22:07:00 CST
InactiveEnterTimestampMonotonic=29939263701
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobRunningTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2018-12-17 14:40:20 CST
ConditionTimestampMonotonic=3139552287
AssertTimestamp=Mon 2018-12-17 14:40:20 CST
AssertTimestampMonotonic=3139552287
Transient=no
NetClass=0

Have kubectl

Kubernetes

Output of "kubectl version":

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T19:01:12Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Output of "kubectl config view":

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

Output of "systemctl show kubelet":

Type=simple
Restart=on-failure
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=10min
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestamp=Wed 2019-01-09 09:05:42 CST
WatchdogTimestampMonotonic=1970261036141
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=53195
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Wed 2019-01-09 09:05:42 CST
ExecMainStartTimestampMonotonic=1970261036100
ExecMainExitTimestampMonotonic=0
ExecMainPID=53195
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/approot1/paas/kube/bin/kubelet ; argv[]=/approot1/paas/kube/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_KUBECONFIG $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_ARGS ; ignore_errors=no ; start_time=[Wed 2019-01-09 09:05:42 CST] ; stop_time=[n/a] ; pid=53195 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=207298560
CPUUsageNSec=44169912952695
TasksCurrent=91
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=yes
TasksMax=12288
EnvironmentFile=/approot1/paas/kube/conf/config (ignore_errors=yes)
EnvironmentFile=/approot1/paas/kube/conf/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=4096
LimitAS=18446744073709551615
LimitNPROC=510994
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=510994
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
WorkingDirectory=/approot1/paas/kube/kubelet
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=system.slice sysinit.target containerd.service -.mount
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=sysinit.target systemd-journald.socket system.slice -.mount containerd.service basic.target
RequiresMountsFor=/approot1/paas/kube/kubelet
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kubelet https://kubernetes.io/docs/reference/generated/kubelet/
Description=Kubernetes Kubelet Server
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/usr/lib/systemd/system/kubelet.service
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Wed 2019-01-09 09:05:42 CST
InactiveExitTimestampMonotonic=1970261036142
ActiveEnterTimestamp=Wed 2019-01-09 09:05:42 CST
ActiveEnterTimestampMonotonic=1970261036142
ActiveExitTimestamp=Wed 2019-01-09 09:05:35 CST
ActiveExitTimestampMonotonic=1970254454398
InactiveEnterTimestamp=Wed 2019-01-09 09:05:42 CST
InactiveEnterTimestampMonotonic=1970261016597
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobRunningTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Wed 2019-01-09 09:05:42 CST
ConditionTimestampMonotonic=1970261016620
AssertTimestamp=Wed 2019-01-09 09:05:42 CST
AssertTimestampMonotonic=1970261016620
Transient=no
NetClass=0

No crio


Packages

No dpkg
Have rpm

Output of "rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":

qemu-lite-data-2.11.0+git.f886228056-13.1.x86_64
kata-proxy-bin-1.4.0+git.e1856c2-11.1.x86_64
kata-containers-image-1.4.0-10.1.x86_64
qemu-vanilla-2.11.2+git.0982a56a55-13.1.x86_64
kata-linux-container-4.14.67.17-11.1.x86_64
qemu-lite-bin-2.11.0+git.f886228056-13.1.x86_64
qemu-vanilla-data-2.11.2+git.0982a56a55-13.1.x86_64
kata-shim-bin-1.4.0+git.b02868b-9.1.x86_64
qemu-lite-2.11.0+git.f886228056-13.1.x86_64
kata-ksm-throttler-1.4.0.git+1212de2-12.1.x86_64
kata-shim-1.4.0+git.b02868b-9.1.x86_64
kata-runtime-1.4.0+git.21f0059-15.1.x86_64
qemu-vanilla-bin-2.11.2+git.0982a56a55-13.1.x86_64
kata-proxy-1.4.0+git.e1856c2-11.1.x86_64

yuntongjin commented 5 years ago

The root cause is that the request exceeds the maximum amount of memory. From virtcontainers/qemu.go:

if currentMemory+memDev.sizeMB > int(maxMem) {
	// Fixme: return a typed error
	return 0, fmt.Errorf("Unable to hotplug %d MiB memory, the SB has %d MiB and the maximum amount is %d MiB",
		memDev.sizeMB, currentMemory, q.config.MemorySize)
}
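For illustration, here is a minimal standalone sketch of that guard (not the actual runtime code), plugging in the values from the runtime log above; maxMem is an assumed ceiling, since the logs do not show the real one:

package main

import "fmt"

func main() {
	// Values taken from the runtime log in this issue; names follow the snippet above.
	currentMemory := 116736 // MiB already hotplugged into the sandbox (SB)
	sizeMB := 16384         // MiB requested for the new container (default_memory)
	maxMem := 133119        // assumed hotplug ceiling in MiB, for illustration only

	if currentMemory+sizeMB > maxMem {
		// The real code printed q.config.MemorySize rather than maxMem, which is
		// why the logged "maximum amount" (2048 MiB) was misleading.
		fmt.Printf("Unable to hotplug %d MiB memory, the SB has %d MiB and the maximum amount is %d MiB\n",
			sizeMB, currentMemory, 2048)
	}
}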

From kata-collect-data.sh, "systemctl show docker" reports: MemoryCurrent=18446744073709551615, MemoryLimit=18446744073709551615

The MemoryCurrent doesn't sound right.
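Note that 18446744073709551615 is 2^64 - 1, i.e. (uint64)(-1): systemd prints this sentinel when a property is unset or accounting is disabled (the same dump shows MemoryAccounting=no), so it is not a real measurement. A quick check in Go:

package main

import (
	"fmt"
	"math"
)

func main() {
	// systemd prints (uint64)(-1) as a sentinel for "unset"/"infinity".
	fmt.Println(uint64(math.MaxUint64)) // prints 18446744073709551615
}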

teawater commented 5 years ago

Is your host memory size 2048M?

yuntongjin commented 5 years ago

The SB has 116736 MiB, so the host memory should be over 116 GB.

teawater commented 5 years ago

Please run "cat /proc/meminfo" on your host and post the result.

grahamwhaley commented 5 years ago

/cc @jcvenegas - we had something like this recently where the host memory was big enough, but I think that fix got merged. I don't think we allow over-commit, and I suspect most host systems are configured by default not to allow it either.

sboeuf commented 5 years ago

@jcvenegas PTAL as this is some code you've been looking into recently.

jcvenegas commented 5 years ago

@yuntongjin I see a couple of issues here. The first message you get, msg="Unable to hotplug 16384 MiB memory, the SB has 116736 MiB and the maximum amount is 2048 MiB", was an issue fixed recently in master: the amount reported was wrong. I will backport the fix to 1.4.x.

I also see no free slots available. I wonder if your test previously hotplugged memory into the container. I ask because it seems that no free slots available reflects the real issue: to avoid increasing the memory footprint, hotplug is limited to 10 memory slots by default. You can increase it with the memory_slots option in the Kata configuration file, as sketched below.
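As a sketch of the slot accounting (an assumed model for illustration, not the runtime code): QEMU starts the VM with a fixed number of DIMM slots (memory_slots, default 10), each memory hotplug consumes one slot, and once they are used up QEMU returns the GenericError seen in the logs:

package main

import "fmt"

func main() {
	const memorySlots = 10 // default from configuration.toml (#memory_slots = 10)
	usedSlots := 0

	for i := 1; i <= memorySlots+1; i++ {
		if usedSlots >= memorySlots {
			fmt.Printf("hotplug %d: GenericError: no free slots available\n", i)
			continue
		}
		usedSlots++
		fmt.Printf("hotplug %d: ok (%d/%d slots used)\n", i, usedSlots, memorySlots)
	}
}

Raising memory_slots under [hypervisor.qemu] in configuration.toml lifts this ceiling, at the cost of a somewhat larger VM memory footprint.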

yuntongjin commented 5 years ago

@grahamwhaley do you have the issue number and commit for that fix?

yuntongjin commented 5 years ago

@jcvenegas commit a5a74f6d20f9c6c7cecee6209263b1b867b81244 fixes the error message; is there a commit that fixes the logic currentMemory+memDev.sizeMB > int(maxMem)? Meanwhile, we are testing a workaround for this issue by increasing memory_slots.

grahamwhaley commented 5 years ago

@yuntongjin - I think that fix was just to correct the message, as it was misleading. In that case, IIRC, the user was trying to allocate more memory than their machine physically had. It will be good to see if increasing memory_slots helps/fixes/changes this.