Closed: 4ePTuk closed this issue 4 years ago
Is LXD installed via snap?
Yes. I don't like those cgroup errors...
Are you running the edge snap? It should already be fixed in there if it's the issue I'm suspecting.
Here is my version:
snap --version
snap    2.43.3
snapd   2.43.3
series  16
debian  9
kernel  4.9.0-11-amd64
I think that I should update it to the latest...
Sorry, I meant if you switch to the edge channel for the LXD snap
snap install lxd --channel=edge
or
snap refresh lxd --channel=edge
Though only do this if you're not running anything important right now.
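(For reference, a sketch of how to switch back afterwards, assuming the snap was on the default latest/stable track before:
snap refresh lxd --channel=latest/stable)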
lxd --version
3.23
It's already on the stable channel: stable: 3.23 2020-03-28 (14066) 70MB -
Can you paste the output of:
snap info lxd
please
name: lxd
summary: System container manager and API
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/lxc/lxd/issues
license: unset
description: |
**LXD is a system container manager**
With LXD you can run hundreds of containers of a variety of Linux
distributions, apply resource limits, pass in directories, USB devices
or GPUs and setup any network and storage you want.
LXD containers are lightweight, secure by default and a great
alternative to running Linux virtual machines.
**Run any Linux distribution you want**
Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
A full list of available images can be found here: https://images.linuxcontainers.org
Can't find the distribution you want? It's easy to make your own images too, either using our
`distrobuilder` tool or by assembling your own image tarball by hand.
**Containers at scale**
LXD is network aware and all interactions go through a simple REST API,
making it possible to remotely interact with containers on remote
systems, copying and moving them as you wish.
Want to go big? LXD also has built-in clustering support,
letting you turn dozens of servers into one big LXD server.
**Configuration options**
Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increases logging to debug level [default=false]
- daemon.group: Group of users that can interact with LXD [default=lxd]
- ceph.builtin: Use snap-specific ceph configuration [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
Documentation: https://lxd.readthedocs.io
commands:
- lxd.benchmark
- lxd.buginfo
- lxd.check-kernel
- lxd.lxc
- lxd
- lxd.migrate
services:
lxd.activate: oneshot, enabled, inactive
lxd.daemon: simple, enabled, active
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: latest/stable
refresh-date: 2 days ago, at 07:17 MSK
channels:
stable: 3.23 2020-03-28 (14066) 70MB -
candidate: 3.23 2020-03-28 (14095) 70MB -
beta: 3.23 2020-03-28 (14096) 60MB -
edge: git-350df50 2020-03-29 (14114) 60MB -
3.23/stable: 3.23 2020-03-28 (14066) 70MB -
3.23/candidate: 3.23 2020-03-28 (14095) 70MB -
3.23/beta: ↑
3.23/edge: ↑
3.22/stable: 3.22 2020-03-18 (13901) 70MB -
3.22/candidate: 3.22 2020-03-19 (13911) 70MB -
3.22/beta: ↑
3.22/edge: ↑
3.21/stable: 3.21 2020-02-24 (13522) 69MB -
3.21/candidate: 3.21 2020-03-04 (13588) 69MB -
3.21/beta: ↑
3.21/edge: ↑
3.20/stable: 3.20 2020-02-06 (13300) 69MB -
3.20/candidate: 3.20 2020-02-06 (13300) 69MB -
3.20/beta: ↑
3.20/edge: ↑
3.19/stable: 3.19 2020-01-27 (13162) 67MB -
3.19/candidate: 3.19 2020-01-27 (13162) 67MB -
3.19/beta: ↑
3.19/edge: ↑
3.18/stable: 3.18 2019-12-02 (12631) 57MB -
3.18/candidate: 3.18 2019-12-02 (12631) 57MB -
3.18/beta: ↑
3.18/edge: ↑
3.0/stable: 3.0.4 2019-10-10 (11348) 55MB -
3.0/candidate: 3.0.4 2019-10-10 (11348) 55MB -
3.0/beta: ↑
3.0/edge: git-81b81b9 2019-10-10 (11362) 55MB -
2.0/stable: 2.0.11 2019-10-10 (8023) 28MB -
2.0/candidate: 2.0.11 2019-10-10 (8023) 28MB -
2.0/beta: ↑
2.0/edge: git-160221d 2020-01-13 (12854) 27MB -
installed: 3.23 (14066) 70MB -
OK, so can you, if your workload allows it, try:
snap refresh lxd --channel=latest/candidate
and then report back if you still have the issue?
(The daemon might need to be restarted after that with systemctl restart snap.lxd.daemon.service.)
OK, I ran snap refresh lxd --channel=latest/candidate,
restarted the daemon and rebooted the server... still the EOF error :(
services:
lxd.activate: oneshot, enabled, inactive
lxd.daemon: simple, enabled, active
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: latest/candidate
refresh-date: today at 14:00 MSK
lxc exec Blog bash
Error: EOF
lxc info Blog --show-log
Name: Blog
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/03/28 16:48 UTC
Status: Running
Type: container
Profiles: default
Pid: 1396
Ips:
eth0: inet 192.168.31.120 veth4528aad7
eth0: inet6 fe80::216:3eff:fe0e:98a veth4528aad7
lo: inet 127.0.0.1
lo: inet6 ::1
Resources:
Processes: 6
CPU usage:
CPU usage (in seconds): 0
Memory usage:
Memory (current): 32.15MB
Network usage:
eth0:
Bytes received: 257.71kB
Bytes sent: 1.70kB
Packets received: 1044
Packets sent: 16
lo:
Bytes received: 0B
Bytes sent: 0B
Packets received: 0
Packets sent: 0
Log:
lxc Blog 20200330110329.915 WARN cgfsng - cgroups/cgfsng.c:cg_unified_delegate:2915 - No such file or directory - Failed to read /sys/kernel/cgroup/delegate
lxc Blog 20200330110329.918 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1142 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.Blog"
lxc Blog 20200330110329.920 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1142 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.Blog"
lxc Blog 20200330110329.922 ERROR utils - utils.c:lxc_can_use_pidfd:1834 - Kernel does not support pidfds
lxc Blog 20200330110329.929 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1454 - No such file or directory - Failed to fchownat(17, cgroup.threads, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc Blog 20200330110358.846 WARN cgfsng - cgroups/cgfsng.c:cg_unified_delegate:2915 - No such file or directory - Failed to read /sys/kernel/cgroup/delegate
lxc Blog 20200330110358.848 ERROR cgfsng - cgroups/cgfsng.c:cgroup_attach_leaf:2087 - Permission denied - Failed to attach to unified cgroup
lxc Blog 20200330110358.848 ERROR conf - conf.c:userns_exec_minimal:4194 - Permission denied - Running function in new user namespace failed
lxc Blog 20200330110525.413 WARN cgfsng - cgroups/cgfsng.c:cg_unified_delegate:2915 - No such file or directory - Failed to read /sys/kernel/cgroup/delegate
lxc Blog 20200330110525.415 ERROR cgfsng - cgroups/cgfsng.c:cgroup_attach_leaf:2087 - Permission denied - Failed to attach to unified cgroup
lxc Blog 20200330110525.415 ERROR conf - conf.c:userns_exec_minimal:4194 - Permission denied - Running function in new user namespace failed
Can you show me findmnt, please?
findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2 ext4 rw,relatime,errors=remount-ro,data=ordered
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children
│ │ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime
│ ├─/sys/firmware/efi/efivars efivarfs efivarfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf bpf bpf rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime
│ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime
├─/proc proc proc rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=40,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10954
│ └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_misc rw,relatime
├─/dev udev devtmpfs rw,nosuid,relatime,size=6059904k,nr_inodes=1514976,mode=755
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev
│ ├─/dev/hugepages hugetlbfs hugetlbfs rw,relatime
│ └─/dev/mqueue mqueue mqueue rw,relatime
├─/run tmpfs tmpfs rw,nosuid,noexec,relatime,size=1215380k,mode=755
│ ├─/run/lock tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/rpc_pipefs sunrpc rpc_pipefs rw,relatime
│ └─/run/snapd/ns tmpfs[/snapd/ns] tmpfs rw,nosuid,noexec,relatime,size=1215380k,mode=755
│ └─/run/snapd/ns/lxd.mnt nsfs[mnt:[4026532304]] nsfs rw
├─/snap/core/8689 /dev/loop0 squashfs ro,nodev,relatime
├─/snap/core/8592 /dev/loop1 squashfs ro,nodev,relatime
├─/snap/lxd/14095 /dev/loop2 squashfs ro,nodev,relatime
├─/snap/lxd/14066 /dev/loop3 squashfs ro,nodev,relatime
├─/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro
├─/tmp tmpfs tmpfs rw,relatime
├─/srv/dev-disk-by-label-UnsafePool /dev/sdd1 ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
├─/srv/dev-disk-by-label-SafePool /dev/md127 ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
│ └─/srv/dev-disk-by-label-SafePool/LxcContainers /dev/sda2[/var/snap/lxd/common/lxd/storage-pools] ext4 rw,relatime,errors=remount-ro,data=ordered
├─/sharedfolders/GuestShare /dev/sdd1[/GuestShare] ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
├─/sharedfolders/Media /dev/sdd1[/Media] ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
├─/sharedfolders/NextCloudData /dev/md127[/NextCloudData] ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
├─/sharedfolders/HomeShare /dev/md127[/HomeShare] ext4 rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
├─/sharedfolders/LxcContainers /dev/sda2[/var/snap/lxd/common/lxd/storage-pools] ext4 rw,relatime,errors=remount-ro,data=ordered
└─/var/snap/lxd/common/ns tmpfs tmpfs rw,relatime,size=1024k,mode=700
├─/var/snap/lxd/common/ns/mntns nsfs[mnt:[4026532304]] nsfs rw
└─/var/snap/lxd/common/ns/shmounts nsfs[mnt:[4026532305]] nsfs rw
Can you show:
ls -al /sys/fs/cgroup/unified
from inside the container, please?
Damn... I think that will be a problem. The only way to get inside is exec... All containers have been working without updates for months; Blog is a new and empty one.
I thought that exec was the normal way to get inside, because I have 24/7 access to my home server and don't install an SSH daemon in them...
No, you can either get into the container via lxc console
or you can show me:
ls -al /sys/fs/cgroup/unified/
on the host and then I'll tell you which folder I need to look at :)
ls -al /sys/fs/cgroup/unified/
total 0
dr-xr-xr-x 15 root root 0 Mar 30 14:19 .
drwxr-xr-x 13 root root 340 Mar 30 14:03 ..
-r--r--r-- 1 root root 0 Mar 30 14:19 cgroup.controllers
-rw-r--r-- 1 root root 0 Mar 30 14:03 cgroup.procs
-rw-r--r-- 1 root root 0 Mar 30 14:19 cgroup.subtree_control
drwxr-xr-x 2 root root 0 Mar 30 14:19 init.scope
drwxr-xr-x 2 root root 0 Mar 30 14:19 lxc.monitor.Blog
drwxr-xr-x 2 root root 0 Mar 30 14:19 lxc.monitor.Emby
drwxr-xr-x 2 root root 0 Mar 30 14:19 lxc.monitor.Gitea
drwxr-xr-x 2 root root 0 Mar 30 14:19 lxc.monitor.NextCloud
drwxr-xr-x 2 root root 0 Mar 30 14:19 lxc.monitor.NginxReverseProxy
drwxrwxr-x 3 root 1000000 0 Mar 30 14:03 lxc.payload.Blog
drwxrwxr-x 2 root 1000000 0 Mar 30 14:03 lxc.payload.Emby
drwxrwxr-x 2 root 1000000 0 Mar 30 14:03 lxc.payload.Gitea
drwxrwxr-x 2 root 1000000 0 Mar 30 14:03 lxc.payload.NextCloud
drwxrwxr-x 2 root 1000000 0 Mar 30 14:03 lxc.payload.NginxReverseProxy
drwxr-xr-x 57 root root 0 Mar 30 14:19 system.slice
drwxr-xr-x 2 root root 0 Mar 30 14:03 user.slice
How do I connect to Blog using the console? lxc-console -n Blog? It says that Blog is not running.
Ok, can you show:
ls -al /sys/fs/cgroup/unified/lxc.payload.Blog
please, that should be the container you showed the log from, right?
root@openmediavault:~# ls -al /sys/fs/cgroup/unified/lxc.payload.Blog
total 0
drwxrwxr-x 3 root 1000000 0 Mar 30 14:03 .
dr-xr-xr-x 15 root root 0 Mar 30 14:19 ..
drwxr-xr-x 2 1000000 1000000 0 Mar 30 14:03 .lxc
-r--r--r-- 1 root root 0 Mar 30 14:45 cgroup.controllers
-r--r--r-- 1 root root 0 Mar 30 14:45 cgroup.events
-rw-rw-r-- 1 root 1000000 0 Mar 30 14:03 cgroup.procs
-rw-rw-r-- 1 root 1000000 0 Mar 30 14:03 cgroup.subtree_control
Yes, this is a brand new container, but the others have the same error.
Huh, can you show:
cat /sys/fs/cgroup/unified/lxc.payload.Blog/cgroup.procs
ls -al /sys/fs/cgroup/unified/lxc.payload.Blog/.lxc
please?
root@openmediavault:~# cat /sys/fs/cgroup/unified/lxc.payload.Blog/cgroup.procs
1536
1396
1498
1535
1795
1823
root@openmediavault:~# ls -al /sys/fs/cgroup/unified/lxc.payload.Blog/.lxc
total 0
drwxr-xr-x 2 1000000 1000000 0 Mar 30 14:03 .
drwxrwxr-x 3 root 1000000 0 Mar 30 14:03 ..
-r--r--r-- 1 1000000 1000000 0 Mar 30 14:03 cgroup.controllers
-r--r--r-- 1 1000000 1000000 0 Mar 30 14:03 cgroup.events
-rw-r--r-- 1 1000000 1000000 0 Mar 30 14:03 cgroup.procs
-rw-r--r-- 1 1000000 1000000 0 Mar 30 14:03 cgroup.subtree_control
=( Do you have any idea?
How do I connect to Blog using the console? lxc-console -n Blog? It says that Blog is not running.
lxc console Blog
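(A usage sketch, assuming a standard LXD setup:
lxc console Blog    # attach to the container's text console; detach again with ctrl+a q
Unlike exec, this still requires valid login credentials inside the container.)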
But without exec I can't set a default password, right?
What container is it that you're running? I've just created a new Debian stretch VM, installed LXD from the snap, ran an Ubuntu container and attached to it just fine.
And what systemd version are you running on the host:
systemctl --version
?
root@openmediavault:~# systemctl --version
systemd 241 (241)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid
root@openmediavault:~# lxc list
+-------------------+---------+-----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+-----------------------+------+-----------+-----------+
| Blog | RUNNING | 192.168.31.120 (eth0) | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| Cups | STOPPED | | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| Emby | RUNNING | 192.168.31.53 (eth0) | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| Gitea | RUNNING | 192.168.31.52 (eth0) | | CONTAINER | 6 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| NextCloud | RUNNING | 192.168.31.57 (eth0) | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| NginxReverseProxy | RUNNING | 192.168.31.51 (eth0) | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
| test | STOPPED | | | CONTAINER | 0 |
+-------------------+---------+-----------------------+------+-----------+-----------+
All containers have the same error.
Why do you have systemd 241 installed?
Or rather, how?
I don't know... I never touched it. Should I update it to the latest?
No, let me try and reproduce with this setup.
Thanks... I'm close to reinstalling the system to get it working :(
Ok, managed to reproduce this and it's sucky but I have an idea.
I'm all ears =)
The issue is with the kernel you're using. On this kernel the restrictions to move processes between cgroups are different than they are on newer kernels. Specifically, you're running into the following check:
if (!uid_eq(cred->euid, GLOBAL_ROOT_UID) &&
    !uid_eq(cred->euid, tcred->uid) &&
    !uid_eq(cred->euid, tcred->suid))
        ret = -EACCES;
which dictates that in order to move a process into a cgroup you either need to be global root (no restrictions apply), or the effective uid of the process doing the move and the {saved} uid of the process that is supposed to be moved need to be identical. The new attaching logic we did doesn't fulfill this criterion for various reasons. I can likely fix this, but I'm starting to think about placing a requirement on the kernel version for which we guarantee cgroup2 support, mainly because cgroup2 has changed quite a bit.
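(For illustration, a minimal userspace sketch, not LXC's actual attach code: attaching a task to a cgroup2 directory boils down to writing its PID into that cgroup's cgroup.procs file. The path and PID below are placeholders passed on the command line. The kernel runs the euid check quoted above while handling this write, so on a 4.9 kernel it can fail with EACCES, which is what surfaces as "Failed to attach to unified cgroup" in the container log.)

/* attach-sketch.c: illustrative only.
 * Usage: ./attach-sketch /sys/fs/cgroup/unified/lxc.payload.NAME/cgroup.procs PID */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <cgroup.procs path> <pid>\n", argv[0]);
                return 1;
        }

        FILE *f = fopen(argv[1], "w");
        if (!f) {
                fprintf(stderr, "open %s: %s\n", argv[1], strerror(errno));
                return 1;
        }

        /* The write is where the kernel applies the euid/uid/suid check
         * quoted above; on failure errno is EACCES (Permission denied). */
        if (fprintf(f, "%s\n", argv[2]) < 0 || fflush(f) != 0) {
                fprintf(stderr, "attach failed: %s\n", strerror(errno));
                fclose(f);
                return 1;
        }

        fclose(f);
        return 0;
}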
So, in this situation, what can you advise? Roll back to an earlier version of LXD? I can't update the distro right now. And how can I log into a container with lxc console if there is no password?
I'll send a fix soon and then @stgraber will cherry-pick it into the snap and you should have it in a few hours (@stgraber, right?).
If the Qt conference had not been rescheduled to the USA and had stayed in Berlin, I could have bought you some beer :) Thank you.
Yep, cherry-pick is pretty quick
What should I do after this? snap refresh lxd to candidate?
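(For reference, once the cherry-pick lands, the refresh-and-restart sequence suggested earlier should pick it up; this assumes the fix goes into latest/candidate:
snap refresh lxd --channel=latest/candidate
systemctl restart snap.lxd.daemon.service)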
Required information
config: {}
api_extensions:
Issue description
Executing
lxc exec Container bash
returns "EOF" error. In container log there are cgoup errors. This happens on every container i have. This started after UPS failed to powerup server and it shutted downSteps to reproduce
Information to attach
lxc info NAME --show-log
Name: Blog
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/03/28 16:48 UTC
Status: Running
Type: container
Profiles: default
Pid: 2826
Ips:
eth0: inet 192.168.31.120 veth6bed81f8
eth0: inet6 fe80::216:3eff:fe0e:98a veth6bed81f8
lo: inet 127.0.0.1
lo: inet6 ::1
Resources:
Processes: 6
CPU usage:
CPU usage (in seconds): 0
Memory usage:
Memory (current): 14.45MB
Network usage:
eth0:
Bytes received: 2.58MB
Bytes sent: 1.99kB
Packets received: 8619
Packets sent: 21
lo:
Bytes received: 0B
Bytes sent: 0B
Packets received: 0
Packets sent: 0
Log:
lxc Blog 20200330091342.274 WARN cgfsng - cgroups/cgfsng.c:cg_unified_delegate:2906 - No such file or directory - Failed to read /sys/kernel/cgroup/delegate
lxc Blog 20200330091342.291 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1136 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.Blog"
lxc Blog 20200330091342.312 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1136 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.Blog"
lxc Blog 20200330091342.334 ERROR utils - utils.c:lxc_can_use_pidfd:1834 - Kernel does not support pidfds
lxc Blog 20200330091342.375 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1448 - No such file or directory - Failed to fchownat(17, cgroup.threads, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc Blog 20200330091909.468 WARN cgfsng - cgroups/cgfsng.c:cg_unified_delegate:2906 - No such file or directory - Failed to read /sys/kernel/cgroup/delegate
lxc Blog 20200330091909.471 ERROR cgfsng - cgroups/cgfsng.c:cgroup_attach_leaf:2081 - Permission denied - Failed to attach to unified cgroup
lxc Blog 20200330091909.471 ERROR conf - conf.c:userns_exec_minimal:4194 - Permission denied - Running function in new user namespace failed
[x] Container configuration (lxc config show NAME --expanded)
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian stretch amd64 (20200328_05:24)
  image.os: Debian
  image.release: stretch
  image.serial: "20200328_05:24"
  image.type: squashfs
  volatile.base_image: e25e091d33fdd9d522db02c07757e10c448a85681402677d5b9f3ce4e040048a
  volatile.eth0.host_name: veth6bed81f8
  volatile.eth0.hwaddr: 00:16:3e:0e:09:8a
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
[x] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
t=2020-03-30T12:11:09+0300 lvl=info msg="LXD 3.23 is starting in normal mode" path=/var/snap/lxd/common/lxd
t=2020-03-30T12:11:09+0300 lvl=info msg="Kernel uid/gid map:"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - u 0 0 4294967295"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - g 0 0 4294967295"
t=2020-03-30T12:11:09+0300 lvl=info msg="Configured LXD uid/gid map:"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - u 0 1000000 1000000000"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - g 0 1000000 1000000000"
t=2020-03-30T12:11:09+0300 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
t=2020-03-30T12:11:09+0300 lvl=info msg="Kernel features:"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - netnsid-based network retrieval: no"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - uevent injection: no"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - seccomp listener: no"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - seccomp listener continue syscalls: no"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - unprivileged file capabilities: no"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - cgroup layout: hybrid"
t=2020-03-30T12:11:09+0300 lvl=warn msg=" - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored"
t=2020-03-30T12:11:09+0300 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
t=2020-03-30T12:11:09+0300 lvl=info msg=" - shiftfs support: disabled"
t=2020-03-30T12:11:09+0300 lvl=info msg="Initializing local database"
t=2020-03-30T12:11:10+0300 lvl=info msg="Starting /dev/lxd handler:"
t=2020-03-30T12:11:10+0300 lvl=info msg=" - binding devlxd socket" socket=/var/snap/lxd/common/lxd/devlxd/sock
t=2020-03-30T12:11:10+0300 lvl=info msg="REST API daemon:"
t=2020-03-30T12:11:10+0300 lvl=info msg=" - binding Unix socket" inherited=true socket=/var/snap/lxd/common/lxd/unix.socket
t=2020-03-30T12:11:10+0300 lvl=info msg="Initializing global database"
t=2020-03-30T12:11:10+0300 lvl=info msg="Firewall loaded driver \"xtables\""
t=2020-03-30T12:11:10+0300 lvl=info msg="Initializing storage pools"
t=2020-03-30T12:11:12+0300 lvl=info msg="Initializing daemon storage mounts"
t=2020-03-30T12:11:12+0300 lvl=info msg="Initializing networks"
t=2020-03-30T12:11:12+0300 lvl=info msg="Pruning leftover image files"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done pruning leftover image files"
t=2020-03-30T12:11:12+0300 lvl=info msg="Loading daemon configuration"
t=2020-03-30T12:11:12+0300 lvl=info msg="Pruning expired images"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done pruning expired images"
t=2020-03-30T12:11:12+0300 lvl=info msg="Pruning expired instance backups"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done pruning expired instance backups"
t=2020-03-30T12:11:12+0300 lvl=info msg="Updating instance types"
t=2020-03-30T12:11:12+0300 lvl=info msg="Expiring log files"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done updating instance types"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done expiring log files"
t=2020-03-30T12:11:12+0300 lvl=info msg="Updating images"
t=2020-03-30T12:11:12+0300 lvl=info msg="Done updating images"
t=2020-03-30T12:11:12+0300 lvl=info msg="Starting container" action=start created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default stateful=false used=2020-03-30T12:05:14+0300
t=2020-03-30T12:11:12+0300 lvl=info msg="Started container" action=start created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default stateful=false used=2020-03-30T12:05:14+0300
t=2020-03-30T12:11:12+0300 lvl=info msg="Starting container" action=start created=2019-11-18T14:04:54+0300 ephemeral=false name=Emby project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:13+0300 lvl=info msg="Started container" action=start created=2019-11-18T14:04:54+0300 ephemeral=false name=Emby project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:13+0300 lvl=info msg="Starting container" action=start created=2019-11-18T06:37:02+0300 ephemeral=false name=Gitea project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:13+0300 lvl=info msg="Started container" action=start created=2019-11-18T06:37:02+0300 ephemeral=false name=Gitea project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:13+0300 lvl=info msg="Starting container" action=start created=2020-01-07T12:59:31+0300 ephemeral=false name=NextCloud project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:14+0300 lvl=info msg="Started container" action=start created=2020-01-07T12:59:31+0300 ephemeral=false name=NextCloud project=default stateful=false used=2020-03-30T12:05:15+0300
t=2020-03-30T12:11:14+0300 lvl=info msg="Starting container" action=start created=2019-11-18T13:30:04+0300 ephemeral=false name=NginxReverseProxy project=default stateful=false used=2020-03-30T12:05:16+0300
t=2020-03-30T12:11:14+0300 lvl=info msg="Started container" action=start created=2019-11-18T13:30:04+0300 ephemeral=false name=NginxReverseProxy project=default stateful=false used=2020-03-30T12:05:16+0300
t=2020-03-30T12:11:26+0300 lvl=eror msg="Failed to retrieve PID of executing child process: EOF"
t=2020-03-30T12:12:28+0300 lvl=info msg="Shutting down container" action=shutdown created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default timeout=-1s used=2020-03-30T12:11:12+0300
t=2020-03-30T12:12:29+0300 lvl=warn msg="Failed getting list of tables from \"/proc/self/net/ip6_tables_names\", assuming all requested tables exist"
t=2020-03-30T12:12:30+0300 lvl=info msg="Shut down container" action=shutdown created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default timeout=-1s used=2020-03-30T12:11:12+0300
t=2020-03-30T12:13:41+0300 lvl=info msg="Starting container" action=start created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default stateful=false used=2020-03-30T12:11:12+0300
t=2020-03-30T12:13:42+0300 lvl=info msg="Started container" action=start created=2020-03-28T19:48:45+0300 ephemeral=false name=Blog project=default stateful=false used=2020-03-30T12:11:12+0300
t=2020-03-30T12:19:09+0300 lvl=eror msg="Failed to retrieve PID of executing child process: EOF"
t=2020-03-30T12:42:52+0300 lvl=info msg="Creating container" ephemeral=false name=test project=default
t=2020-03-30T12:42:52+0300 lvl=info msg="Created container" ephemeral=false name=test project=default
t=2020-03-30T12:42:52+0300 lvl=warn msg="The backing filesystem doesn't support quotas, skipping set quota" driver=dir path=/var/snap/lxd/common/lxd/storage-pools/default/containers/test pool=default size=10GB volID=55