kraj / meta-openwrt

OE/Yocto metadata layer for OpenWRT
MIT License

Not able to start Docker daemon inside LXC container in OpenWRT image built using Yocto. #84

Open satishnaidu opened 6 years ago

satishnaidu commented 6 years ago

Required information

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: missing
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: missing
CONFIG_NETLINK_DIAG: missing
File capabilities: enabled

Issue description

Not able to start Docker inside an LXC Ubuntu container, even though I enabled the cgroup configuration in the container config file. I can start Docker on the host OpenWRT image, but when I try to start Docker inside the LXC container, it fails with the error "Devices cgroup isn't mounted".

I raised this issue on the LXC GitHub, and they responded that it is caused by "mounting all cgroups into a single hierarchy" on the OpenWRT system: https://github.com/lxc/lxc/issues/2483#issuecomment-406864702
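For reference, the single-hierarchy layout can be confirmed on the OpenWRT host or inside the container with commands like the following (illustrative, not taken from the original report):

    # On a stock OpenWRT image, all v1 controllers are typically co-mounted on
    # one hierarchy (a single combined cgroup entry in the mount table),
    # whereas Docker expects a separate mount per controller
    # (cpu, memory, devices, blkio, ...).
    grep cgroup /proc/mounts

    # Kernel view: with a single co-mounted hierarchy, every enabled
    # controller shares the same hierarchy ID in the second column.
    cat /proc/cgroups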

NOTE: On Raspbian stretch (armhf), I am able to run Docker inside LXC without any issues; I am only facing this with the OpenWRT image.

Do we have any solution on OpenWRT to mount the cgroups as multiple hierarchies inside the LXC container, in order to run Docker inside LXC?

Error message:

root@c1:/# dockerd -s vfs
INFO[0000] libcontainerd: new containerd process, pid: 18
WARN[0000] containerd: low RLIMIT_NOFILE changing to max current=1024 max=4096
INFO[0001] Graph migration to content-addressability took 0.00 seconds
WARN[0001] Your kernel does not support cgroup memory limit
WARN[0001] Unable to find cpu cgroup in mounts
WARN[0001] Unable to find blkio cgroup in mounts
WARN[0001] Unable to find cpuset cgroup in mounts
WARN[0001] mountpoint for pids not found
Error starting daemon: Devices cgroup isn't mounted

Steps to reproduce

  1. lxc-start -n c1 --logfile test.log --logpriority DEBUG (Ubuntu container)
  2. lxc-attach -n c1
  3. apt-get update && apt-get install docker.io
  4. dockerd -s vfs

Information to attach

Template used to create this container: /usr/share/lxc/templates/lxc-download

Parameters passed to the template:

Template script checksum (SHA-1): 740c51206e35463362b735e68b867876048a8baf

For additional config options, please look at lxc.container.conf(5)

Uncomment the following line to support nesting containers:

lxc.include = /usr/share/lxc/config/nesting.conf

(Be aware this has security implications)

Distribution configuration

lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.arch = linux32

Container specific configuration

lxc.rootfs = /var/lib/lxc/c1/rootfs
lxc.rootfs.backend = dir
lxc.utsname = c1

Network configuration

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up

Cgroup configuration

lxc.aa_profile = unconfined
lxc.mount.auto = proc:rw sys:rw cgroup:rw
lxc.autodev = 1
lxc.cgroup.devices.allow = a
lxc.cap.drop =

lxc.mount.entry = proc proc proc nosuid,nodev,noexec 0 0

lxc.mount.entry = sysfs sys sysfs nosuid,nodev,noexec 0 0

satishnaidu commented 6 years ago

Hi Team,

I was able to resolve this issue and run Docker inside the LXC container on OpenWRT by fixing the cgroup mounts with the steps below:
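(The exact commands were not preserved in this thread; the following is a minimal sketch of the usual fix suggested in the linked LXC issue: splitting OpenWRT's single co-mounted cgroup hierarchy into per-controller mounts on the host before starting the container. The controller list here is an assumption and should be adjusted to match what /proc/cgroups reports as enabled.)

    # Sketch only; run on the OpenWRT host before any containers are started.
    # Replace the single co-mounted hierarchy with one mount per controller,
    # which is the layout dockerd expects to find.
    umount /sys/fs/cgroup
    mount -t tmpfs -o mode=0755 cgroup_root /sys/fs/cgroup
    for ctrl in cpu cpuacct cpuset memory devices blkio pids freezer; do
        mkdir -p /sys/fs/cgroup/$ctrl
        mount -t cgroup -o $ctrl $ctrl /sys/fs/cgroup/$ctrl
    done

With lxc.mount.auto = cgroup:rw in the container config, the per-controller mounts should then be visible inside the container, so dockerd can find the devices cgroup.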

Please let me know if there is a better way to do this or any configuration to avoid manual steps.

Thanks,
Satish Kumar Andey

SeriousM commented 6 years ago

@satishnaidu Sorry, but I don't understand all of your commands. Would you be so kind as to write the full commands down?

oxr463 commented 5 years ago

@satishnaidu were you able to figure this out?