ioi / isolate

Sandbox for securely executing untrusted programs

Error Running isolate in Ubuntu:22.04 with Systemd #150

Closed: raviprakash007 closed this issue 3 months ago

raviprakash007 commented 3 months ago

I have a Docker container from "jrei/systemd-ubuntu:22.04". I added the packages required to run isolate during the build, and I followed the same process as documented at https://hub.docker.com/r/jrei/systemd-ubuntu to run the container. However, I am facing this issue:

service isolate start

Job for isolate.service failed because the control process exited with error code. See "systemctl status isolate.service" and "journalctl -xeu isolate.service" for details.

systemctl status isolate.service

x isolate.service - A trivial daemon to keep Isolate's control group hierarchy
     Loaded: loaded (/etc/systemd/system/isolate.service; disabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2024-03-10 18:33:40 UTC; 6s ago
    Process: 9414 ExecStart=/usr/local/sbin/isolate-cg-keeper (code=exited, status=1/FAILURE)
   Main PID: 9414 (code=exited, status=1/FAILURE)

Mar 10 18:33:40 4ea799643f3d systemd[1]: Starting A trivial daemon to keep Isolate's control group hierarchy...
Mar 10 18:33:40 4ea799643f3d isolate-cg-keeper[9414]: Cannot create subgroup /sys/fs/cgroup/docker/4ea799643f3dcf2d973753ec6c76ed6d838028fb2817862126128cb9ae49>
Mar 10 18:33:40 4ea799643f3d systemd[1]: isolate.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 18:33:40 4ea799643f3d systemd[1]: isolate.service: Failed with result 'exit-code'.
Mar 10 18:33:40 4ea799643f3d systemd[1]: Failed to start A trivial daemon to keep Isolate's control group hierarchy.

And

systemctl status isolate.slice

Mar 10 18:16:11 4ea799643f3d systemd[1]: Created slice Slice for Isolate's sandboxes.
Mar 10 18:16:11 4ea799643f3d isolate-cg-keeper[9380]: Cannot create subgroup /sys/fs/cgroup/docker/4ea799643f3dcf2d973753ec6c76ed6d838028fb2817862126128cb9ae49>

/usr/local/sbin/isolate-cg-keeper

Cannot create subgroup /sys/fs/cgroup/docker/4ea799643f3dcf2d973753ec6c76ed6d838028fb2817862126128cb9ae496157/init.scope/daemon: No such file or directory

Any guidance would be appreciated.

gollux commented 3 months ago

The main guidance we can give is to avoid using Docker with Isolate. We do not support this configuration and you are likely to get in trouble.

A couple of quick hints: First, have you tried running the container in privileged mode? Second, if you are pasting error messages, please do not truncate them; the > at the end means that the rest was truncated by systemctl.
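The privileged-mode hint might look like the following. This is only a sketch, not a supported recipe: the image name comes from the original report, while the container name, the `--cgroupns=host` flag, and the cgroup bind mount are assumptions about what such a setup would need.

```shell
# Hypothetical sketch: run the systemd-enabled image in privileged mode.
# --privileged grants the container broad device and cgroup access.
# --cgroupns=host (an assumption) shares the host's cgroup namespace,
# so a daemon like isolate-cg-keeper sees the host hierarchy.
docker run -d --name isolate-test \
  --privileged \
  --cgroupns=host \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  jrei/systemd-ubuntu:22.04
```

Even with these flags, the maintainer's point stands: running isolate inside Docker is not a supported configuration.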

raviprakash007 commented 3 months ago

ls -l /sys/fs/cgroup

total 0
dr-xr-xr-x 19 root root 0 Mar 10 14:56 blkio
dr-xr-xr-x 19 root root 0 Mar 10 14:56 cpu
dr-xr-xr-x 19 root root 0 Mar 10 14:56 cpuacct
dr-xr-xr-x 19 root root 0 Mar 10 14:56 cpuset
dr-xr-xr-x 19 root root 0 Mar 10 14:56 devices
dr-xr-xr-x 20 root root 0 Mar 10 14:56 freezer
dr-xr-xr-x 19 root root 0 Mar 10 14:56 memory
dr-xr-xr-x 19 root root 0 Mar 10 14:56 net_cls
dr-xr-xr-x 19 root root 0 Mar 10 14:56 net_prio
dr-xr-xr-x 19 root root 0 Mar 10 14:56 perf_event
dr-xr-xr-x 19 root root 0 Mar 10 14:56 pids
dr-xr-xr-x 19 root root 0 Mar 10 14:56 rdma
dr-xr-xr-x 19 root root 0 Mar 10 14:56 systemd
dr-xr-xr-x 20 root root 0 Mar 10 14:56 unified

gollux commented 3 months ago

Are you sure you have cgroup v2 active? This looks like v1.

What does mount | grep cgroup print?

raviprakash007 commented 3 months ago

I have the latest master branch. I think v2 is already merged.

raviprakash007 commented 3 months ago

mount | grep cgroup

cgroup on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cpuset on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cpuacct on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
blkio on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
devices on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
freezer on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
net_cls on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
perf_event on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
net_prio on /sys/fs/cgroup/net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio)
pids on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
rdma on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,relatime,name=systemd)

gollux commented 3 months ago

> I have the latest master branch. I think v2 is already merged.

I didn't mean v2 version of Isolate, but support for cgroup v2 on your system :)

It seems that your system is running in hybrid mode with v2 mounted on /sys/fs/cgroup/unified. Can you switch it to pure v2?
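The hybrid-vs-pure distinction above can be read straight off the `mount` output: pure v2 mounts `cgroup2` directly on /sys/fs/cgroup, hybrid mode mounts it on a subdirectory such as /sys/fs/cgroup/unified alongside v1 controllers. A small illustrative classifier (the function name is hypothetical, not part of isolate):

```shell
# Sketch: classify a cgroup layout from the output of `mount`.
# Pure v2:  cgroup2 mounted directly on /sys/fs/cgroup.
# Hybrid:   cgroup2 mounted elsewhere (e.g. /sys/fs/cgroup/unified) next to v1.
# v1:       only legacy "type cgroup" mounts present.
detect_cgroup_mode() {
  local mounts="$1"
  if echo "$mounts" | grep -q '^cgroup2 on /sys/fs/cgroup type cgroup2'; then
    echo pure-v2
  elif echo "$mounts" | grep -q 'type cgroup2'; then
    echo hybrid
  elif echo "$mounts" | grep -q 'type cgroup '; then
    echo v1
  else
    echo none
  fi
}
```

Feeding it the output pasted earlier in this thread (which has cgroup2 on /sys/fs/cgroup/unified) would classify the system as hybrid.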

Also, having non-truncated error messages would be nice.

raviprakash007 commented 3 months ago

I have a CentOS 9 host machine with cgroup v2 support, and I am running the Docker image (Ubuntu 22 with systemd enabled) of my application, with the isolate master branch installed in the image.

Question: does the host machine need cgroup v2 enabled, or the image itself?

gollux commented 3 months ago

This, however, doesn't answer my questions.

raviprakash007 commented 3 months ago

Fixed all issues with the following steps:

  1. Started Ubuntu 22.04 with systemd
  2. Made sure that host and container both are using cgroupv2
  3. Everything worked.
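Step 2 above can be verified with a quick filesystem-type probe, run both on the host and inside the container. This is a generic check, not something from isolate itself; the result depends on the machine it runs on, so no particular output is guaranteed here.

```shell
# Probe the filesystem type of the cgroup mount point.
#   cgroup2fs -> pure cgroup v2 (what isolate's cg-keeper expects)
#   tmpfs     -> hybrid or v1 layout, as seen earlier in this thread
stat -fc %T /sys/fs/cgroup
```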

raviprakash007 commented 3 months ago

Ticket can be closed.