
lxc unprivileged containers #199

Open 1587 opened 7 years ago

1587 commented 7 years ago

http://www.stgraber.org/2014/01/17/lxc-1-0-unprivileged-containers/

LXC 1.0: Unprivileged containers [7/10]
Posted on 2014/01/17 by Stéphane Graber
This is post 7 out of 10 in the LXC 1.0 blog post series.

Introduction to unprivileged containers

The support of unprivileged containers is, in my opinion, one of the most important new features of LXC 1.0.

You may remember from previous posts that I mentioned LXC should be considered unsafe: even though the container runs in a separate namespace, uid 0 in your container is still equal to uid 0 outside of it. That means that if you somehow get access to any host resource through proc, sys or some random syscall, you can potentially escape the container, and then you're root on the host.

That's what user namespaces were designed and implemented for. It was a multi-year effort to think them through and slowly push the hundreds of required patches into the upstream kernel, but finally with 3.12 we got to a point where we can start a full system container entirely as an unprivileged user.

So how do those user namespaces work? Simply put, each user that's allowed to use them on the system gets assigned a range of unused uids and gids, ideally a whole 65536 of them. You can then use those uids and gids with two standard tools, newuidmap and newgidmap, which let you map any of them to virtual uids and gids inside a user namespace.
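For illustration, here is a minimal sketch of what those two tools do (the PID 1234 stands for a hypothetical process that has just entered a new user namespace; in practice LXC invokes the tools for you when starting a container):

newuidmap 1234 0 100000 65536
newgidmap 1234 0 100000 65536

Each call writes a "container-id host-id range" triple into /proc/1234/uid_map or /proc/1234/gid_map, and is only allowed because that range is assigned to the calling user in /etc/subuid and /etc/subgid.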

That means you can create a container with the following configuration:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

The above means that I have one uid map and one gid map defined for my container, mapping uids and gids 0 through 65535 in the container to uids and gids 100000 through 165535 on the host.
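Once a container with that configuration is up, a quick hedged way to confirm the mapping from inside it is to read the kernel's view of it (the columns are: id inside the namespace, id on the host, range length):

cat /proc/self/uid_map
         0     100000      65536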

For this to be allowed, I need to have those ranges assigned to my user at the system level with:

stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null
/etc/subgid:stgraber:100000:65536
/etc/subuid:stgraber:100000:65536

LXC has now been updated so that all the tools are aware of those unprivileged containers. The standard paths also have their unprivileged equivalents:

/etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
/etc/lxc/default.conf => ~/.config/lxc/default.conf
/var/lib/lxc => ~/.local/share/lxc
/var/lib/lxcsnaps => ~/.local/share/lxcsnaps
/var/cache/lxc => ~/.cache/lxc

Your user can create new user namespaces in which it will be uid 0 and will hold some of root's privileges against resources tied to that namespace, but it will obviously not be granted any extra privilege on the host.

One such thing is creating new network devices on the host or changing bridge configuration. To work around that, we wrote a tool called "lxc-user-nic", which is the only setuid binary shipped as part of LXC 1.0 and which performs one simple task: it parses a configuration file and, based on its content, creates network devices for the user and bridges them. To prevent abuse, you can restrict the number of devices a user may request and which bridge they may be added to.

An example is my own /etc/lxc/lxc-usernet file:

stgraber veth lxcbr0 10

This declares that the user "stgraber" may create up to 10 veth-type devices and add them to the bridge called lxcbr0.
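The file takes one entry per line in "user type bridge count" form, so quotas for additional bridges simply get their own lines; as a hedged illustration (the virbr0 bridge and the count of 5 are made-up values):

stgraber veth lxcbr0 10
stgraber veth virbr0 5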

Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged.

Prerequisites

All the examples and instructions below assume you are running a fully up-to-date version of Ubuntu 14.04 (codename trusty). That's a pre-release of Ubuntu, so you may want to run it in a VM or on a spare machine rather than upgrading your production computer.

The reason for wanting something that recent is that the rough requirements for well-working unprivileged containers are:

- Kernel: 3.13 + a couple of staging patches (which Ubuntu has in its kernel)
- User namespaces enabled in the kernel
- A very recent version of shadow that supports subuid/subgid
- Per-user cgroups on all controllers (which I turned on a couple of weeks ago)
- LXC 1.0 beta2 or higher (released two days ago)
- A version of PAM with a loginuid patch that's yet to be in any released version

Those requirements happen to all be true of the current development release of Ubuntu as of two days ago.
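A hedged sketch of how you might spot-check a few of these on a candidate host (paths and options assume a stock Ubuntu kernel and shadow; adjust for your distribution):

uname -r                                        # kernel version, 3.13 or newer here
grep CONFIG_USER_NS /boot/config-$(uname -r)    # user namespaces compiled in
usermod --help | grep -- --add-subuids          # shadow with subuid/subgid support
lxc-start --version                             # LXC 1.0 beta2 or newer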

LXC pre-built containers

User namespaces come with quite a few obvious limitations. For example, in a user namespace you won't be allowed to use mknod to create a block or character device, because being allowed to do so would let you access anything on the host. The same goes for some filesystems: you won't, for example, be allowed to do loop mounts or mount an ext partition, even if you can access the block device.
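A hedged way to see that restriction for yourself is to run mknod inside a throwaway user namespace with lxc-usernsexec (which maps your subuid range so you are "root" inside it); the device numbers below are just sda's usual major/minor:

lxc-usernsexec -- mknod /tmp/fake-sda b 8 0
# fails with "Operation not permitted", even though you are uid 0 in the namespace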

Those limitations, while not necessarily world-ending in day-to-day use, are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to perform some of those restricted actions and will fail pretty badly.

Some templates could be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the goal of the LXC project isn't to require all of our users to be distro engineers, so we came up with a much simpler solution.

I wrote a new template called "download" which, instead of assembling the rootfs and configuration locally, contacts a server that hosts daily pre-built rootfs and configuration tarballs for the most common templates.

Those images are built from our Jenkins server using a few machines I have on my home network (a set of powerful x86 builders and a quad-core ARM board). The actual build process is pretty straightforward: a basic chroot is assembled, the current git master is downloaded and built, and the standard templates are run with the right release and architecture. The resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) are saved, then the result is pulled by our main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC-enabled and available over IPv6), grabs signed indexes of all the available images, and checks whether the requested combination of distribution, release and architecture is supported. If it is, it grabs the rootfs and metadata tarballs, validates their signature and stores them in a local cache. Any container creation after that point is done using that cache, until the cache entries expire, at which point a new copy is grabbed from the server.
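If you want to bypass that cache and force a fresh download, the template accepts a --flush-cache flag (shown here combined with the usual options; this is just a sketch of one way to use it):

lxc-create -t download -n p1 -- --flush-cache -d ubuntu -r trusty -a amd64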

The current list of images (as can be requested by passing --list) is:


DIST RELEASE ARCH VARIANT BUILD

debian  wheezy   amd64  default  20140116_22:43
debian  wheezy   armel  default  20140116_22:43
debian  wheezy   armhf  default  20140116_22:43
debian  wheezy   i386   default  20140116_22:43
debian  jessie   amd64  default  20140116_22:43
debian  jessie   armel  default  20140116_22:43
debian  jessie   armhf  default  20140116_22:43
debian  jessie   i386   default  20140116_22:43
debian  sid      amd64  default  20140116_22:43
debian  sid      armel  default  20140116_22:43
debian  sid      armhf  default  20140116_22:43
debian  sid      i386   default  20140116_22:43
oracle  6.5      amd64  default  20140117_11:41
oracle  6.5      i386   default  20140117_11:41
plamo   5.x      amd64  default  20140116_21:37
plamo   5.x      i386   default  20140116_21:37
ubuntu  lucid    amd64  default  20140117_03:50
ubuntu  lucid    i386   default  20140117_03:50
ubuntu  precise  amd64  default  20140117_03:50
ubuntu  precise  armel  default  20140117_03:50
ubuntu  precise  armhf  default  20140117_03:50
ubuntu  precise  i386   default  20140117_03:50
ubuntu  quantal  amd64  default  20140117_03:50
ubuntu  quantal  armel  default  20140117_03:50
ubuntu  quantal  armhf  default  20140117_03:50
ubuntu  quantal  i386   default  20140117_03:50
ubuntu  raring   amd64  default  20140117_03:50
ubuntu  raring   armhf  default  20140117_03:50
ubuntu  raring   i386   default  20140117_03:50
ubuntu  saucy    amd64  default  20140117_03:50
ubuntu  saucy    armhf  default  20140117_03:50
ubuntu  saucy    i386   default  20140117_03:50
ubuntu  trusty   amd64  default  20140117_03:50
ubuntu  trusty   armhf  default  20140117_03:50
ubuntu  trusty   i386   default  20140117_03:50

The template has been carefully written to work on any system that has a POSIX-compliant shell with wget. gpg is recommended but can be disabled if your host doesn't have it (at your own risk).
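If you want to query that index yourself, the --list flag mentioned above can be passed through to the template; the container name here is just a throwaway placeholder:

lxc-create -t download -n placeholder -- --list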

The same template can be used against your own server, which I hope will be very useful for enterprise deployments: templates can be built in a central location and pulled automatically by all the hosts, using our expiry mechanism to keep them fresh.
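As a hedged sketch of what that could look like, the download template takes a --server option; images.example.com below is a hypothetical in-house mirror, not a real host:

lxc-create -t download -n c1 -- --server images.example.com -d ubuntu -r trusty -a amd64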

While the template was designed to work around limitations of unprivileged containers, it works just as well with system containers, so even on a system that doesn't support unprivileged containers you can do:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

And you'll get a new container running the latest build of Ubuntu 14.04 amd64.
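On such a (privileged) setup you would then boot and enter the container the usual way; a brief sketch:

sudo lxc-start -n p1 -d
sudo lxc-attach -n p1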

Using unprivileged LXC

Right, so let's get you started. As I already mentioned, all the instructions below have only been tested on a very recent Ubuntu 14.04 (trusty) installation. You may want to grab a daily build and run it in a VM.

Install the required packages:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install lxc systemd-services uidmap

Then, assign yourself a set of uids and gids with:

sudo usermod --add-subuids 100000-165536 $USER
sudo usermod --add-subgids 100000-165536 $USER
sudo chmod +x $HOME

That last one is required because LXC needs it to access ~/.local/share/lxc/ after it switched to the mapped UIDs. If you're using ACLs, you may instead use "u:100000:x" as a more specific ACL.
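If you'd rather grant traversal only to the mapped root uid instead of everyone, here is a hedged sketch of the ACL variant mentioned above (it needs the acl tools and an ACL-capable filesystem):

setfacl -m u:100000:x $HOME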

Now create ~/.config/lxc/default.conf with the following content:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And /etc/lxc/lxc-usernet with:

<your username> veth lxcbr0 10

And that's all you need. Now let's create our first unprivileged container with:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

You should see the following output from the download template:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64).
The default username/password is: ubuntu / ubuntu
To gain root privileges, please use sudo.

So it looks like your first container was created successfully. Now let's see if it starts:

ubuntu@trusty-daily:~$ lxc-start -n p1 -d
ubuntu@trusty-daily:~$ lxc-ls --fancy
NAME  STATE    IPV4     IPV6     AUTOSTART
------------------------------------------
p1    RUNNING  UNKNOWN  UNKNOWN  NO

It's running! At this point, you can get a console using lxc-console or SSH to it by looking for its IP in the ARP table (arp -n).

One thing you probably noticed above is that the IP addresses for the container aren't listed. That's because unfortunately LXC currently can't attach to an unprivileged container's namespaces, which also means that some fields of lxc-info will be empty and that you can't use lxc-attach. However, we're looking into ways to get that sorted in the near future.

There are also a few problems with job control in the kernel and with PAM, so doing a non-detached lxc-start will probably result in a rather weird console where things like sudo will most likely fail. SSH may also fail on some distros. A patch has been sent upstream for this, but I just noticed that it doesn't actually cover all cases, and even if it did, it's not in any released version yet.

Quite a few more improvements to unprivileged containers are to come before the final 1.0 release next month, and while we certainly don't expect all workloads to be possible with unprivileged containers, it's still a huge improvement on what we had before and a very good building block for a lot more interesting use cases.

93 Responses to LXC 1.0: Unprivileged containers [7/10]

kevin wilson says: 2014/01/26 at 3:07 PM
Hi, what do you mean by:
> This declares that the user "stgraber" is allowed up to 10 veth type devices to be created and added to the bridge called lxcbr0.
Is it that user stgraber can create 10 containers, and in each container there will be a single veth interface, and if he tries to start an eleventh container configured with a single veth this will fail? Regards, Kevin

Stéphane Graber says: 2014/01/27 at 10:53 AM
It means you can have up to 10 interfaces on the host that are bridged in lxcbr0. Whether you use all 10 for a single container or a single one per container doesn't matter. Since that setting is per-bridge and there's little point in having two interfaces in the same bridge for a given container, that typically means 10 containers. Containers can then do whatever they want with their network inside them and no restriction is applied to that (other than any limit the kernel may have).

Jeremiah says: 2014/01/27 at 3:00 PM
Hello Stéphane, thanks for the great instructional posts. Can you add Centos 5 to your image repository? Thanks!

kevin wilson says: 2014/01/27 at 5:03 PM
Hi, yet I have another question about another point in your blog.
You mention “you somehow get access to any host resource”: By saying “somehow” this sounds like you are talking about some security breach which is not known or hard to find. Do I get you right ? I made the following test – maybe i misunderstand something: – created a fedora container as root – start the conrainer, not as a daemon. Run from within the conrainer – mknod /dev/sda5 b majorNumber minorNumber mount /dev/sda5 /mnt/sda5 And then I can access the host filesystem. Are there any protections against this ? Or are you talking about the case when the container was crreated by a non root and then the behavior will be (maybe) different ? Regards Kevin Reply Andrew says: 2014/03/19 at 1:09 PM This is no longer allowed. I’m on 1.0.1 Reply vasilisc says: 2014/01/28 at 11:18 PM > sudo usermod –add-subuids 100000-165536 $USER sudo usermod --add-sub-uids ? Reply Stéphane Graber says: 2014/01/29 at 4:42 AM stgraber@castiana:~$ usermod --help | grep sub -v, --add-subuids FIRST-LAST add range of subordinate uids -V, --del-subuids FIRST-LAST remvoe range of subordinate uids -w, --add-subgids FIRST-LAST add range of subordinate gids -W, --del-subgids FIRST-LAST remvoe range of subordinate gids Reply vasilisc says: 2014/01/29 at 11:43 AM very strange man usermod|col -bx|grep sub -v, –add-sub-uids FIRST-LAST Add a range of subordinate uids to the users account. -V, –del-sub-uids FIRST-LAST Remove a range of subordinate uids from the users account. –del-sub-uids and –add-sub-uids are specified remove of all subordinate uid ranges happens before any subordinate uid ranges are added. -w, –add-sub-gids FIRST-LAST Add a range of subordinate gids to the users account. -W, –del-sub-gids FIRST-LAST Remove a range of subordinate gids from the users account. –del-sub-gids and –add-sub-gids are specified remove of all subordinate gid ranges happens before any subordinate gid ranges are added. usermod --help | grep sub -v, –add-subuids FIRST-LAST add range of subordinate uids -V, –del-subuids FIRST-LAST remvoe range of subordinate uids -w, –add-subgids FIRST-LAST add range of subordinate gids -W, –del-subgids FIRST-LAST remvoe range of subordinate gids Reply Pingback: codescaling | LXC’s 1.0, Thrift opened again, WhatsApp serving and more – Snippets Paul Thomson says: 2014/03/05 at 11:42 AM Hey, great work. Can you provide how you build the downloadable ubuntu images? Would be great to build my own release/image server. Thanks a lot. Reply Stéphane Graber says: 2014/03/06 at 11:35 AM You can get the actual list of actions from the build logs at https://jenkins.linuxcontainers.org Technically, it’s really just a run of the standard template with a fixed container name put into rootfs.tar.xz, then a few plain text metadata files put into the meta.tar.xz tarball. Reply Paul Thomson says: 2014/03/07 at 2:06 AM Okay, thanks for the hint. Reply The NeverGone says: 2014/03/06 at 9:55 AM Very good article! How can we migrate existing privileged container to unprivileged? Thanks answer! Reply Stéphane Graber says: 2014/03/06 at 11:32 AM There’s no easy way to do that unfortunately, you’d need to update your container config to match that from an unprivileged container, move the container’s directory over to the unprivileged user you want it to run as, then use Serge’s uidshift program to change the ownership of all files. Reply The NeverGone says: 2014/03/06 at 1:06 PM uidshift == uidmapshift ? this? 
https://launchpad.net/~serge-hallyn/+archive/nsexec Reply Stéphane Graber says: 2014/03/06 at 4:40 PM Yeah, that’s the one. Reply Alan Pater says: 2014/03/14 at 3:32 PM A couple of questions. I am running up-to-date Ubuntu 14.04: 1) I get an error when running lxc-start: lxc_container: Executing ‘/sbin/init’ with no configuration file may crash the host 2) On priviledged containers, I was able to use --bindhome $LOGNAME to give a logon id the same as my current one rather then ubuntu:ubuntu. Is something like that possible with unpriviledged containers? Reply Alan Pater says: 2014/03/25 at 1:09 PM Ok, the lxc-start was my fault, I had a typo in the setup. Still wondering how to make the container use my userid rather then ubuntu:ubuntu. Reply Noah F. SanTsorbutz says: 2015/07/27 at 5:47 PM I get the same error, but have no typo, AFAICT. Please describe what you found, and how you fixed it. Reply Andy Johnson says: 2014/03/23 at 2:42 AM >1) I get an error when running lxc-start: lxc_container: Executing ‘/sbin/init’ >with no configuration file may crash the host I tried today with the latest daily Ubuntu release of 14.04, exactly as in the post, and I got the same error about “Executing ‘/sbin/init’ “. Any ideas ? Andy Reply Pingback: Introducing cgmanager | S3hh's Blog KeeperB5 says: 2014/03/27 at 11:21 AM Hi Stéphane! I followed your guide with the intention of using user account called “lxc” that does not have sudo permissions to create and run containers. After I change from my regular user account to the lxc user account to run lxc-create, I get permission denied issues. lxc@ns3095882:~$ lxc-create -t download -n nsd — -d ubuntu -r trusty -a amd64 WARN: could not reopen tty: Permission denied lxc_container: Error opening /tmp/1001/lxc//srv/lxc/unprivileged/.local/share/lxc/nsd lxc_container: failed to save starting configuration for nsd lxc_container: Error creating container nsdlxc@ns3095882:~$: command not found lxc@ns3095882:~$ lxc-create: Permission denied – failed to create directory ‘/run/user/1000/lock/’ So I wonder, if I followed your guide, is sudo still needed to run lxc-create? Reply Robin Harvey says: 2014/04/21 at 4:09 PM Hi, I had the same issue and solved it by SSH’ing in to the host machine as the lxc user directly. It seems that using sudo (either as sudo -i -u lxc or sudo -i and then su lxc) doesn’t work because there’s still some reference to your original user’s $UID hanging around, this gets picked up and confuses things. HTH. –Robin Reply Chris says: 2014/08/25 at 11:12 AM Sure, logging in through ssh works but what if you want to auto-start these containers or otherwise start them during boot? Starting them from the user’s crontab (@reboot) has the exact same problem. Reply Thomas says: 2015/03/09 at 4:26 AM Hey, Does someone has a fix for this? I also want to autostart some unprivileged containers on boot, but have the same problem. 
Thomas Reply Yves says: 2015/04/07 at 8:18 AM For me unsetting the XDG_* environment variables did the trick: $ unset XDG_RUNTIME_DIR $ unset XDG_SESSION_ID The warnings about reopening the tty remained, but creation succeeded: $ lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64 WARN: could not reopen tty: Permission denied WARN: could not reopen tty: Permission denied WARN: could not reopen tty: Permission denied WARN: could not reopen tty: Permission denied Using image from local cache Unpacking the rootfs --- You just created an Ubuntu container (release=trusty, arch=amd64, variant=default) To enable sshd, run: apt-get install openssh-server For security reason, container images ship without user accounts and without a root password. Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts If you also want to get rid of those, you need to replace the /dev/pts/?? with something writable by the user. The script utility can be used for this (although it is a really crude hack): $ script /dev/null $ lxc-create -t download -n p2 -- -d ubuntu -r trusty -a amd64 Using image from local cache Unpacking the rootfs Nevertheless this only helps me with creation. Starting still raises some issues. Julian Lam says: 2015/04/28 at 10:42 PM Thomas, I spent much of today wrestling with lxc trying to figure this out. Here are my results: https://gist.github.com/julianlam/4e2bd91d8dedee21ca6f In short, use cgm! 🙂 Pingback: Nested lxc | S3hh's Blog Pingback: Trusty Painless Ubuntu…and Candy! | Team Dave's Blog Fabien C. says: 2014/04/12 at 5:08 PM Hi Stéphane, I can’t get user namespace isolation to work here. If I add this within the container configuration file: lxc.id_map = u 0 100000 65536 lxc.id_map = g 0 100000 65536 and then lxc-start the container as root (I’m not trying full unprivileged for now), I get the following error messages: lxc-start: No such file or directory – failed to mount ‘/sys/fs/fuse/connections’ on ‘/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections’ lxc-start: Permission denied – error unlinking /usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/kmsg lxc-start: failed to setup kmsg for ‘m1’ lxc-start: Permission denied – failed to create directory ‘/usr/lib/x86_64-linux-gnu/lxc/rootfs/lxc_putold’ lxc-start: Permission denied – failed to create pivotdir ‘/usr/lib/x86_64-linux-gnu/lxc/rootfs/lxc_putold’ lxc-start: failed to setup pivot root lxc-start: failed to set rootfs for ‘m1’ lxc-start: failed to setup the container lxc-start: invalid sequence number 1. expected 2 lxc-start: failed to spawn ‘m1’ I don’t know what I’m missing: I think my system is capable of doing this, lxc-usernsexec is also working fine, and the container starts without the lxc.id_map configuration. I’m on Debian Wheezy using lxc 1.0.3 which I backported, based on the sid lxc 1.0.0 package (the latter not working either). I’m also using the Debian backported 3.13 kernel. Any clue? Reply ragavaluvijay says: 2016/08/26 at 7:17 AM Hi Fabien & Stéphane , If I understood correctly from Fabien comment , I am also trying similar method . i am not trying full unprivileged container now. instead i am just trying to change user name space config for the container & start accordingly . so i changed container config with below key values , lxc.id_map = u 0 201000 10 lxc.id_map = g 0 201000 10 and created/started the container from root . 
but not able to start and facing below error *************************************************************** lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-1: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-2: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-3: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-4: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-5: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-6: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-7: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-8: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-9: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-10: No such file or directory lxc-start: cgfsng.c: cgfsng_create: 1072 No such file or directory – Failed to create /sys/fs/cgroup/systemd//lxc/testecho-11: No such file or directory newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: start.c: lxc_spawn: 1161 failed to set up id mapping lxc-start: start.c: __lxc_start: 1353 failed to spawn ‘testecho’ newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/systemd//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/blkio//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/perf_event//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/freezer//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/cpu//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/hugetlb//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not 
allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/devices//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/net_cls//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/memory//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/cpuset//lxc/testecho-12 newuidmap: uid range [0-10) -> [200000-200010) not allowed lxc-start: conf.c: userns_exec_1: 4315 Error setting up child mappings lxc-start: cgfsng.c: recursive_destroy: 983 Error destroying /sys/fs/cgroup/pids//lxc/testecho-12 lxc-start: lxc_start.c: main: 344 The container failed to start. lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the –logfile and –logpriority options. test@test:~$ *************************************************************************** I am using ubuntu 16.04(kernel 4.4.0.34) which is systemd init system and also i changed below key value to start systemd init. however the same issue is seen for init.d(systemV) init also lxc.init_cmd=/lib/systemd/systemd ******************************************************************** i also requested support for the same in below forum, waiting for feed back http://unix.stackexchange.com/questions/305428/uid-gid-privileged-lxc-container-systemd-lxc-start-failed-on-ubuntu-16-04 and https://ubuntuforums.org/showthread.php?t=2334953 if you know any solution already , let me know Reply ragavaluvijay says: 2016/08/26 at 7:18 AM i am using lxc 2.0.3 version and , lxc 2.0.3 Man page of lxc.container.conf says range is from 200000 to 220000 , but same man page example configured as below lxc.id_map = u 0 100000 10000 lxc.id_map = g 0 100000 10000 please let me know which is correct Reply Alan Pater says: 2014/04/30 at 8:46 PM Undo? Is there a way to undo the uid configuration? I suspect that changing the uid’s is not compatible with the recent change to cgmanager (from cgroup-lite) in Ubuntu 14.04. At least on this system, the depreciation and automatic removal of cgroup-lite results in all kinds of weird behaviour. Reply wonderwoman says: 2014/05/14 at 11:38 AM A quick question… when doing useradd, sub uid/gids are enumerated, and then if I do usermod some more, eg.: cat /etc/subuid test1:150000:50001 test1:165537:65536 test2:231073:65536 test2:150000:50001 SHould there not be constraints or is this as expected? I am just messing around to see if the logic is consistent and I am not sure. Now two users map same subuids and also each user has redundant (in this case) mappings…. Reply Malina says: 2014/05/15 at 11:55 AM Heya.. I notice that if one starts a container (and daemonising it, although I am not sure if that has an effect); and attaches to it, the user’s env is preserved , even if one is “within the container, as root” eg. if one does cp something ~, it sets home to /home/outsidecontaineruser, rather than root 😮 So cd -> can’t change to directory, (as I don’t have the same user in the container)… this surely is a bug? 
Everything is swell, if one ssh’s into the box rather. Reply Jennifer Bell says: 2014/05/22 at 6:04 PM Hi, Thanks for your great instructions. They were invaluable in getting unprivileged LXCs working with Ubuntu 14.04. I have one problem though that I haven’t been able to fix. I sometimes have two bridges that I put LXC containers on, and often one container will be on both bridges in order to route traffic between the subnets. With regular old LXC containers running as root this was never a problem, but when I try to specify two veth interfaces with unprivileged containers, I’m unable to start the container. It always shows the following message: $ lxc-start -d -n mycontainer lxc_container: command get_cgroup failed to receive response This is how I want the networking to be configured and it works fine in a regular privileged LXC: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.hwaddr = 00:16:3e:c4:b4:ca lxc.network.type = veth lxc.network.flags = up lxc.network.link = br1 lxc.network.hwaddr = 00:16:3e:35:fe:6a If I comment out either one of the veth interfaces, then it works: #lxc.network.type = veth #lxc.network.flags = up #lxc.network.link = br0 #lxc.network.hwaddr = 00:16:3e:c4:b4:ca lxc.network.type = veth lxc.network.flags = up lxc.network.link = br1 lxc.network.hwaddr = 00:16:3e:35:fe:6a or: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.hwaddr = 00:16:3e:c4:b4:ca #lxc.network.type = veth #lxc.network.flags = up #lxc.network.link = br1 #lxc.network.hwaddr = 00:16:3e:35:fe:6a my /etc/lxc/lxc-usernet file looks like this: $ cat /etc/lxc/lxc-usernet # USERNAME TYPE BRIDGE COUNT lxcuser veth br0 10 lxcuser veth br1 10 There are no other containers running at the time I try to start this one, and anyway, I don’t believe I am hitting the limit referenced in /etc/lxc/lxc-usernet since either interface can work alone. Are there any existing problems that you know of preventing the use of two veth interfaces in one unprivileged LXC container? Or, are there any additional configuration changes required to LXC or cgroups required to make it work? Or am I missing something else very basic? Thanks, Jennifer Reply Jennifer Bell says: 2014/05/22 at 8:39 PM Yay! I found the solution. Apparently you are allowed to specify a name (like eth0, eth1) in the set of network config lines, and it didn’t used to be required for multiple interfaces but apparently is in this case. This works perfectly: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.hwaddr = 00:16:3e:c4:b4:ca lxc.network.name = eth0 lxc.network.type = veth lxc.network.flags = up lxc.network.link = br1 lxc.network.hwaddr = 00:16:3e:35:fe:6a lxc.network.name = eth1 Reply Arthur says: 2014/05/28 at 5:05 PM Hi, Thanks for this great post, however is it possible to use “phys” network type with unprivileged containers? I have created an unprivileged container with below config without any issue but the 2nd/phys interface isn’t there after the container is started – ~$ cat .config/lxc/eo1.conf lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.hwaddr = 00:16:3e:xx:xx:xx lxc.network.name = eth0 lxc.network.type = phys lxc.network.flags = up lxc.network.link = eth5 lxc.network.hwaddr = 00:1b:21:00:00:01 lxc.network.ipv4 = 192.168.56.1/24 lxc.network.name = eth1 lxc.id_map = u 0 100000 65536 lxc.id_map = g 0 100000 65536 Any advice is appreciated. 
Reply Ivan Ogai says: 2014/06/10 at 7:38 AM In a private container, I have tried to enable FUSE, but when trying to mount with sshfs I get the error: fusermount: mount failed: Operation not permitted The fuse device is present in the container and has the proper permissions. I have this in its config file: lxc.cgroup.devices.allow = c 10:229 rwm lxc.mount.entry = /dev/fuse dev/fuse none bind,optional,create=file lxc.loglevel = 2 lxc.logfile = /home/ivan/.local/share/lxc/wuala/lxc.log lxc.cap.keep = CAP_SYS_ADMIN In the host I have added following line to /etc/apparmor.d/lxc/lxc-default. mount fstype=fuse options=(rw, bind, ro, nosuid, nodev, user), Unfortunately nothing is logged in the lxc.log file (not anywhere else either), and the -d option in sshfs doesn’t output more than without. Reply Richard says: 2014/06/11 at 11:10 AM hi, thanks for your work, thats awesome especialy for unprivilege containers. i want to build a server like images.linuxcontainers.org, i read the post and you wrote that all stuff in in the jenkins output, but it s not very clear. i understand we build a root.tar.xz, and meta.tar.xz from images folders , and also the meta folders with descritpions. can you explain or put the jenkins config somewhere? it will be easier with sources than with output Reply Christoph Willing says: 2014/06/16 at 7:40 PM A patched PAM is mentioned as a prerequisite. Can we presume that that prerequisite could be ignored for non-PAM systems? chris Reply Garrett says: 2014/07/10 at 10:20 PM I am having an issue when starting the container: gauthig@main:~$ lxc-start -n test chown: changing ownership of `/dev/pts/10′: Operation not permitted lxc_container: Failed to chown /dev/pts/10 lxc_container: Failed to shift tty into container lxc_container: failed to initialize the container lxc_container: The container failed to start. I have walked through the pre-requisites several times and ensured all steps completed. Any ideals? Host is 14.04 Reply Garrett says: 2014/07/15 at 3:36 PM Found a solution from Chris at Linuxquestions.org: Something in kernel updates after 3.14.5 broke it, but lxc 1.0.5 resolved it. The base Trusty LXC was 1.0.4. Reply Ashvin Goel says: 2014/07/11 at 9:31 AM Hi, I was wondering if there is any security benefit with dropping Linux capabilities with unprivileged lxc containers, and whether it is possible to do so. I tried dropping some capabilities, but the output of /proc/$$/status in a container shows the following, which seems to suggest that all the capabilities are enabled. CapInh: 0000000000000000 CapPrm: 0000001fffffffff CapEff: 0000001fffffffff CapBnd: 0000001fffffffff Thanks Ashvin Reply norwood sisson says: 2014/09/07 at 11:30 AM could you publish a template for an ephemeral, unprivileged container for a gui which would start when a library patron logged in Reply The NeverGone says: 2014/09/07 at 3:36 PM Something is wrong… I made the Ubuntu lucid unprivileged container with Ubuntu 14.04.1 (LXC 1.0.5): $ lxc-create -t download -n ubuntu_10.04 — -d ubuntu -r lucid -a amd64 Setting up the GPG keyring Downloading the image index Downloading the rootfs Downloading the metadata The image cache is now ready Unpacking the rootfs — You just created an Ubuntu container (release=lucid, arch=amd64, variant=default) The default username/password is: ubuntu / ubuntu To gain root privileges, please use sudo. 
And container start… $ lxc-start -n ubuntu_10.04 init: hostname main process (4) terminated with status 1 init: hwclock main process (6) terminated with status 77 init: ureadahead main process (8) terminated with status 5 init: ureadahead-other main process (58) terminated with status 4 mount: mount point /dev/shm is a symbolic link to nowhere mountall: mount /dev/shm [59] terminated with status 32 mountall: Filesystem could not be mounted: /dev/shm Ubuntu 14.04 is fresh install. Reply The NeverGone says: 2014/09/07 at 4:57 PM Workaround: – Delete /dev/shm inside container. – Create /run/shm inside container. – Symlink: /dev/shm -> /run/shm (inside container) Reply The NeverGone says: 2014/09/08 at 5:29 AM Hm, sshd does not work 🙁 $ lxc-start -n ubuntu_10.04 init: hostname main process (4) terminated with status 1 init: hwclock main process (6) terminated with status 77 init: ureadahead main process (8) terminated with status 5 init: ureadahead-other main process (58) terminated with status 4 init: console-setup main process (60) terminated with status 1 init: procps (virtual-filesystems) main process (61) terminated with status 255 init: Failed to spawn ssh pre-start process: unable to set oom adjustment: Permission denied udev: starting version 151 Ubuntu 10.04.4 LTS ubuntu_10.04 /dev/console Reply Esa-Matti Suuronen says: 2014/11/13 at 7:25 AM I was able to get Lucid container to boot by just removing the /dev/shm symlink from the container. Reply anonima says: 2014/09/26 at 9:22 AM In debian 7 doesn’t exist uidmap. Any solution for unprivileged container in D7? Thank you Reply Ovanes says: 2014/10/19 at 2:51 PM I just wonder if there is a way to clone from a global/privileged LXC container into unprivileged one? I currently use Ubuntu 14.04.1 with LXC 1.0.5 on it. Reply Adam says: 2014/11/04 at 11:31 AM Does sudo work on unprivileged lxc container? After adding a user with adduser adam sudo and logging into the container with ssh, when I tried to do “sudo su” I’ve got failure with a responce: adam@p1:~$ sudo su sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the ‘nosuid’ option set or an NFS file system without root privileges? Is it by design, or is it a bug? Anyway, I filed a bug against ubuntu on https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1389305 Reply Badiane says: 2014/11/17 at 4:49 PM I use macvlan’s for my interfaces. It’s been a while since I’ve built any new containers or upgrade the ones I’m using. How will the user namespace affect macvlan based interfaces? Reply EY says: 2014/11/24 at 12:43 AM Any pre-buildt image for Arch linux ? Reply joda says: 2014/12/31 at 2:59 PM On Ubuntu 14.04 I encountered the error message(s): lxc_container: The container failed to start. And from the logs: lxc-start ERROR lxc_cgmanager – call to cgmanager_create_sync failed: invalid request lxc-start ERROR lxc_cgmanager – Failed to create hugetlb:u1 lxc-start ERROR lxc_cgmanager – Error creating cgroup hugetlb:u1 Root cause my cgroups are not setup if you’re logging in over ssh. If your cgroups look like this: lxc@xen-jd1:~$ cat /proc/self/cgroup 11:hugetlb:/ 10:perf_event:/ 9:blkio:/ 8:freezer:/ 7:devices:/ 6:memory:/ 5:cpuacct:/ 4:cpu:/ 3:cpuset:/ 2:name=systemd:/ Check if you’ve somehow disabled or uninstalled the PAM module (libpam-systemd) used to setup cgroups. My problem was I disabled PAM for SSH (/etc/ssh/sshd_config): “UsePAM No” Reply anon says: 2015/01/30 at 6:04 PM Thanks for the article! 
On debian jessie amd64 (with systemd) as a host, I got the following errors: unshare: Operation not permitted read pipe: No such file or directory lxc-create: Failed to chown container dir lxc-create: Error creating container tor ——————— lxc-create 1422657281.890 WARN lxc_log – lxc_log_init called with log already initialized lxc-create 1422657281.891 INFO lxc_confile – read uid map: type u nsid 0 hostid 100000 range 65536 lxc-create 1422657281.891 INFO lxc_confile – read uid map: type g nsid 0 hostid 100000 range 65536 lxc-create 1422657281.894 ERROR lxc_container – Failed to chown container dir lxc-create 1422657281.895 ERROR lxc_create_ui – Error creating container tor This was solved by: echo 1 > /sys/fs/cgroup/cpuset/cgroup.clone_children echo 1 > /proc/sys/kernel/unprivileged_userns_clone Also I needed to apt-get install lxc libpam-systemd cgroup-bin cgmanager All of this only worked when in an login shell, sudo or su failed somehow. Reply anon says: 2015/01/31 at 9:28 AM :/ sorry for the false info, cgmanager was causing problems with systemd. Removed that. After all these steps cgroups were still giving errors: lxc-start: Permission denied - Could not create cgroup '/tor' in '/sys/fs/cgroup/perf_event'. lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/perf_event/ lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/blkio/ lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/net_cls,net_prio/ lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/freezer/ lxc-start: Permission denied - cgroup_rmdir: failed to delete /sys/fs/cgroup/devices/user.slice lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/cpu,cpuacct/ lxc-start: Read-only file system - cgroup_rmdir: failed to delete /sys/fs/cgroup/cpuset/ lxc-start: failed creating cgroups This was fixed by manually changing the directory structure with the following script: https://www.mail-archive.com/lxc-devel@lists.linuxcontainers.org/msg01660.html After this the container starts as well. PS: I think I also needed to do `service apparmor stop` while creating. Reply Pingback: 吐槽:Docker真的好吗? | 我爱互联网 Pingback: Are LXC and Docker secure? | Andrea Corbellini ¯(°_o)/¯ says: 2015/02/25 at 6:12 AM Is it right that for unprivileged LXC I need to start above “lxc-create” and “lxc-start” commands under my user (not root)? The problem is that I don’t see such commands under user in Gentoo (only root sees them). Is it a bug or there’s a way to start unpriviledged LXC under root? Reply ¯(°_o)/¯ says: 2015/02/25 at 2:56 PM It looks like I was missing /usr/sbin inside PATH. After adding it I created unprivileded lxc under user. Reply Pingback: Unprivileged containers in Debian sid | Aikidoka Technologies Eugene says: 2015/03/28 at 9:09 PM I have followed the steps as described and i got the container up and running. Th eonly problem is that I can’t login via lxc-console and i can’t SSH to the container. I’m not sure what is the username/password that I should be using here? I tired the same pair as the user that starts the container but the console does’t accept it. 
The SSH doesn’t work at all – ~ $ lxc-start -n c1 ~ $ lxc-ls –fancy NAME STATE IPV4 IPV6 GROUPS AUTOSTART ————————————————- c1 RUNNING 10.0.3.57 – – NO p1 STOPPED – – – NO ~ $ ssh 10.0.3.57 ssh: connect to host 10.0.3.57 port 22: Connection refused Reply Eugene says: 2015/03/28 at 9:14 PM I probably should add that i used the download template and selected the ubuntu utopic i386 architecture Reply Manuel says: 2015/04/22 at 4:13 PM Hi Stéphane, it seems that the latest Debian/Jessie templates are not usable as unprivileged containers (at least they are not shown as candidates with –list, and cannot be installed). Might be worthwhile pointing out that systemd based distros (such as Debian/Jessie) don’t work with unprivileged containers. Reply Stéphane Graber says: 2015/04/22 at 9:27 PM They actually will appear in the list if you’re running LXC 1.1 or higher. The current list on my system (running as an unprivileged user) is: --- DIST RELEASE ARCH VARIANT BUILD --- centos 6 amd64 default 20150422_02:16 centos 6 i386 default 20150420_02:16 debian jessie amd64 default 20150421_22:42 debian jessie armel default 20150420_22:42 debian jessie armhf default 20150421_22:42 debian jessie i386 default 20150420_22:42 debian sid amd64 default 20150421_22:42 debian sid armel default 20150421_22:42 debian sid armhf default 20150421_22:42 debian sid i386 default 20150421_22:42 debian wheezy amd64 default 20150419_22:42 debian wheezy armel default 20150421_22:42 debian wheezy armhf default 20150420_22:42 debian wheezy i386 default 20150421_22:42 gentoo current amd64 default 20150421_14:12 gentoo current armhf default 20150422_14:12 gentoo current i386 default 20150421_14:12 oracle 6.5 amd64 default 20150421_11:40 oracle 6.5 i386 default 20150419_11:40 plamo 5.x amd64 default 20150421_21:36 plamo 5.x i386 default 20150420_21:36 ubuntu precise amd64 default 20150421_03:49 ubuntu precise armel default 20150421_03:49 ubuntu precise armhf default 20150422_03:49 ubuntu precise i386 default 20150422_03:49 ubuntu trusty amd64 default 20150420_03:49 ubuntu trusty armhf default 20150420_03:49 ubuntu trusty i386 default 20150422_03:49 ubuntu trusty ppc64el default 20150422_03:49 ubuntu utopic amd64 default 20150420_03:49 ubuntu utopic armhf default 20150422_03:49 ubuntu utopic i386 default 20150422_03:49 ubuntu utopic ppc64el default 20150422_03:49 ubuntu vivid amd64 default 20150422_03:49 ubuntu vivid armhf default 20150422_03:49 ubuntu vivid i386 default 20150421_03:49 ubuntu vivid ppc64el default 20150422_03:49 Reply itsso says: 2015/10/19 at 9:18 PM Stéphane, would you be able to help or point me to a location where I can find information about the following issue. Thank you. I have been running for months multiple ubuntu trusty and utopic unprivileged containers created with the download template. Today I tried to create a vivid machine but the create command failed. It seems the issue is that now I am only able to access the trusty and precise images, even utopic is gone, when running lxc-create as unprivileged user. Root is able to see all ubuntu images. The host is ubuntu 14.04.3, fully patched. 
user@ubuntu:~$ lxc-create -n x-vivid -t download -- --flush-cache -d ubuntu Setting up the GPG keyring Downloading the image index --- DIST RELEASE ARCH VARIANT BUILD --- ubuntu precise amd64 default 20151019_03:49 ubuntu precise armel default 20151018_03:49 ubuntu precise armhf default 20151018_03:49 ubuntu precise i386 default 20151019_03:49 ubuntu trusty amd64 default 20151019_03:49 ubuntu trusty arm64 default 20150604_03:49 ubuntu trusty armhf default 20151019_03:49 ubuntu trusty i386 default 20151019_03:49 ubuntu trusty ppc64el default 20151019_03:49 --- Release: Reply Pingback: Using LXC containers to (partially) replace Virtual Machines | Testpurposes grobbelaar says: 2015/05/07 at 4:06 AM Hi, a few questions: 1) do I need to have a unique uid/gid range in each container? 2) can you describe in details how to convert a basic (privileged) container to unprivileged one? thanks a lot! Reply Pingback: » Linux: Linux LXC vs FreeBSD jail tlangner says: 2015/06/15 at 2:12 PM Hey Stephane, thanks a lot for your great guide. I managed to set everything up as you said but I have one problem left and that concerns the access rights of files created inside the container. Ideally, I would love to be able to create the files such that they are owned by the user whose uid got mapped into the container through the virtual IDs, call him $USER with $UID. However, by design, I do not have access to $UID from within the container. Moreover, from outside of the container, I need root access to chown the files as they are not owned by $UID but rather by some virtual $UID. Is there any way how I can accomplish my goal? Thanks in advance, best, Tobias Reply COLABORATI says: 2015/07/03 at 6:12 PM How can I install a template for running alpine linux in an unprivileged container? Reply COLABORATI says: 2015/08/16 at 8:45 PM How to mount directories from the host into an unprivileged container? I just added a line like this lxc.mount.entry = /home/user/sites home/ubuntu/sites none bind 0 0 to the file /home/user/.local/share/lxc/ubuntu-14-dev/config it seems to work, however all files and directories belong to nobody/nogroup – this makes usage a little bit complicated 🙂 How do I get the mapping right? I tried to figure, but got lost. I want the sites directory to be the same owner/group like on the host. Security is not of concern, this is a dev and test environment, however I would like to have a shared directory. And thank you very much for your nice guides! Reply COLABORATI says: 2015/08/17 at 1:01 PM GOT IT! This works: lxc.id_map = u 0 100000 1000 lxc.id_map = g 0 100000 1000 lxc.id_map = u 1000 1000 1 lxc.id_map = g 1000 1000 1 lxc.id_map = u 1001 101001 64535 lxc.id_map = g 1001 101001 64535 lxc.mount.entry = /home/user/sites home/ubuntu/sites none bind 0 0 Reply Noah F. San Tsorbutz says: 2015/08/19 at 6:02 PM It would be welcome to see less cryptic examples. Since Linux/Unix allow colons in file names, the one at the end of this comment makes it seem as if there should be two files found in the /etc/ directory named: subgid:user_name:100000:65536 subuid:user_name:100000:65536 rather than two files named: subgid subuid with identical contents: user_name:100000:65536 It took quite a bit of time to grok through that, and figure out what wasn’t configured correctly. A few of your readers may be Unix zen gurus, but as for “the rest of us”, well … However, thanks for the effort, any help getting LXC working is valued, since it doesn’t exactly run itself! 
stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null /etc/subgid:stgraber:100000:65536 /etc/subuid:stgraber:100000:65536 Reply chris says: 2015/09/17 at 6:14 AM Cheers Stephane, Used this page as a reference a few times now. Chris Reply Pingback: LXC unprivileged containers on Ubuntu 14.04 LTS | Ice and Fire – by J‑C Berthon Pingback: TECNOLOGÍA » Parallel Universe Pingback: Capsule Shield: A Docker Alternative Tailor-Made for the JVM | Voxxed francois bussery says: 2015/11/04 at 1:09 PM Strange, it seems that in unprivileged containers, cap.keep & cap.drop is not working. Seems impossible to enable sys_admin & net_admin. Is it a feature? Reply Art says: 2015/11/23 at 3:16 AM What happened to the Lucid container, I’m trying to migrate and old system to trusty and need lucid as an interim container. If you no longer have a copy, how can I build the container myself. Thanks, Art Reply Pingback: 一年之后重新审视 Docker —— 根本性缺陷和炒作-IT大道 Pingback: How to Build an Ubuntu Container on Arch Linux | Medicine's Blog Pingback: Linux containers and user namespaces – Part 2 | Endocode AG Pingback: Why do I need to start services using sudo when logged in as root in an LXC container? – Internet and Tecnnology Answers for Geeks Pingback: [ASK] server - Why do I need to start services using sudo when logged in as root in an LXC container? | Some Piece of Information Pingback: Why do I need to start services using sudo when logged in as root in an LXC container? - TecHub Jason says: 2016/03/27 at 12:54 PM When I download a debian wheezy amd64 lxc container and log in from debian host to unprivileged container, when I run `cd` from the initial root directory, it says, bash: cd: /home/my_host_username: No such file or directory Why is it trying `cd` into my host username? What separation logic am I missing that isn’t getting added. Reply Stéphane Graber says: 2016/03/27 at 2:19 PM Unless you pass –clear-env to lxc-attach, your environment is kept when attaching into the container, which in this case includes the HOME environment variable. Reply Csaba Dobo says: 2016/04/01 at 4:01 AM Hi, I just started experimenting with lxc. I installed ganesha-nfs in there and would like non container-based hosts to mount a share, shared from a container. 
I am running into errors, like this: client side: http://pastebin.com/sUi5ZHLc root@virt0:/mnt/nfs# mount.nfs -vvv 172.19.8.25:/home/exports/company2 /mnt/nfs mount.nfs: timeout set for Thu Mar 31 15:22:54 2016 mount.nfs: trying text-based options ‘vers=4,addr=172.19.8.25,clientaddr=172.19.8.131’ mount.nfs: mount(2): Operation not permitted mount.nfs: trying text-based options ‘addr=172.19.8.25’ mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: trying 172.19.8.25 prog 100003 vers 3 prot TCP port 2049 mount.nfs: prog 100005, trying vers=3, prot=17 mount.nfs: trying 172.19.8.25 prog 100005 vers 3 prot UDP port 33487 mount.nfs: mount(2): Invalid argument mount.nfs: an incorrect mount option was specified tail -f /var/log/syslog … Mar 31 15:22:49 virt0 kernel: [2504001.096982] NFS reply getattr: -22 Mar 31 15:22:49 virt0 kernel: [2504001.096984] nfs_create_server: getattr error = 22 server side: http://fpaste.org/348098/49704014/ so severeal lines like this: vfs_open_by_handle :FSAL :DEBUG :vfs_fs = / root_fd = 5 vfs_open_by_handle :FSAL :DEBUG :Failed with Operation not permitted openflags 0x00000000 vfs_fsal_open_and_stat :FSAL :DEBUG :Failed with Operation not permitted open_flags 0x000 [ ands with this: getattrs :FSAL :DEBUG :Failed with Operation not permitted, fsal_error other martinetd: so what does this tell you, what do I need to set? here is the complete log: http://fpaste.org/348098/49704014/ Someone suggested to set some capabilities-wise, but I have no clue as to what should I set. ould you direct me to the right direction? thx Csaba Dobo Reply jtlpa says: 2016/04/17 at 6:15 PM Debian Jessie 8.4 64b, LXC 1.0.6-6+deb8u2 unprivileged container issue Can create manage and use Privileged container just fine using this syntax: # lxc-create -t debian -n test Can create UnPrivileged container w/o any error using this syntax: # lxc-create -t download -n test2 — -d debian -r jessie -a amd64 When logged in as ROOT can use test2 privileged container just fine. # lxc-start -n test2 -d However,when logged in as USER and $ lxc-start -n test2 -d lxc_container: Executing ‘/sbin/init’ with no configuration file may crash the host Have tried the latest debian backports version of LXC, which chokes all over the place so that is not an option. As far as I can tell this IS NOT a syntax issue, CONFIG file issue or setup issue. Have verified ALL required files are available in the container folder and $home folders. Have spent days Googling for a solution to no avail. Would be grateful for any debugging solutions. Reply JTL says: 2016/04/18 at 7:26 PM After even more Googeling found these instruction will not work for Debian Jessie 8.3 64bit. See this article for more details: https://myles.sh/configuring-lxc-unprivileged-containers-in-debian-jessie/ Reply Pingback: How to create unprivileged LXC containers in Ubuntu 14.04 – I Learned How To…
1587 commented 7 years ago

https://jenkins.linuxcontainers.org/job/lxc-template-debian/arch=armel,release=jessie,restrict=lxc-priv,variant=default/1315/consoleText