Closed (JedMeister closed this issue 4 years ago)
FWIW, this actually appears to be an LXC bug. The newer version of one of the LXC packages (sorry, I forget the exact name) in Stretch resolves this, so once Stretch is released and Proxmox releases their v5.0, it should be resolved.
apt-get remove postfix

So, I believe the main issue is the postfix package in TurnKey.
Hi @jodumont - thanks for your input!
Very interesting! Perhaps there's some bug in postfix which makes it not happy to run in a non-privileged container? I'm hoping that we'll have something Stretch based to play with soon, perhaps it's been resolved in Stretch already?
solution from bogo22 in the proxmox forum works for me: https://forum.proxmox.com/threads/unprivileged-containers.26148/page-2
rm /var/spool/postfix/dev/random
rm /var/spool/postfix/dev/urandom
touch /var/spool/postfix/dev/random
touch /var/spool/postfix/dev/urandom
Then add the following to the container config (e.g. /etc/pve/lxc/ct100.conf):
lxc.mount.entry: /dev/random dev/random none bind,ro 0 0
lxc.mount.entry: /dev/urandom dev/urandom none bind,ro 0 0
lxc.mount.entry: /dev/random var/spool/postfix/dev/random none bind,ro 0 0
lxc.mount.entry: /dev/urandom var/spool/postfix/dev/urandom none bind,ro 0 0
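The workaround above can be collected into one host-side script. This is a sketch only; the function name `apply_postfix_dev_workaround` is mine (not from the thread), and the two arguments are the container's root filesystem and its Proxmox config file:

```shell
#!/bin/sh
# Sketch of bogo22's workaround, to be run as root on the Proxmox host.
# apply_postfix_dev_workaround is an illustrative helper, not an existing tool.
apply_postfix_dev_workaround() {
    rootfs="$1"    # container root filesystem, e.g. /var/lib/lxc/100/rootfs
    conf="$2"      # container config, e.g. /etc/pve/lxc/ct100.conf

    # Replace the device nodes inside the Postfix chroot with empty
    # placeholder files that the bind mounts below can cover.
    for f in random urandom; do
        rm -f "$rootfs/var/spool/postfix/dev/$f"
        touch "$rootfs/var/spool/postfix/dev/$f"
    done

    # Bind the host's /dev/random and /dev/urandom into the container
    # and into the Postfix chroot.
    cat >> "$conf" <<'EOF'
lxc.mount.entry: /dev/random dev/random none bind,ro 0 0
lxc.mount.entry: /dev/urandom dev/urandom none bind,ro 0 0
lxc.mount.entry: /dev/random var/spool/postfix/dev/random none bind,ro 0 0
lxc.mount.entry: /dev/urandom var/spool/postfix/dev/urandom none bind,ro 0 0
EOF
}

# Example (on a real host):
# apply_postfix_dev_workaround /var/lib/lxc/100/rootfs /etc/pve/lxc/ct100.conf
```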
I just thought I'd update this old issue with some further info.
Firstly, I've removed the 'gitlab' tag and replaced it with 'core' as this issue actually affects all TurnKey appliances when attempting to run within an unprivileged container.
Personally I still consider this a limitation/shortcoming of LXC (if not a bug). According to a comment on the LXC forums the issue is resolved properly within the kernel in v4.18. Hopefully Debian Buster will ship with a v4.18+ kernel so that it works as is within an unprivileged container (once we get to Debian Buster based systems).
In the meantime, it's clear that the specific issue is the mknod command within the default application chroot that Postfix is set up within (I assume that is the default Debian Postfix install as I'm pretty sure we don't do anything special there). It might perhaps be worth trying to install Postfix in a vanilla Debian Stretch container and see what happens?!
It may be possible to work around it template side via a mount bind (in /etc/fstab) or perhaps removing the chroot Postfix config for LXC containers? Regardless, the above workaround still works.
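On the "removing the chroot Postfix config" idea: in Debian's /etc/postfix/master.cf the fifth column is the chroot flag, and on Postfix 2.11+ it can be flipped for all services with `postconf -F '*/*/chroot=n'` followed by a reload. Below is a self-contained illustration of the same column edit; `disable_chroot` is a hypothetical helper of mine, not part of Postfix:

```shell
# On Postfix >= 2.11 the supported way to take services out of the chroot
# (inside the container) is:
#   postconf -F '*/*/chroot=n' && postfix reload
# As a testable sketch of the same idea, rewrite column 5 ("chroot") of each
# service line; disable_chroot is a hypothetical helper, not a Postfix tool.
disable_chroot() {
    # Service lines start with a letter; comments and indented continuation
    # lines are printed unchanged. Note: awk collapses column alignment.
    awk '/^[a-zA-Z]/ { $5 = "n" } { print }' "$1"
}
```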
A post on our forums prompted me to do a little more investigation of this issue.
It turns out that if you install Postfix in a running "unprivileged" container, it simply skips the creation of the /var/spool/postfix/dev/random and /var/spool/postfix/dev/urandom files within its chroot.
So on face value that (removing the offending files) appears to be a decent workaround and one which we could consider including OOTB. That would allow our Proxmox/LXC builds to "just work" in unprivileged containers.
FWIW it also explains why installing Postfix after creation of a container "just works" (at least on face value), plus why the default Ubuntu and Debian LXC/LXD containers don't hit the same issue (they don't include Postfix OOTB AFAIK).
However, after a little more digging, it appears that these files are being created as part of the installer to resolve a Postfix bug relating to use with LDAP (see extended discussion on Ubuntu bug tracker here plus the relevant bug in Debian).
So to do it properly, the next step probably should be to see if we can somehow reproduce the same files, but by another path (perhaps a mount --bind?) that will allow the Postfix bugfix to remain in place, so Postfix will continue to work with LDAP but won't stop the container from running as unprivileged.
Alternatively, we could consider just removing the files for now (only on LXC build; and carefully noting that so if any LXC users have issues with Postfix they will hopefully find the info) and reinstating the files (i.e. removing the code that removes them) once we can confirm that usage with the 4.18 kernel resolves this issue.
With this issue, is it known when Buster will be released? It took me a few hours to track this down after trying to install Syncthing. Just now seeing all these TurnKey apps, I think it's an awesome idea! But again, do we know when Buster will be released, so that this will most likely become a non-issue?
As noted above, there are a number of workarounds. The quickest and easiest is to install the containers as "privileged" (i.e. uncheck the "unprivileged container" checkbox during the creation process).
If you don't want to run it as a privileged container, then so long as you don't need LDAP integration for Postfix, you can remove the /var/spool/postfix/dev/random and /var/spool/postfix/dev/urandom files and all should be good. Either do that by modifying the filesystem prior to launch, or by launching as a privileged container, removing the offending files, taking a backup, and restoring the backup to a new container as an "unprivileged container".
Another option (especially if you need LDAP Postfix integration) is to set up the Proxmox host as noted above.
> The quickest and easiest is to install the containers as "privileged" (i.e. uncheck the "unprivileged container" checkbox during the creation process).
It should be mentioned that LXC's position is that unprivileged containers are not and cannot be root-safe, and only fix known vulnerabilities if the implementation is considered trivial.
@ddimick - For sure. Perhaps I should have been clearer on that.
FWIW the only time that is an issue is when you share the root login credentials of the container with someone you don't trust or a privilege escalation bug exists within the guest or host. So long as security updates are applied in a timely fashion (they're auto installed daily within TurnKey; you'll need to manually apply them to the host) then that should minimise any risks. Running a privileged container is not fundamentally different to running those same services directly on the host. No services should be running as root, ever!
The assumption is that when you install an LXC container, you are running as root (whether logged in as root or via sudo/su) on the host already. So there isn't really anything that you can do within the container that you couldn't do on the host. Obviously the additional services within the guest do broaden the potential attack vectors (hence why installing sec updates regularly is important).
I would argue that even as an "unprivileged" container, the LXC security model is not sufficient to allow untrusted users root access within a container. Back in the days of OpenVZ, things were different, but the kernel patches required for that were never merged into mainline (LXC leverages the ones that were). And sadly OpenVZ died...
Personally, I would not give anyone root access within a container who I wouldn't trust to have root access to the host. It's also worth noting, that running services within a container is not fundamentally different to running those same services on the host. It provides a degree of separation, but not real isolation. If you need to ensure full isolation, then I would advise you to use a "proper" VM instead! (And even then you need to ensure that you keep up with security patches for your VM software of choice)
> It should be mentioned that LXC's position is that unprivileged containers are not and cannot be root-safe, and only fix known vulnerabilities if the implementation is considered trivial.
@ddimick: Am I misreading your link? Or does it say the opposite of what you said?
From the link: "we do consider [unprivileged containers] to be root-safe and so, as long as you keep on top of kernel security issues, those containers are safe."
Did you mean to say that LXC's position is that PRIVILEGED containers are unsafe?
@loneboat - Yes I'm pretty sure that's what @ddimick meant! :smile:
Yup, sorry about that.
Is there a permanent solution to this in order yet?
@joshuamallow - Yes, sort of...
If you need to have Postfix authenticate via LDAP, then there is no way to run a TurnKey (or Debian/Ubuntu/etc) LXC container unprivileged, as Postfix runs within a chroot, which needs access to /dev/random and /dev/urandom.
If you don't need that, then yes you can run a TurnKey LXC container as unprivileged, although unfortunately the setup is a little convoluted. (Postfix will still work fine, just that you won't be able to authenticate via LDAP).
We do plan to make that the default for our upcoming v16.0 release, although there isn't any ETA on that currently (it'll be ASAP).
To run TurnKey within an unprivileged container, you need to first launch a privileged container. Then remove the random and urandom files from the Postfix chroot and then backup the container. You can then launch a new unprivileged container from the backup you created.
If you wish to leave the server uninitialised (so the firstboot scripts run at boot time) rather than logging in directly to remove the files, (assuming a Proxmox host) enter via pct and run the following lines to remove random/urandom:
rm /var/spool/postfix/dev/random
rm /var/spool/postfix/dev/urandom
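The whole dance above (mount, remove the files, unmount, backup, restore) can be sketched as one host-side script. It uses only Proxmox's pct and vzdump commands, but is untested here and wrapped in a function (the name `relaunch_unprivileged` is mine) so nothing runs on sourcing:

```shell
#!/bin/sh
# Untested sketch of the privileged-then-unprivileged dance on a Proxmox host.
# relaunch_unprivileged is an illustrative name; pct mount exposes the
# container's root filesystem at /var/lib/lxc/<ctid>/rootfs by default.
relaunch_unprivileged() {
    ctid="$1"
    pct mount "$ctid"
    rm -f "/var/lib/lxc/$ctid/rootfs/var/spool/postfix/dev/random" \
          "/var/lib/lxc/$ctid/rootfs/var/spool/postfix/dev/urandom"
    pct unmount "$ctid"
    vzdump "$ctid"
    # Finally, restore the dump as a new unprivileged container, e.g.:
    #   pct restore <new-ctid> <dump-file> --unprivileged 1
}
```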
Thanks! I used the "pct mount ###" and went through the shell to remove those two files. Then restored the backup with unprivileged checked and all is well!
It is a big nasty surprise that this issue remains in 2019.
I'm running a fresh new Debian 9/Stretch (yes, I know that Buster is out) with PVE 5.4, without Postfix because I'm just using ssmtp. I'm trying to deploy _debian-9-turnkey-openvpn_15.1-1amd64.tar.gz from the TurnKey template repo and naturally selecting ☑ unprivileged CT.
With ☐ unprivileged CT disabled ... everything works.
While reading the comments here in this issue ... none of the solutions sounds very "turn-key", so I think the devs should fix this properly.
@dmnc-net thanks for your input and feedback.
Whilst we do intend to implement a workaround for this issue, it's not a super high priority item, as the user-side workaround is well documented and pretty straightforward (albeit a bit of a PITA), both here and on the Proxmox forums.
Please note that it's not that we don't care about this issue, just that we're a small team with a lot on our plate. Whilst I get that this is an annoyance, because of our small team and massive amount of competing priorities, we always need to carefully decide how we spend the limited time we have available. FWIW currently my main focus is getting our v16.0 / Debian Buster based release available. I was intending to implement this "fix" (i.e. work around the changed Proxmox defaults and limitations of LXC).
It's probably also worth noting that if you are really concerned about security, whilst an "unprivileged" container helps, the only real answer if you need solid security and true isolation, is to use a "proper" VM (i.e. a KVM VM within Proxmox). Our ISO should install to a "proper" VM, no problems ("proper" VMs don't have the same limitations as LXC).
Proxmox 6.0 which was live in June uses Buster but this issue still exists even as 6.1 was released earlier this month. Considering that it affects every turnkey template seems to suggest it's not a minor issue. The workaround might seem straightforward but flies in the face of the entire "turn key" nature of these templates. Do consider making it a higher priority issue.
Hi @joshuakoh7 - Thanks for your input and feedback. You are right, but as explained previously, we're a small team with a lot on. And my current priority remains pushing v16.0 out the door (which is so far behind schedule, it's not even a little bit funny...).
Although having said that, there has been code to address this sitting there for quite some time now. It really needs some more testing before we roll it out, but I guess seeing as we're currently getting close(r) to a v16.0 release (which will require testing and possibly some refactoring of the LXC build anyway), what I'm going to do is merge that code and adjust as need be when we test the v16.0 LXC build.
[edit] To be really explicit; note that it is almost certain that there will be no more v15.x releases, so this won't be included until v16.0 LXC builds are released (at this stage, it's almost certain that they won't be available until early next year).
1. Create the container with unprivileged = no, but don't start it.
2. vzdump 100 --exclude-path /var/spool/postfix/dev/random --exclude-path /var/spool/postfix/dev/urandom
3. Restore the backup with unprivileged = yes.

ref: https://forum.proxmox.com/threads/unprivileged-containers.26148/page-2#post-248550
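That vzdump route can be sketched end-to-end with Proxmox CLI commands. The function name `vzdump_without_postfix_devs` and its arguments are illustrative; it is wrapped in a function so the untested host-only commands don't run on sourcing:

```shell
#!/bin/sh
# Untested sketch of the vzdump --exclude-path route on a Proxmox host.
# vzdump_without_postfix_devs is an illustrative name, not an existing tool.
vzdump_without_postfix_devs() {
    ctid="$1"      # privileged container that was created but not started
    newid="$2"     # ID for the new, unprivileged container
    dump="$3"      # path of the dump produced by vzdump (e.g. under /var/lib/vz/dump/)

    # Back up the container, skipping the files that break unprivileged start.
    vzdump "$ctid" \
        --exclude-path /var/spool/postfix/dev/random \
        --exclude-path /var/spool/postfix/dev/urandom

    # Restore the dump as a new container with unprivileged = yes.
    pct restore "$newid" "$dump" --unprivileged 1
}
```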
FWIW the new TurnKey Linux v16.0 LXC templates run fine as unprivileged containers.
However, the v16.0 containers won't run as privileged unless you configure them as "nested". See https://github.com/turnkeylinux/tracker/issues/1452
Reported via email.