mailcow / mailcow-dockerized

mailcow: dockerized - 🐮 + 🐋 = 💕
https://mailcow.email
GNU General Public License v3.0

LXC/LXD Support #4215

Closed pquan closed 2 years ago

pquan commented 2 years ago

Summary

Support for LXC/LXD containers as an official platform (that is, Docker running nested inside an LXC/LXD container).

Motivation

Mailcow can work in an LXC/LXD container environment. This is useful for sharing resources and on systems where KVM/VMware is not practical.

The Linux kernel supports nested containerization, and mailcow works properly when nesting is enabled in LXC/LXD. This is mainly because Docker supports running under LXC.

Almost no effort is required to run mailcow in LXC, and I have been running it under LXC (Proxmox) for several years now.

I can help with integration and support as LXC is my main platform and use case. I can write the required documentation for a full install under LXC.

mkuron commented 2 years ago

LXC was historically a source of many Mailcow issues. Apparently it did have meaningful differences from regular Docker that caused some things to behave weirdly. If that is sorted out by newer versions of LXC, then we could certainly support it in Mailcow, as long as it doesn't require any nontrivial changes. But before doing that, you should have a look at those old issues and confirm that they are no longer happening.

pquan commented 2 years ago

Just to clarify, by LXC support I mean nothing more than docker-ce running inside an LXC container. I do not intend to run mailcow straight on top of LXC containers. I encapsulate the docker-ce Debian (and Ubuntu) server inside an LXC container, then let mailcow do its thing with Docker. It works, and works nicely.

I will have a look at the open LXC issues. Please let me know if you have particular issues in mind because, unfortunately, the search term "LXC" matches all issues, as it is contained in the official mailcow issue template (the part that says: "Virtualization technology (KVM, VMware, Xen, etc - LXC and OpenVZ are not supported)").

What I can say now is that I have been running docker-ce + mailcow under LXC for quite some time. I was using mailcow non-dockerized before and only moved to the dockerized version once it was possible to run it under LXC. It has been a couple of years (from memory), and I have more than one mission-critical installation that has been working properly during this time. I'm migrating more instances to LXC + dockerized as we "speak" and would like to have it as a supported platform.

andryyy commented 2 years ago

"My hoster updated the kernel, everything is broken now"

"My previous hoster used LXC, too, it ran just fine. My new hoster, also LXC-based, does not work, why?"

"Why can't you make it work on every LXC container?"

These are some of my top reasons against "supporting" LXC. We would require this and that, and people will complain anyway because they can't get their hoster to implement it.

Just adapt to the requirements, please. :)

crazytil commented 2 years ago

I have made it work in LXC on Proxmox with the following settings:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.cgroup.devices.allow: b 7:* rwm
lxc.cgroup.devices.allow: c 10:237 rwm
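
For anyone reproducing this: as far as I know these lines belong in the container's config file on the Proxmox host, followed by a container restart (101 below is just an example ID):

# on the Proxmox host; 101 is an example container ID
nano /etc/pve/lxc/101.conf      # add the lxc.* lines above
pct stop 101 && pct start 101   # restart so the settings take effect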

andryyy commented 2 years ago

This kind of info would rock in the docs. But I don't feel good about officially adding support for LXC.

pquan commented 2 years ago

I have made it work in LXC on Proxmox with the following settings:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.cgroup.devices.allow: b 7:* rwm
lxc.cgroup.devices.allow: c 10:237 rwm

With recent LXC, all you need is to enable nesting and keyctl in the Proxmox settings. You don't need to edit the LXC profiles, and you can actually run in an unprivileged container. That means no root inside LXC. And it is all done via the Proxmox web interface. Life is good :)
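
For reference, this is roughly what that looks like from the Proxmox host shell (the container ID 100 is just an example; the same toggles sit under the container's Options > Features in the web UI):

# on the Proxmox host; 100 is an example container ID
pct set 100 --features nesting=1,keyctl=1
pct stop 100 && pct start 100   # restart so the new features take effect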

@andryyy:

"My hoster updated the kernel, everything is broken now" / "My previous hoster used LXC, too, it ran just fine. My new hoster, also LXC-based, does not work, why?" / "Why can't you make it work on every LXC container?"

I honestly think these people are best served by your paid service, and their life would be much happier too ;) If we lower the bar too much, you end up spending your time explaining to people what SMTP is and how it works, and that is not necessary.

Perhaps we can limit support to specific platforms, so we say we support Proxmox/LXC instead of "LXC" in general? A requirement would be that one has control of the physical server when running under LXC, as there are settings you need to make outside the LXC container itself, like enabling nesting.

Once this is all documented, it becomes just another option.

PS: Nesting is actually quite well understood and widely used, even in the KVM world. I use it all the time, for example for CI purposes in our development pipeline. Other people I know run Kubernetes under LXD.

andryyy commented 2 years ago

You cannot compare KVM nesting with container nesting imo.

I simply don't want to support it, for the reasons given above. I'm sorry this is not what you want to hear. :/

pquan commented 2 years ago

I'm really sorry you feel this way. Perhaps you will take an offer from me and allow me to take care of this specific environment. For me this is not a game: I'm using the cow in business, in must-run, mission-critical applications, so I have a real use case and real pain if it does not work as intended. I'd rather contribute my time to your project than fork it, as I had to do with the pre-dockerized version. Maybe we can start with a clean sheet?

shiz0 commented 2 years ago

Hm... while it would certainly be nice to extend the supported platforms, I can totally understand @andryyy's doubts and reservations about doing it officially. Doing so would mean also having to give customers with active paid subscriptions (enterprise-grade) support for that feature on various distros/kernels etc., adding a lot of complexity, also for testing!

Wouldn't it maybe be possible to add it to the docs as a "community supported" ("use at your own risk") install method, like the Traefik stuff is, for example? Maybe also alongside a corresponding (support) category in the community forums, so users could ask questions and post problems there? I feel like that may be a good compromise: that way it could be kind of supported, but it would avoid putting the weight of "having to support it" on mailcow's shoulders.

andryyy commented 2 years ago

I am fine with community support.


mkuron commented 2 years ago

Just for my understanding: is it correct that LXD is an alternative container runtime that provides a Docker-like interface, similar to e.g. Podman? And all of these use the same backend (kernel namespaces), with Docker and Podman interacting with them directly and LXD going through LXC, which is the same as what early versions of Docker did?

I think the reason why we had so many issues in the past was that people were trying to run Mailcow on OpenVZ-based virtual servers, which look like full virtual servers but have a slightly limited kernel interface. You could install regular Docker on top of that, but some networking things just didn't work right. LXD running on top of a full Linux kernel would then be something completely different from Docker running on top of OpenVZ, so I would imagine that you will not encounter any of the OpenVZ-related issues in your setup.

pquan commented 2 years ago

Sorry for being late (as usual). I agree OpenVZ is not a good place to run Docker (or anything, actually). The project was more of a hack and became unmaintainable as time passed. The networking was especially troublesome, as was the limited access to some resources.

LXC is much more mature than OpenVZ ever was. The good parts of VServer, OpenVZ and LXC were taken and became the current containerization API in the Linux kernel. LXD is just a manager for LXC. The network part of LXC is fully virtualized and isolated.

The tl;dr is that the kernel exposes a set of containerization APIs (cgroups etc.). You get a virtualized network stack, process space and limited resources. Docker uses that API as a manager; LXC/LXD use it in a similar way. The difference is that LXC supports nesting (Docker can too, in theory). Nesting allows setting up a container so that the same container APIs work inside it. Inside the container, the virtualization is almost perfect. This is how Docker can be run inside LXC.
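
As a quick sanity check (assuming Docker is already installed inside the LXC container), something like the following verifies that the nested container API is actually usable:

# run these inside the LXC container
systemd-detect-virt            # should report "lxc"
docker info | grep -i cgroup   # shows the cgroup driver/version Docker detected
docker run --rm hello-world    # proves nested containers can actually start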

I'd like to stress one point: LXC is useful for running mailcow, but only if whoever runs it also has access to the physical server, since some planning and configuration happens outside the mailcow container. Otherwise, a KVM/VMware-based cloud instance is a better bet for the normal user. Perhaps this was not clear before; I hope it is now.

Community support would be OK for me. Let me organize my notes.

cedric-kicou commented 2 years ago

Hi, should I ask for community support? :-)

I tried to install mailcow in an LXC container on a Proxmox server (mine). It was not a success! I activated nesting and keyctl, but the netfilter Docker container failed to run. I then added:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.cgroup.devices.allow: b 7:* rwm
lxc.cgroup.devices.allow: c 10:237 rwm

After that, netfilter passed, but some other Docker containers still failed.

I am testing right now with a privileged container and that works, but it's not ideal; I would prefer an unprivileged container.

What are the steps to make it run correctly on a Proxmox server, at least?

Thanks

cedric-kicou commented 2 years ago

@pquan Is there any documentation for a full mailcow installation in LXC on a fresh Proxmox setup?

pquan commented 2 years ago

Hi there. This is not the right place for community support, but I'll try to help you. It's actually quite easy to run Docker in an LXC container; just follow the usual Proxmox forum guides. Make sure Docker runs (hello-world) in an unprivileged container (it does work). For mailcow, all you need to do is disable the extended capabilities in the project's docker-compose.yml. They're not supported (or needed) there; mailcow works without them anyway, especially for a "home"-sized installation.

cedric-kicou commented 2 years ago

Hi, thanks for your answer. Maybe we can switch to the correct place to continue this discussion; I made a mailcow community post for this: https://community.mailcow.email/d/1404-mailcow-in-a-proxmox-lxc-container

But I don't understand what you mean by extended capabilities.

Sieboldianus commented 1 year ago

Hi there. This is not the right place for community support, but I'll try to help you. It's actually quite easy to run Docker in an LXC container; just follow the usual Proxmox forum guides. Make sure Docker runs (hello-world) in an unprivileged container (it does work). For mailcow, all you need to do is disable the extended capabilities in the project's docker-compose.yml. They're not supported (or needed) there; mailcow works without them anyway, especially for a "home"-sized installation.

Thanks for all the information @pquan. I am preparing to install mailcow-dockerized in an unprivileged LXC container on Proxmox myself (I have had very good experiences with nesting Docker in LXC on ZFS over the last 5 years: GitLab, Nextcloud etc. all work, so why shouldn't mailcow).

You say "disable extended capabilities" - I looked through the docker-compose.yml, but I don't see "extended capabilities", and this is not mentioned in the docs either. Do you mean a particular setting, or generally everything that could be considered "extended"? I know that in GitLab's docker-compose.yml I had to remove the hostname, because it is a protected hypervisor setting. It would be nice to have a "cleaned" docker-compose.yml with all the settings forbidden for unprivileged containers removed.

codewithmartin commented 1 year ago

AFAIK, it's the cap_add: entries in docker-compose.yml.
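
Purely as an illustration (the service name and capability below are placeholders, not necessarily what your mailcow docker-compose.yml actually contains), disabling them means commenting out or removing blocks like this:

  some-service:
    image: example/image
    # cap_add:          # commented out so the unprivileged LXC container is not asked for extra capabilities
    #   - NET_ADMIN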

codewithmartin commented 1 year ago

By the way, once mailcow can run in multiple copies on one server (one copy for each public IP address), we will not need LXC as a "plan B".

Sieboldianus commented 1 year ago

Thanks! By the way, there's an interesting discussion in Nextcloud all-in-one #1490 regarding nesting Docker in unprivileged LXC on ZFS. This approach looks really promising for fully supporting Docker-in-LXC nesting (including the rather difficult parts: backups, migrations, ZFS support etc.). I am still planning the mailcow setup; no progress so far, but I'll report back.

pquan commented 1 year ago

Well, I'm running Nextcloud as well in the same scenario, for many years now, on mission-critical installations. No problems if you know your systems.

lukasz-zaroda commented 1 year ago

I'll add that LXD has supported zvols on ZFS storage for several months now. Source: https://discuss.linuxcontainers.org/t/lxd-5-11-has-been-released/16443