mailcow / mailcow-dockerized

mailcow: dockerized - 🐮 + 🐋 = 💕
https://mailcow.email
GNU General Public License v3.0

option to use TCP/IP instead of unix sockets #2078

Closed zeigerpuppy closed 5 years ago

zeigerpuppy commented 5 years ago

Unix sockets give a small speed boost for inter-container communication, but they reduce the flexibility of the installation and hamper deployment of mailcow in secure or scaled-out Docker environments. The security risks of exposing docker.sock, for instance, are quite severe, potentially compromising all other Docker containers on the same machine.

For instance, inter-container socket access is not permitted in installations with higher security requirements (for example, containers isolated using https://github.com/kata-containers).

Proposed solution: an option to route inter-container communications over TCP/IP would be very useful, or at least a unified config file where inter-container comms are defined. At the moment, modifying multiple Dockerfiles, /data/conf/* files, scripts, and docker-compose.yml is required, which is difficult to maintain and makes it hard to track changes upstream.

It is expected that a custom docker-compose.yml will be needed for a variety of setups but if most of the changes could be moved to this file rather than scattered throughout the service config files this would facilitate alternative configs.

Refactoring all the inter-process comms would probably make maintenance easier even for the standard install!

andryyy commented 5 years ago

The Docker socket is only available to dockerapi-mailcow, which does not pass this socket on to any other container. There is no way to access the socket from within any other container. dockerapi-mailcow is also not available on a public port.

I don't know where a custom docker-compose.yml is needed. I mean... if you heavily modify mailcow, it is, yes. But where do we start and where do we end this? There are many people who want their personal changes in the official repo; accommodating that would be hard.

Which inter-container communication are you talking about here? Rspamd and MySQL?

zeigerpuppy commented 5 years ago

Yes, the main inter-container socket comms that I would like to address are for Rspamd and MySQL.

I have started making changes to the config files to introduce a general connection parameter (${DBCONN}) for the database, preserving the socket connection as the default but allowing the user to specify and configure a TCP/IP database connection when they run generate_config.sh.

The changes have been relatively simple, and in most cases they actually make the config scripts shorter in total length (largely by replacing repeated --socket= definitions with a variable). In some cases I think the config scripts are also more readable thanks to the variables being named explicitly. I want to test a bit more before raising a pull request, but hopefully it's worth considering.
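
To make that concrete, here is a minimal sketch of the pattern (everything except DBCONN is an illustrative name; the socket path is just the common MySQL default):

# Default stays the unix socket; TCP/IP becomes an opt-in set by generate_config.sh.
if [ "${DBDRIVER:-socket}" = "tcp" ]; then
  DBCONN="--host=${DBHOST:-mysql-mailcow} --port=${DBPORT:-3306}"
else
  DBCONN="--socket=/var/run/mysqld/mysqld.sock"
fi
# Scripts then share one definition instead of repeating --socket= everywhere.
# DBCONN is left unquoted on purpose so it expands into separate flags:
mysql ${DBCONN} -u "${DBUSER}" -p"${DBPASS}" "${DBNAME}" -e "SELECT 1;"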

The key merits would be:

  1. generalised config for DB connection parameters allows easily changing the connection type (socket vs tcp) or the mysql/rspamd host
  2. ability to easily use an external database server for scalability
  3. ideally, config files will be shorter and retain readability
  4. current default of socket will be maintained, so no breaking changes for current users

Current limitations:

  1. I haven't changed the config for the rspamd.sock yet
  2. more testing required

zeigerpuppy commented 5 years ago

I have now updated all the scripts and tested that mailcow starts with the TCP/IP options. Everything should remain compatible with the previous socket method too.

There's one issue that I'd like to resolve before raising a pull request. I am having trouble with the connection from syslog-ng to the redis host. I can connect with redis-cli -h $HOSTNAME_REDIS if I enter the docker containers but I am getting the following errors from the syslog-ng instances:

mailcow-postfix      | Dec 30 12:26:35 mail1 syslog-ng[9]: REDIS server error, suspending; driver='d_redis_f2b_channel#0', error='Name or service not known', time_reopen='60'
mailcow-postfix      | Dec 30 12:26:35 mail1 syslog-ng[9]: REDIS server error, suspending; driver='d_redis_ui_log#0', error='Name or service not known', time_reopen='60'
mailcow-dovecot      | Dec 30 12:26:40 mail1 syslog-ng[79]: REDIS server error, suspending; driver='d_redis_ui_log#0', error='Name or service not known', time_reopen='60'
mailcow-dovecot      | Dec 30 12:26:40 mail1 syslog-ng[79]: REDIS server error, suspending; driver='d_redis_f2b_channel#0', error='Name or service not known', time_reopen='60'
mailcow-sogo         | Dec 30 12:26:47 0e4e5bbd6041 syslog-ng[8]: REDIS server error, suspending; driver='d_redis_f2b_channel#0', error='Name or service not known', time_reopen='60'
mailcow-sogo         | Dec 30 12:26:47 0e4e5bbd6041 syslog-ng[8]: REDIS server error, suspending; driver='d_redis_ui_log#0', error='Name or service not known', time_reopen='60'

I have tried changing the host() parameter in syslog-ng.conf in a number of ways, including with/without quotes. I know that I can access redis from these containers as the redis-cli connects properly to the redis host.

example from dovecot, syslog-ng.conf:

source s_src {
  unix-stream("/dev/log");
  internal();
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
  redis(
    host("$HOSTNAME_REDIS")
    persist-name("redis1")
    port(6379)
    command("LPUSH" "DOVECOT_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" progra$
  );
};

The only thing that seems to work is hardcoding the hostname, e.g. host("mailcow-redis"), so I think there must be something I'm missing with syslog-ng pulling environment variables.
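
For reference, the reachability check I mentioned boils down to this (container name taken from the log prefixes above; getent availability depends on the image):

# Confirm that the name resolves and redis answers from inside a container:
docker exec -it mailcow-dovecot sh -c 'getent hosts "$HOSTNAME_REDIS" && redis-cli -h "$HOSTNAME_REDIS" ping'
# Expected: the resolved IP, then PONG.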

zeigerpuppy commented 5 years ago

OK, I solved the above by setting an ARG and doing a sed replacement in the Dockerfiles; logging is working well with an abstracted hostname now.
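
In case it helps anyone else, the substitution amounts to a one-liner in each Dockerfile's build steps (an ARG HOSTNAME_REDIS feeds the shell variable; the config path is the usual syslog-ng location):

# Bake the configured redis hostname into syslog-ng.conf at image build time:
sed -i "s/\$HOSTNAME_REDIS/${HOSTNAME_REDIS}/g" /etc/syslog-ng/syslog-ng.conf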

andryyy commented 5 years ago

A PR will not be accepted. We cannot accept PRs that fit one's needs for their own setup. :( You will need to modify mailcow for your own use case then. There are several reasons not to use TCP; it was discussed especially for SQL in other issues. External SQL is easily possible right now but NOT supported. SQL is not SQL: you cannot connect any SQL instance and expect it to work fine.

The sockets were chosen for a reason. :/ If you prefer secure environments, stop sending everything over TCP anyway.

Feel free to share your solution as PR for others to see or link your forked repo.

igorakkerman commented 5 years ago

@zeigerpuppy Did you finalize your setup? I'd also be very interested in using a configuration that is not dependent on sockets, at the very least when it comes to MySQL.

@andryyy Mailcow is a great product, so I don't quite understand why you are so reluctant to allow a clear separation of the containers. No offense.

zeigerpuppy commented 5 years ago

Hi @igorakkerman, I got about three quarters of the way there but ran into a number of issues. To get system separation I have now put mailcow in a separate VM. It's not particularly elegant, but it works! The changes I made are available here: https://github.com/zeigerpuppy/mailcow-dockerized. They're pretty well documented, so please feel free to try them out and solve some of the remaining issues. @andryyy has been clear that the changes won't be merged, but I guess if it were working we might be able to make a good case for it! All the changes fit into the standard install method and left sockets as the default, so it should have been backwards compatible. Unfortunately I don't have much time to work on it, and I was worried it would take too long to maintain if we diverged from the main branch. I can fully understand @andryyy being reluctant to accept too much divergence; it's a big project as it is!

andryyy commented 5 years ago

We would have provided a single KVM template if we wanted to keep everything in a single Docker container. That’s absolutely not what Docker is for and you lose everything great about it that way.

igorakkerman commented 5 years ago

@zeigerpuppy Thanks for your great work and for pointing me to your repo. I agree that it would not be a good idea to maintain a divergent project. It would be great to integrate the changes in the original project. Could you please clarify what you mean by "putting mailcow in a separate VM"? Do you mean the whole Docker daemon or specific parts/containers?

igorakkerman commented 5 years ago

@andryyy I totally agree that a single Docker container setup would be the worst of all worlds. Instead, it would be great to allow specific services like MariaDB or SOGo to run on separate machines of a cluster. This would be a more microservice-ish approach. For instance, it would allow using a cloud-hosted database, independent of the application instances.

zeigerpuppy commented 5 years ago

Yes, I put the whole Docker daemon in a separate VM. Usually I run Docker on the host with kata-containers. This adds a layer of security by running each container in its own kernel space while retaining compatibility with most docker/kubernetes commands. Unfortunately there are no socket comms between kata-containers currently (although theoretically it could be made to work with a socket daemon).

I have serious reservations about running Docker on bare metal. It's a very convenient system, but privilege escalation/escape is a real possibility, so to keep it secure I just wrapped it all in a KVM instance. I agree with you that for mailcow to be truly scalable, TCP/IP comms are better, as they allow horizontal scaling across servers.

andryyy commented 5 years ago

It will not work with our netfilter implementation. A lot of features would be dropped; I won't do that.

Privilege escalation: How? You need to break out of the application first, then break out of the container (if this is even possible with dropped privileges - depends on the exploit, I guess).

Did you stop installing anything bare metal and only use special isolated containers for everything?

zeigerpuppy commented 5 years ago

Yes, everything on my servers runs in a KVM instance or a kata-container. The hypervisor is just that.

andryyy commented 5 years ago

So you run only a single application inside a single KVM? Or even a single application inside a kata-container inside a KVM? That's a little bit weird. What if there is an exploit to escape a KVM machine or a kata-container?

igorakkerman commented 5 years ago

Ok, I'm running the application on its own AWS EC2 instance, no need for an additional wrapper.

Horizontal scalability is not my main concern. What I care about are automatic DB backups (provided by RDS). It also saves resources to not include the DB on the application machine.

Another argument is the ability to seamlessly switch from one version to another in a blue-green-deployment-like manner, that is, without shutting down either service, not even for seconds.

andryyy commented 5 years ago

We would drop a lot of features, as I said.

You can mount any SQL socket to mailcow (see helper-scripts/ext_sql_sock.docker-compose.override.yml). You could install ProxySQL (https://github.com/sysown/proxysql) and mount its socket to mailcow. On your node you can then define many external SQL backends.
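
The rough shape of such an override, sketched below (the service name and socket path are examples; repeat the mount for every service that talks to MySQL):

# Write a compose override that bind-mounts an external SQL socket, e.g. from
# ProxySQL on the host, over the path mailcow expects for the MySQL socket:
cat > docker-compose.override.yml <<'EOF'
version: '2.1'
services:
  php-fpm-mailcow:
    volumes:
      - /var/run/proxysql/proxysql.sock:/var/run/mysqld/mysqld.sock
EOF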

All TCP communication would need to be encrypted btw...

andryyy commented 5 years ago

@igorakkerman isn't it sufficient to store the SQL volume on their storage system? IIRC they have large, redundant storage arrays. Maybe they can even snapshot it.

igorakkerman commented 5 years ago

Thanks, @andryyy, the approach using ProxySQL should work. Do all Mailcow apps store all their configuration solely in the database?

igorakkerman commented 5 years ago

Sure, you can store the DB volume on a separate EBS (SSD) or even EFS (network drive) and create snapshots of that drive (EBS/SSD). Or you could back up the DB dump to S3. But that involves some extra work. Plus potential issues when upgrading the application...

andryyy commented 5 years ago

Some things are stored in Redis (DKIM, metadata for Rspamd).

zeigerpuppy commented 5 years ago

Escape from KVM is much less likely than escape from Docker. Also, KVM can be run as an unprivileged user. There's a good reason why kata-containers was invented (Intel has been in the lead). Having said that, security is always a game of probabilities. I believe full separation is a good strategy, but that doesn't mean it's always the best one.

zeigerpuppy commented 5 years ago

Here's an example of a 'runc' vulnerability that allows container escape: https://www.zdnet.com/article/doomsday-docker-security-hole-uncovered/

On production servers, I think it's prudent not to run any Docker containers with runc, root privileges, and a shared bare-metal kernel. That's why I suggest running Docker containers with the Kata runtime or virtualised within a KVM machine.
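
For anyone who wants to try it, switching dockerd to the Kata runtime is a small change (paths assume a standard kata-containers install; merge into any existing daemon.json rather than overwriting):

# Register the runtime with dockerd, then opt in per container:
cat > /etc/docker/daemon.json <<'EOF'
{ "runtimes": { "kata-runtime": { "path": "/usr/bin/kata-runtime" } } }
EOF
systemctl restart docker
docker run --rm --runtime=kata-runtime alpine uname -r  # prints the guest kernel, not the host's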

andryyy commented 5 years ago

You need a super critical bug in a service like Dovecot to be able to break into the container first. You need to gain root privileges, too (at least 99% of the time). That's a super bad thing to happen; systems without Docker are doomed at that point.

Now you need to be very lucky to find an exploit for Dockerd. As I said, you probably need root privileges, too. Now you can exploit Docker and get access to the host.

That's such a stupid scenario. Yes, it can happen, and it will hit machines without Docker even worse.

Even in your Kata runtime they will just gain root rights and delete your mail. Who knows, there could also be a Kata exploit out at the same time.

Plus: "To do this, an attacker has to place a malicious container within your system." ...

zeigerpuppy commented 5 years ago

The vector here is from a compromised Docker container (any container) to privilege escalation to root on the host by overwriting runc. This does not require host root privileges, only a compromised container followed by privilege escalation.
This is a fundamental issue for Docker, because containers run on the same kernel as the parent system and can access system files as root. The KVM-encapsulated design of kata-containers is fundamentally different on both counts (it does not run containers as root, and a compromised container will not compromise the host). Docker is a nice system, but there are some fundamental flaws in its security, which this exploit nicely demonstrates.

p.s. I'm not suggesting that mailcow should support kata-containers. In fact, I think it's the other way around, kata-containers should be updated to support a socket mechanism so it works with containers like mailcow which rely on inter-container socket comms.

chriscroome commented 5 years ago

Does the update yesterday help?

18.09.2 : 2019-02-11

Security fixes for Docker Engine - Enterprise and Docker Engine - Community

  • Update runc to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host. CVE-2019-5736

zeigerpuppy commented 5 years ago

It most certainly helps, but it doesn't fix the fundamental issues with Docker (shared kernel, root filesystem access), which will certainly result in more exploits in the future. I may be a very conservative sysadmin, but I don't think Docker is an appropriate system to run on bare metal in production. (It's fairly easy to wrap in a VM, which is a simple mitigation.)

andryyy commented 5 years ago

You need a modified container to begin with. 😄

So running any service on bare metal must be an ABSOLUTE NO-GO for you.

Again: You need a service that can be exploited and is unpatched (like Dovecot, Postfix...). Then you need enough privileges inside the container. THEN you need a generic Docker exploit that gives you access to the host.

When both Docker and Dovecot have a CRITICAL bug so dangerous that you can escape the application and escalate to root, a shit ton of systems are doomed.

But why would Kata not have a critical bug? Just imagine there is a Dovecot AND a Kata exploit going around at the same time.

Moreover, the post above does not even affect mailcow. You need to import a crafted container, then you need to give some stranger access to that container (or run an exploitable application inside it).

zeigerpuppy commented 5 years ago

Yes, there does need to be an initial exploit to undermine runc, but machines running Docker typically host many containers, and only one needs to be compromised for all of them to be compromised.

There is no need to compromise Dovecot, only Docker (which I believe is insecure by design).

I'm sure kata-containers has bugs, but at least it enforces proper separation of privileges (it is much harder to escalate privileges or move laterally between containers) because the containers run on separate kernels. Ultimately, KVM is just more architecturally secure than Docker.

The way that mailcow is affected is that an exploit in an unrelated container can move laterally (and vertically) via runc.

I agree, running any service on bare metal is a vulnerability, which is why the only things running there should be storage and the VM hypervisor. I don't think that's particularly contentious; reducing attack surface is always desirable.