ptman opened 4 years ago
Just a remark: If podman is supported (which is perfectly fine from my perspective), a parallel running instance of dockerd should not be broken. Otherwise all containers running on the machine will have to be migrated in one step. This might be impossible if other containers are running in parallel for different tasks.
Just recording my last experience with Podman from some other tests I've done recently.
TLDR: Podman is just not good enough (for our use case) yet.
On CentOS 8, a relatively old `podman` version is being shipped (`1.8.x`), with the latest being `2.2` right now.
Something as simple as:
```sh
podman network create --driver=bridge custom
podman run -it --rm --network=custom -p 80:80 docker.io/nginx:1.19.5-alpine
```
.. leads to the port not being exposed correctly. It's not accessible from the internet.
If the `--network=custom` call is removed and we use the default network, it works. Strange.
On Ubuntu 20.10, there's a `podman` package which is newer than the one in CentOS 8 (yay!), but packaging is broken.
Certain dependencies (`runc`, `iptables`) are not listed as such and need to be installed separately (`apt-get install -y runc iptables`).
`iptables`, okay. Maybe it's optional, as one may wish to run non-networked containers and be fine without it.
`runc` not being included as a dependency seems like a mistake. It doesn't inspire confidence.
After installing these dependencies manually, it seems to work and is not broken like the CentOS 8 package (likely because it's much newer - `2.0.x`, if I remember correctly).
For containers to talk to each other by name (something we rely on and quite enjoy having available with Docker), we need the dnsname plugin.
Installing this plugin is a manual process which requires `git`, `golang` and then `make && make install ...`, along with modifying a network configuration file in `/etc/cni/..` to make it load the `dnsname` plugin for a given network. For the plugin to function, `dnsmasq` also needs to be installed on the host.
I've verified that it works nicely, but this kind of installation procedure is quite ugly.
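For reference, the manual steps boiled down to roughly this (a sketch from my notes, not a definitive recipe; the repo URL is the upstream `containers/dnsname` project, and the exact `/etc/cni/net.d/` filename depends on the network):

```shell
# Build and install the dnsname CNI plugin from source
# (requires git and golang on the build host).
git clone https://github.com/containers/dnsname.git
cd dnsname
make && sudo make install

# The plugin drives dnsmasq at runtime, so it must be present on the host.
sudo apt-get install -y dnsmasq

# Finally, the network's config under /etc/cni/net.d/ needs a "dnsname"
# entry appended to its "plugins" array, along the lines of:
#   { "type": "dnsname", "domainName": "dns.podman" }
```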
An alternative to containers talking to each other by name (using the dnsname plugin) is making them explicitly pass IP addresses to one another.
Example `.service` file snippet:
```ini
[Unit]
Requires=matrix-postgres.service
After=matrix-postgres.service

[Service]
Environment="MATRIX_POSTGRES_IP_ADDRESS=$(/usr/bin/env podman inspect matrix-postgres -f '{{.NetworkSettings.Networks.matrix.IPAddress}}')"
ExecStart=/bin/bash -c '/usr/bin/env podman run \
  --rm \
  --name some-other-service \
  --log-driver=journald \
  --network=matrix \
  --add-host=matrix-postgres:${MATRIX_POSTGRES_IP_ADDRESS} \
  some-other-service-image:latest'
```
Downsides:

- this relies a lot on service ordering, and on containers actually being started, so that we can inspect them. Restarting the upstream service means we need to restart as well, so we can get its new IP address. systemd will take care of that, but the forced downtime can be annoying. For certain kinds of dependency relationships (`matrix-synapse` depending on `matrix-postgres`) it may be okay, but `matrix-nginx-proxy` getting restarted for each and every other service it potentially proxies to -- that's just silly
- related to the above, it may be complicated to manage all these systemd service dependencies and `--add-host` IP address injections
- this inspect format (`{{.NetworkSettings.Networks.matrix.IPAddress}}`) seems to be Podman-version specific. For older Podman versions, it's `{{.NetworkSettings.IPAddress}}`. The latter yields `""` on newer versions.
- this requires a lot of redoing on our part to support such `--add-host` injection. Having to wrap everything in `/bin/bash` calls to pass the environment variable is ugly, and we'll have to deal with some escaping issues as we redo many, many services. There may be a better way to do it though.
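There may indeed be a better way: an untested sketch using `ExecStartPre=` to write an `EnvironmentFile=`, which drops the `/bin/bash` wrapper from `ExecStart=` and can also paper over the version-specific inspect format by falling back to the older one. The `/run/...env` path and service names are just illustrative (note the `$$` escaping, since systemd otherwise expands `$` itself):

```ini
[Service]
# Resolve the postgres container IP before starting, trying the newer
# per-network inspect format first and falling back to the older flat one.
ExecStartPre=/bin/sh -c 'ip=$$(podman inspect matrix-postgres -f "{{.NetworkSettings.Networks.matrix.IPAddress}}"); \
  [ -n "$$ip" ] || ip=$$(podman inspect matrix-postgres -f "{{.NetworkSettings.IPAddress}}"); \
  echo "MATRIX_POSTGRES_IP_ADDRESS=$$ip" > /run/some-other-service.env'
EnvironmentFile=/run/some-other-service.env
ExecStart=/usr/bin/env podman run --rm --name some-other-service --log-driver=journald --network=matrix --add-host=matrix-postgres:${MATRIX_POSTGRES_IP_ADDRESS} some-other-service-image:latest
```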
My conclusion from all of these tests is that Podman is just not good enough (for our use case) yet.
It's poorly packaged (Ubuntu 20.10, likely due to Debian packaging) and basic networking functionality is broken on some important distros (CentOS 8). It's a bad experience on the very distros (RHEL 8 / CentOS 8) that are supposedly pushing hard for it and telling people they can just do `alias docker=podman`. For a "hello world" you can, but your mileage will surely vary.
Our aim with the playbook is to provide a good and straightforward installation experience on a bunch of different distros. Right now, by relying on Docker, the playbook supports CentOS (7, with 8 likely close), lots of Ubuntu versions (going back 4 years, to the ancient 16.04), Debian (latest and a few versions back), and Arch Linux. And for 3 different architectures, even. Docker just works.
I do use `podman` for other purposes (including simpler networked ones), and I do love its daemonless way of doing things. However, it still has quite a long way to go before it's a viable alternative for the things we're doing.
This seems to be relevant: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=971253
Have you tried with packages from kubic? https://software.opensuse.org//download.html?project=devel%3Akubic%3Alibcontainers%3Astable&package=podman
Just adding a link to #1133 for anyone who's interested in podman support as well.
A small update. Looks like Podman v4 is now out, which changes the way networking works. Here's the release announcement.
So, we potentially don't need the `dnsname` plugin installed anymore, and containers are DNS-resolved by their name seamlessly. I haven't tested it yet though, and I'm not sure if such DNS resolution works out of the box or if it needs to be enabled manually somehow.
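If it does work out of the box, the check is presumably as simple as this (untested sketch; the network and image names are arbitrary). From what I've read, the new netavark/aardvark-dns stack enables name resolution by default on user-created networks:

```shell
# Create a user-defined network; with Podman v4 this should get DNS by default.
podman network create testnet

# Start a container with a known name on that network...
podman run -d --rm --name web --network=testnet docker.io/nginx:alpine

# ...then resolve it by name from a second container on the same network.
podman run --rm --network=testnet docker.io/alpine nslookup web
```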
> An alternative to containers talking to each other by name (using the dnsname plugin) is making them explicitly pass IP addresses to one another.
Why not run all the matrix containers in the same pod? Basically they would all share a network namespace, so no need to either access other containers by name or passing the IP addresses. You can just use localhost in the containers to access other containers in the same pod.
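A sketch of that approach (names, images and ports are illustrative): ports get published at the pod level, and containers inside the pod reach each other over `localhost`:

```shell
# All containers in a pod share one network namespace,
# so ports are published on the pod itself.
podman pod create --name matrix -p 80:80

# Postgres is now reachable from every other container in the pod
# at localhost:5432 -- no DNS plugin or --add-host needed.
podman run -d --pod matrix --name matrix-postgres docker.io/postgres:13-alpine

# Another service in the same pod just connects to localhost:5432.
podman run -d --pod matrix --name matrix-synapse docker.io/matrixdotorg/synapse:latest
```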
I just stumbled across your project and its awesome! I don't run Docker on my server though and do not intend to install it, so Podman support would be greatly appreciated :+1:
@ChuckMoe check out Deploy your own Matrix server on Fedora CoreOS at Fedora Magazine
Strange that the article's not been mentioned before here as it's almost as old as this bug report.
@xpseudonym thank you, I will take a look! :) I do actually have almost all of it set up using pods by now. I am only struggling a little with the configuration. Maybe these links will help me with that.
Interesting - are these user pods? Using user pods with a unique user for each app (pod group) seems much more secure - it seems to emulate Android's security model of confining each user-installed app to a unique user: https://source.android.com/docs/security/app-sandbox (idk really, just musing...)
What do you mean by app? Currently I am using one pod for the server and the bridges, and another for the coturn server. I thought about putting all the bridges in different pods as well, but I would have to fiddle with the networking then. It's still not ready though, as there are some higher priority things on my list.
I was using app in a very unsophisticated way - like I might refer to Nextcloud as an app where obviously it's collection of things.
Then yes, you can define different users for every pod and therefore app. Here is my setup https://github.com/ChuckMoe/podman-nextcloud
This is quite far in the future, as ubuntu 20.04 doesn't yet include podman, but:
https://packages.debian.org/bullseye/podman
Probably relates to #300 and #64