Closed: mskarbek closed this issue 11 months ago
I know this is something we discussed when putting netavark together. It is on our list of things to look at for certain. @mheon PTAL
Also, it is interesting why netavark chose to use iptables rather than the firewalld backend when firewalld is running. With the firewalld backend, netavark could rely on https://github.com/firewalld/firewalld/issues/483 to restore all missing rules.
@mskarbek Please see https://github.com/containers/podman/issues/5431. Also, the firewalld backend will not solve this problem, since firewalld flushes its own rules as well unless you make them permanent, which has the bigger problem of leaking rules after reboot, etc.
@Luap99 That is why I pointed at the D-Bus signal. netavark needs to have its own state and compare it with the current firewall state after handling the reload signal from firewalld.
Listener is definitely the answer, but the question then becomes: where do we put it?
We can't put it in Netavark; Netavark exits immediately after the network is configured.
We can't put it in Aardvark; Aardvark spins down when no containers are using it, and some networks (notably the default one!) don't use it.
Conmon seems like it could be logical, but we'd only want one Conmon process to fire the reload command, and we have one conmon per container. Conmon's Rust rewrite might offer an opportunity to add enough intelligence to make this viable.
We could also write a super-minimal binary with an associated systemd service that would always be running and listening.
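As a rough illustration of that last option, the listener could be as small as a loop over `dbus-monitor` output reacting to firewalld's documented `Reloaded` signal on the `org.fedoraproject.FirewallD1` interface. This is only a sketch of the idea, not netavark's actual implementation; the reaction command is shown commented out:

```shell
#!/bin/sh
# Sketch: react to firewalld's D-Bus "Reloaded" signal by re-applying
# container firewall rules. Assumes dbus-monitor and podman are installed.

handle_events() {
  # Read dbus-monitor output line by line; fire once per Reloaded signal.
  while read -r line; do
    case "$line" in
      *interface=org.fedoraproject.FirewallD1*member=Reloaded*)
        echo "firewalld reloaded, re-applying container rules"
        # podman network reload --all   # uncomment on a real host
        ;;
    esac
  done
}

# On a real system, the loop would be fed by:
#   dbus-monitor --system \
#     "type='signal',interface='org.fedoraproject.FirewallD1',member='Reloaded'" \
#     | handle_events
```

Wrapped in a simple systemd service, this would always be running and would cover every container regardless of which process created it.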
@mheon Well, we could also call `podman network reload containerID` from conmon. In this case every conmon would need to listen on D-Bus for the reload event. Thinking about it, using conmon is better than the other options because it already has the correct `--root` and `--runroot` arguments, so it could also handle containers in non-standard locations.
> Also, it is interesting why netavark chose to use iptables rather than the firewalld backend when firewalld is running? With the firewalld backend, netavark could rely on firewalld/firewalld#483 to restore all missing rules.
This is because @mheon was using their new D-Bus interface and it was/is not complete yet, so we had to follow what was done in the past with CNI: use both. The intent is to back out the iptables stuff for firewalld as soon as the D-Bus code is complete ... AND ... makes it into distributions.
Yeah - firewalld is disabled until the v1.1.0 upstream release, due to a few missing features that have been added, but have not yet made it into a release. Once that happens we can reenable the firewalld backend conditional on firewalld v1.1.0 or higher being available.
@Luap99 Thinking about that more: the downside is that we get one `podman network reload` per running container, so we could potentially burst out 100 separate podman processes when firewalld upgrades, which could be a real strain on system resources.
Any update on this?
We can discuss this further at the F2F - basically, we need to locate a dbus listener somewhere in our code, but Netavark doesn't have a daemon to host one, so we either put it in Aardvark, or potentially Conmon-rs.
Using `NETAVARK_FW="firewalld" podman run <image>`, I get `netavark: Error retrieving dbus connection for requested firewall backend: DBus error: I/O error: No such file or directory (os error 2)`.
Is netavark currently supposed to work with firewalld? I see this comment in the code, while firewalld 1.2.1 is out.
@SecT0uch Please create a new issue or discussion, this is not related to the issue.
This was fixed in https://github.com/containers/netavark/pull/840, in netavark v1.9.
See https://blog.podman.io/2023/11/new-netavark-firewalld-reload-service/ for info on how to use it.
Now we only need to quickly propagate 1.9 to RHEL. ;)
I would assume it will be part of 9.4/8.10 in ~6 months.
> This was fixed in https://github.com/containers/netavark/pull/840, in netavark v1.9. See https://blog.podman.io/2023/11/new-netavark-firewalld-reload-service/ for info on how to use it.
Any ideas whether it can support nftables? Right now we override the systemd service to add a call to `podman network reload --all`. I would much rather have a service do that for me.
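For reference, the kind of override described above might look like the following systemd drop-in. This is a guess at the setup, not a copy of it, and the path is hypothetical; note that `ExecStartPost` only fires on a service (re)start, not on `firewall-cmd --reload`, which goes over D-Bus:

```ini
# Hypothetical drop-in: /etc/systemd/system/firewalld.service.d/podman-reload.conf
[Service]
# Re-apply container port-forwarding rules after firewalld (re)starts.
ExecStartPost=/usr/bin/podman network reload --all
```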
> Any ideas whether it can support nftables?
What exactly do you mean? Using the nftables firewall driver in netavark? In that case, yes.
If you mean when you flush your nftables ruleset, then no: the service is only set up to listen for the firewalld event. However, it should be simple to add a new "oneshot" command to add the rules back, like the firewalld-reload service. So feel free to file a new RFE for that.
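Until such a command exists, a workaround along the lines suggested above could be a oneshot systemd unit that re-applies the rules on demand. A sketch, assuming `podman network reload --all` is the desired re-apply step (the unit name and paths are hypothetical):

```ini
# Hypothetical unit: /etc/systemd/system/netavark-rules-restore.service
[Unit]
Description=Re-apply container firewall rules after an nftables flush

[Service]
Type=oneshot
ExecStart=/usr/bin/podman network reload --all

[Install]
WantedBy=multi-user.target
```

One could then run `systemctl start netavark-rules-restore.service` (or hook it to whatever triggers the flush) to restore the container rules.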
I was referring to the flush issue yes. Thanks, will look into that.
Create a container with `--publish=80:80`. You will get a set of chains/rules in the nft `ip nat` and `ip filter` tables which are, obviously, separate from the firewalld tables. Issue `firewall-cmd --reload` and you have lost all communication with that container. Stopping the container results with:
Is there a way to track nft chains and fix them during the container lifetime that can be incorporated into netavark?
Versions:
Used repo: COPR rhcontainerbot/podman4