Open sighoya opened 1 year ago
A friendly reminder that this issue had no activity for 30 days.
@Luap99 PTAL
This would need to be implemented in netavark first; I don't see a problem with adding an option. I think @mheon had objections when this came up a while ago: we do not really want to support existing bridges, as we configure firewall rules and sysctls for them, which could affect other users of the interface.
> we do not really want to support existing bridges, as we configure firewall rules and sysctls for them, which could affect other users of the interface.
That's sad to hear.
I think people who want to use existing bridges prefer to set up the sysctl variables and firewall rules themselves, as is the case for me. Having that as an additional option would be nice.
My current workaround is to restart systemd-networkd to recreate the bridge device, which is not that nice.
The proper workaround is to always have at least one interface attached to the bridge. Netavark only deletes the bridge when there are no interfaces attached to it.
> The proper workaround is to always have at least one interface attached to the bridge. Netavark only deletes the bridge when there are no interfaces attached to it.
True, but it's ugly to set up a dummy container just for the sake of not deleting a bridge device.
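For what it's worth, a dummy *interface* (rather than a dummy container) would also keep the bridge populated, and systemd-networkd can create one declaratively. A minimal sketch, assuming the bridge is named `vbr0` (the file names and the `vbr0-keep` interface name are illustrative):

```ini
# /etc/systemd/network/25-vbr0-keep.netdev (illustrative path)
[NetDev]
Name=vbr0-keep
Kind=dummy
```

```ini
# /etc/systemd/network/25-vbr0-keep.network (illustrative path)
[Match]
Name=vbr0-keep

[Network]
# Enslave the dummy interface to the bridge so it is never empty
Bridge=vbr0
```

Whether this actually prevents netavark from removing the bridge depends on the "no interfaces attached" check described above; I have not verified it.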
Not keeping the bridge around makes it more difficult to access host-based services that I don't want to bind on all interfaces. For example, I want a service to be accessible from the host (bound to 127.0.0.1) and from containers. I usually set an IP address on the bridge and bind the service to this address in addition to 127.0.0.1. Now I need to bind to 0.0.0.0, which also includes the Ethernet address, and I need to be more careful with firewall rules.
> True, but it's ugly to set up a dummy container just for the sake of not deleting a bridge device.
This would also require some dependency work in systemd, to
Is there a less convoluted workaround for accessing host-based services?
Feature request description
After the last container stopped, the bridge device vbr0 specified in /etc/containers/network/vnet.json gets deleted.
With cni as the network backend, it was possible to retain the bridge device after the last active podman process exited. Now that cni is being deprecated, it would be nice to have that behavior in netavark too.
Motivation: I want to share a bridge device with other virtual services.
Suggest potential solution
Specify an option via --opt to retain a precreated Linux bridge, like:
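As a sketch of what this could look like: a `podman network create --opt` key would end up in the `options` map of the network definition. The `keep_interface` option below is hypothetical (no such option exists today); `name`, `driver`, and `network_interface` are standard keys in podman's network JSON:

```json
{
  "name": "vnet",
  "driver": "bridge",
  "network_interface": "vbr0",
  "options": {
    "keep_interface": "true"
  }
}
```

Netavark would then skip deleting `vbr0` when tearing down the last container on the network.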
Have you considered any alternatives?
Currently, I restart systemd-networkd to restore the bridge device.