containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Add option for not deleting bridge device after the last container stops #17844

Open sighoya opened 1 year ago

sighoya commented 1 year ago

Feature request description

After the last container stops, the bridge device vbr0 specified in /etc/containers/network/vnet.json:

{
     "name": "vnet",
     "id": "490cbc8033d707b4754c280ddc62890778c47f48c1ccc3d997971d1856f24bba",
     "driver": "bridge",
     "network_interface": "vbr0",
     "created": "2023-03-11T21:17:14.311300273+01:00",
     "ipv6_enabled": false,
     "internal": false,
     "dns_enabled": false,
     "ipam_options": {
          "driver": "none"
     }
}

gets deleted.

With the cni network backend, it was possible to retain the bridge device after no active podman process remained. Now that cni is being deprecated, it would be nice to have the same behavior for netavark too.

Motivation: I want to share a bridge device with other virtual services.

Suggest potential solution

Specify an option via --opt to retain a precreated Linux bridge, like:

podman network create -d bridge --opt com.docker.network.bridge.retainafterexit=True
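A fuller sketch of how this might be used: precreate the bridge with iproute2, then point podman at it. Note that the retainafterexit option below is the one proposed in this issue and does not exist yet, and --interface-name may not be available in all podman versions:

```shell
# Precreate the bridge outside podman (iproute2):
sudo ip link add vbr0 type bridge
sudo ip link set vbr0 up

# Hypothetical: reuse the existing bridge and keep it after the last
# container exits. The retainafterexit option is only a proposal here.
podman network create -d bridge \
    -o com.docker.network.bridge.retainafterexit=True \
    --interface-name vbr0 vnet
```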

Have you considered any alternatives?

Currently, I restart systemd-networkd to restore the bridge device.
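For reference, a bridge owned by systemd-networkd is typically declared in a .netdev unit, so a restart of systemd-networkd recreates it. The file name and path below are just an example:

```ini
# /etc/systemd/network/10-vbr0.netdev
[NetDev]
Name=vbr0
Kind=bridge
```

An accompanying .network unit matching Name=vbr0 can assign an address and bring the bridge up.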

Additional context

Add any other context or screenshots about the feature request here.

github-actions[bot] commented 1 year ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 1 year ago

@Luap99 PTAL

Luap99 commented 1 year ago

This would need to be implemented in netavark first; I don't see a problem with adding an option. I think @mheon had objections when this came up a while ago: we do not really want to support existing bridges, as we configure firewall rules and sysctls for them, which could affect other users of the interface.

sighoya commented 1 year ago

we do not really want to support existing bridges, as we configure firewall rules and sysctls for them, which could affect other users of the interface.

That's sad to hear.

I think people wanting to use existing bridges prefer to set up sysctl vars and firewall rules themselves, as is the case for me. Having that as an additional option would be nice.

The workaround for me is to restart systemd-networkd to recreate my bridge device, which is not that nice.

Luap99 commented 1 year ago

The proper workaround is to always have at least one interface attached to the bridge. Netavark only deletes the bridge when there are no interfaces attached to it.

sighoya commented 1 year ago

The proper workaround is to always have at least one interface attached to the bridge. Netavark only deletes the bridge when there are no interfaces attached to it.

True, but it's ugly to set up a dummy container just for the sake of not deleting a bridge device.

mlausch1963 commented 1 year ago

Not keeping the bridge around makes it more difficult to access host-based services that I don't want to bind on all interfaces. For example, I want a service to be accessible from the host (bound to 127.0.0.1) and from containers. I usually set an IP address on the bridge and bind the service to this address in addition to 127.0.0.1. Now I need to bind to 0.0.0.0, which also includes the ethernet address, and I need to be more careful with firewall rules.
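The setup described above might look like this; the address is only an example:

```shell
# Give the bridge a host-side address that containers can reach,
# then bind the host service to 127.0.0.1 and this address
# instead of 0.0.0.0.
sudo ip addr add 10.88.0.1/24 dev vbr0
```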

True, but it's ugly to set up a dummy container just for the sake of not deleting a bridge device.

This would also require some dependency work in systemd, to

Is there a workaround to achieve the access to host based services which is less convoluted?