Approved with some comments. LGTM, just a couple of nits and clarifying questions.
Left a few comments. Overall I am still concerned with potential race conditions, such as:
* concurrent network creates; or
* a concurrent network create and remove.
Prior to this change, nerdctl seems able to handle these races with locked dirs. However, I don't think that protection can be extended to our iptables setup/teardown.
Added a per-network mutex map to address some of the race concerns: https://github.com/runfinch/finch-daemon/pull/69/commits/9fb2cf3dab15fe2354c568837265f156777f6588. The idea is that for the same network name, multiple concurrent Create/Remove requests are thread-safe, since we keep a map of mutexes keyed by network name. Operations within a single Create/Remove call are also atomic, as the lock is held until the operation completes.
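For illustration, the per-network locking described above boils down to something like the following sketch; the type and function names here are made up for the example, not the actual finch-daemon identifiers:

```go
package network

import "sync"

// networkLocks hands out one mutex per network name so that concurrent
// Create/Remove requests for the same network serialize, while requests
// for different networks can proceed in parallel.
type networkLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newNetworkLocks() *networkLocks {
	return &networkLocks{locks: make(map[string]*sync.Mutex)}
}

// Acquire looks up (or creates) the mutex for netName and locks it.
// The caller must Unlock the returned mutex once the operation completes.
func (n *networkLocks) Acquire(netName string) *sync.Mutex {
	n.mu.Lock()
	l, ok := n.locks[netName]
	if !ok {
		l = &sync.Mutex{}
		n.locks[netName] = l
	}
	n.mu.Unlock()
	l.Lock()
	return l
}
```

A Create or Remove handler would then call something like l := locks.Acquire(name) at the start and defer l.Unlock(), so that the network state change and the corresponding iptables setup/teardown happen under the same lock.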
Issue #, if available:
Description of changes:
Docker supports configuring inter-container connectivity between containers attached to the same bridge network, and enables it by default. To disable this behavior, it accepts a special network option at network create time, com.docker.network.bridge.enable_icc. CNI also allows intra-bridge connectivity, but offers no option to configure (i.e., disable) it.

This PR adds support for the com.docker.network.bridge.enable_icc Docker network option. Specifically, when enable_icc is set to false, it adds iptables rules that drop traffic entering and leaving the same bridge (see the sketch after the list below).

This PR also:
* Removes filtering of the com.docker.network.bridge.enable_ip_masquerade option, since this is now handled in nerdctl
* Refactors the network create code to separate out driver-specific logic and make it easier to test
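Roughly, the isolation setup for enable_icc=false looks like the following sketch. It assumes the github.com/coreos/go-iptables package and a hypothetical helper named disableICC; the chain name FINCH-ISOLATE-CHAIN matches the test output below, but everything else is illustrative rather than the exact code in this PR.

```go
package network

import (
	"fmt"

	"github.com/coreos/go-iptables/iptables"
)

const isolateChain = "FINCH-ISOLATE-CHAIN"

// disableICC adds filter-table rules that drop traffic entering and leaving
// the same bridge, which is the effect of enable_icc=false for containers on
// that network. bridgeName is the Linux bridge backing the network, e.g.
// br-dd61a7c86eba in the test output below.
func disableICC(bridgeName string) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}

	// Create the isolation chain if it does not exist yet.
	exists, err := ipt.ChainExists("filter", isolateChain)
	if err != nil {
		return err
	}
	if !exists {
		if err := ipt.NewChain("filter", isolateChain); err != nil {
			return err
		}
	}

	// Jump from FORWARD into the isolation chain ahead of the CNI rules.
	jump := []string{"-j", isolateChain}
	if ok, err := ipt.Exists("filter", "FORWARD", jump...); err != nil {
		return err
	} else if !ok {
		if err := ipt.Insert("filter", "FORWARD", 1, jump...); err != nil {
			return err
		}
	}

	// Keep a RETURN at the end of the chain so unrelated traffic falls through.
	if err := ipt.AppendUnique("filter", isolateChain, "-j", "RETURN"); err != nil {
		return err
	}

	// Drop packets whose ingress and egress interface are both this bridge;
	// insert above the RETURN so the rule is actually evaluated.
	drop := []string{"-i", bridgeName, "-o", bridgeName, "-j", "DROP"}
	if ok, err := ipt.Exists("filter", isolateChain, drop...); err != nil {
		return err
	} else if !ok {
		if err := ipt.Insert("filter", isolateChain, 1, drop...); err != nil {
			return fmt.Errorf("adding DROP rule for %s: %w", bridgeName, err)
		}
	}
	return nil
}
```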
Testing done:
Steps to test locally:
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker network create --driver bridge --opt com.docker.network.bridge.enable_icc=false test_icc
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container1 --network test_icc busybox sleep 3600
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container2 --network test_icc busybox sleep 3600
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker exec container1 ping -c 4 container2
PING container2 (10.4.2.5): 56 data bytes

--- container2 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
exec failed with exit code 1
% sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 389 packets, 44527 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target                 prot opt in   out   source      destination
    4   336 FINCH-ISOLATE-CHAIN    all  --  *    *     0.0.0.0/0   0.0.0.0/0
   24  2016 CNI-ISOLATION-STAGE-1  all  --  *    *     0.0.0.0/0   0.0.0.0/0   /* CNI firewall plugin rules (ingressPolicy: same-bridge) */
   24  2016 CNI-FORWARD            all  --  *    *     0.0.0.0/0   0.0.0.0/0   /* CNI firewall plugin rules */
...

Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target  prot opt in               out              source      destination
    4   336 DROP    all  --  br-dd61a7c86eba  br-dd61a7c86eba  0.0.0.0/0   0.0.0.0/0
    0     0 RETURN  all  --  *                *                0.0.0.0/0   0.0.0.0/0

Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target  prot opt in   out   source      destination
    0     0 RETURN  all  --  *    *     0.0.0.0/0   0.0.0.0/0