runfinch / finch-daemon


feat: Implementation of enable_icc option #69

Closed · swagatbora90 closed this 1 week ago

swagatbora90 commented 1 month ago

Issue #, if available:

Description of changes:

Docker supports configuring inter-container connectivity (ICC) between containers attached to the same bridge network. Connectivity is enabled by default; to disable it, Docker accepts a special network option during network create called com.docker.network.bridge.enable_icc. CNI also allows intra-bridge connectivity, but provides no option to configure that behavior, i.e., to disable it.

This PR adds support for the com.docker.network.bridge.enable_icc Docker network option. Specifically, it handles the case where enable_icc is set to false by adding iptables rules that drop traffic entering and leaving the same bridge.
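These rules produce the FINCH-ISOLATE-CHAIN shown in the test output below. As a minimal sketch, assuming the coreos/go-iptables library and a hypothetical helper name (the PR's actual code may differ), the rules could be installed like this:

```go
package network

import "github.com/coreos/go-iptables/iptables"

// addICCDropRule is a hypothetical helper illustrating the approach:
// packets that both enter and leave through the same bridge interface
// are dropped, which blocks connectivity between containers on that bridge.
func addICCDropRule(bridgeIface string) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}

	// Create the isolation chain on first use and hook it into FORWARD.
	exists, err := ipt.ChainExists("filter", "FINCH-ISOLATE-CHAIN")
	if err != nil {
		return err
	}
	if !exists {
		if err := ipt.NewChain("filter", "FINCH-ISOLATE-CHAIN"); err != nil {
			return err
		}
		// Packets not matched by any per-bridge DROP rule fall through
		// to RETURN and continue normal FORWARD processing.
		if err := ipt.Append("filter", "FINCH-ISOLATE-CHAIN", "-j", "RETURN"); err != nil {
			return err
		}
		if err := ipt.Insert("filter", "FORWARD", 1, "-j", "FINCH-ISOLATE-CHAIN"); err != nil {
			return err
		}
	}

	// Insert the per-bridge DROP ahead of the trailing RETURN, matching
	// traffic whose ingress and egress interface are the same bridge.
	return ipt.Insert("filter", "FINCH-ISOLATE-CHAIN", 1,
		"-i", bridgeIface, "-o", bridgeIface, "-j", "DROP")
}
```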

This PR also:

  1. Removes filtering of the "com.docker.network.bridge.enable_ip_masquerade" option, since it is now handled in nerdctl

  2. Refactors the network create code to separate out driver-specific logic and make it easier to test (a rough sketch of the split follows this list)
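To illustrate item 2, the driver split might look roughly like the following hypothetical Go interface (illustrative names only; the PR's actual types may differ):

```go
package network

// networkDriver is a hypothetical interface illustrating the split:
// the generic create handler validates the request and persists the
// network, while driver-specific behavior (such as the bridge driver's
// enable_icc handling) sits behind this interface, where it can be
// unit-tested in isolation.
type networkDriver interface {
	// HandleCreateOptions validates and applies driver-specific options,
	// e.g. com.docker.network.bridge.enable_icc for the bridge driver.
	HandleCreateOptions(opts map[string]string) error
}
```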

Testing done:

  1. Unit tests
  2. E2E tests (to be added)

Steps to test locally:

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker network create --driver bridge --opt com.docker.network.bridge.enable_icc=false test_icc

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container1 --network test_icc busybox sleep 3600

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container2 --network test_icc busybox sleep 3600

  * Try to ping between containers

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker exec container1 ping -c 4 container2
PING container2 (10.4.2.5): 56 data bytes

--- container2 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
exec failed with exit code 1


  * Check iptables, specifically the FINCH-ISOLATE-CHAIN

% sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 389 packets, 44527 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    4   336 FINCH-ISOLATE-CHAIN  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   24  2016 CNI-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI firewall plugin rules (ingressPolicy: same-bridge) */
   24  2016 CNI-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI firewall plugin rules */
...

Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    4   336 DROP       all  --  br-dd61a7c86eba br-dd61a7c86eba  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0


  * Stop the containers, remove the network bridge, and check iptables again

Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0



- [x] I've reviewed the guidance in CONTRIBUTING.md

#### License Acceptance

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Shubhranshu153 commented 3 weeks ago

Approved with some comments. LGTM, just a couple of nits and clarifying questions.

swagatbora90 commented 3 weeks ago

Left a few comments. Overall, I am still concerned about potential race conditions, such as:

* concurrent network creates; or

* concurrent network create and remove.

Prior to this change, nerdctl seems able to handle these races with locked directories. However, I don't think that protection can be extended to our iptables setup/teardown.
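A minimal Go sketch of a per-network mutex map, using hypothetical names (the actual implementation is in the commit linked below):

```go
package network

import "sync"

// networkLocks serializes Create/Remove operations per network name:
// requests for different networks proceed in parallel, while requests
// for the same name queue behind whichever call currently holds the lock.
type networkLocks struct {
	mu    sync.Mutex             // guards the locks map itself
	locks map[string]*sync.Mutex // one mutex per network name
}

func newNetworkLocks() *networkLocks {
	return &networkLocks{locks: make(map[string]*sync.Mutex)}
}

// Lock acquires the mutex for the given network name, creating it on
// first use, and returns the matching unlock function to defer.
func (n *networkLocks) Lock(name string) (unlock func()) {
	n.mu.Lock()
	l, ok := n.locks[name]
	if !ok {
		l = &sync.Mutex{}
		n.locks[name] = l
	}
	n.mu.Unlock()

	l.Lock()
	return l.Unlock
}
```

A handler would hold the lock for the entire operation, e.g. `defer locks.Lock(networkName)()`, so the iptables setup or teardown inside a Create or Remove stays atomic with respect to other calls on the same network.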

Added a per-network mutex map to address some of the race concerns: https://github.com/runfinch/finch-daemon/pull/69/commits/9fb2cf3dab15fe2354c568837265f156777f6588. The idea is that for the same network name, multiple concurrent requests for Create/Removal will be thread safe since we keep a map of mutex per network name. Also, operations within the same Create/Request calls are now atomic as the lock is held until the operation completes.