saket424 closed this issue 3 years ago
@oofnikj Upon further investigation, it turns out we need to tell the Docker containers to use 192.168.16.2 as their default gateway, to force them to take a detour via the docker-openwrt container on their way to the WAN. Containers like grafana and influxdb currently default to 192.168.16.1 as their gateway and thus bypass the firewall rules applied at 192.168.16.2.
The suggestion documented in the link below appeared to work, but you may have a better way:
https://stackoverflow.com/questions/36882945/change-default-route-in-docker-container
Thanks!
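For context, the approach from that StackOverflow answer boils down to replacing the default route from inside the container. A minimal sketch, using the addresses from this thread (the container must be started with --cap-add NET_ADMIN, otherwise ip route fails with a permission error):

```shell
# Sketch of the StackOverflow workaround: swap the container's default route.
OLD_GW=192.168.16.1   # Docker-assigned bridge gateway
NEW_GW=192.168.16.2   # the OpenWrt container's LAN address

fix_route() {
  # Run inside the container, e.g. via `docker exec <name> sh -c '...'`.
  ip route del default via "$OLD_GW" 2>/dev/null
  ip route replace default via "$NEW_GW"
}
# fix_route   # uncomment inside a NET_ADMIN-capable container
```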
I tried what you described on my setup and I was able to block ICMP packets from a specific address to the internet.
I launched a new container alongside OpenWrt:
$ docker run --rm -it --network openwrt-lan alpine
Added the following firewall rule:
config rule
    option src 'lan'
    option name 'Block-ping'
    list src_ip '192.168.16.6'
    option family 'ipv4'
    option target 'DROP'
    option dest 'wan'
    list proto 'icmp'
And packets now get dropped:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 192.168.16.240: icmp_seq=2 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=3 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=4 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=5 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=6 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=8 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=11 Redirect Host(New nexthop: 192.168.16.2)
From 192.168.16.240: icmp_seq=17 Redirect Host(New nexthop: 192.168.16.2)
^C
--- 8.8.8.8 ping statistics ---
17 packets transmitted, 0 received, 100% packet loss, time 16360ms
Why does 192.168.16.240 respond? The container is sending packets to the default network gateway, but since the ARP table contains duplicate entries for the same MAC address, the DHCP-assigned address replies instead.
And the redirects? My host OS is configured to send redirects (net.ipv4.conf.all.send_redirects=1, net.ipv4.conf.br-000000000.send_redirects=1), but yours may not be.
What do you see when you run traceroute or mtr inside the container? It should be similar to this:
/ # traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 192.168.16.240 0.014 ms 0.005 ms 0.004 ms
2 192.168.16.2 0.008 ms 0.010 ms 0.008 ms
3 10.162.40.1 5.793 ms 6.373 ms 7.248 ms
4 * * *
In any case, I suppose explicitly specifying --gateway ${LAN_ADDR} in https://github.com/oofnikj/docker-openwrt/blob/master/run.sh#L81-L84 may help.
That fixed it. Can you make that change in master?
anand@odyssey3:~/docker-openwrt$ git diff run.sh
diff --git a/run.sh b/run.sh
index a09bf3a..798e52d 100755
--- a/run.sh
+++ b/run.sh
@@ -81,6 +81,7 @@ _init_network() {
docker network create --driver $LAN_DRIVER \
$LAN_ARGS \
--subnet $LAN_SUBNET \
+ --gateway $LAN_ADDR \
$LAN_NAME || exit 1
docker network create --driver $WAN_DRIVER \
Oops, I spoke too soon. Do you see the same error?
Error response from daemon: Address already in use
Others seem to be having this "Address already in use" problem also: https://forums.docker.com/t/setting-default-gateway-to-a-container/17420/3
I'm not sure what the use case is, but I was able to place containers behind wrt's firewall. I created a dedicated network for that (like a DMZ) and hooked it up as an interface to the wrt container.
The only issue I had is somewhat similar: I got this error, but I found that setting my containers in docker-compose.yml not to auto-start, and running them in run.sh after everything was ready, solved the issue. Hope this information might be of some assistance.
@hllhll Would you mind sharing your modified run.sh script here? The use case is I want to bring the rest of the microservice containers under OpenWrt firewall control.
Error response from daemon: Address already in use
I did not try it myself, but it makes sense that Docker would complain about this -- it can't assign an address to a container that is the same as the gateway address.
If you share the result of mtr or traceroute from within the container you're trying to firewall, maybe I can suggest some more help, but without understanding how your routes are defined (and, more importantly, why they're different from what I'm seeing) it's difficult to say.
This is the traceroute from a nodered docker
bash-5.0# traceroute 4.2.2.2
traceroute to 4.2.2.2 (4.2.2.2), 30 hops max, 46 byte packets
1 192.168.16.221 (192.168.16.221) 0.029 ms 0.014 ms 0.014 ms
2 192.168.155.100 (192.168.155.100) 0.774 ms 0.742 ms 0.714 ms
3 d28-23-1-197.dim.wideopenwest.com (23.28.197.1) 8.821 ms 9.405 ms 9.169 ms
4 dynamic-76-73-171-1.knology.net (76.73.171.1) 9.524 ms 9.613 ms 9.019 ms
5 d14-69-94-162.try.wideopenwest.com (69.14.162.94) 10.862 ms 9.821 ms 10.442 ms
6 static-76-73-191-136.knology.net (76.73.191.136) 11.855 ms 13.117 ms 10.220 ms
7 d199-74-22-91.nap.wideopenwest.com (74.199.91.22) 12.733 ms 12.306 ms static-76-73-191-227.knology.net (76.73.191.227) 17.335 ms
8 static-76-73-191-232.knology.net (76.73.191.232) 11.425 ms 13.905 ms 12.020 ms
9 4.16.38.157 (4.16.38.157) 12.243 ms 17.098 ms 11.363 ms
10 *^C
bash-5.0# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.16.1 0.0.0.0 UG 0 0 0 eth0
192.168.16.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
bash-5.0# ifconfig -a
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:10:0C
inet addr:192.168.16.12 Bcast:192.168.16.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6141 errors:0 dropped:0 overruns:0 frame:0
TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:749755 (732.1 KiB) TX bytes:3499 (3.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:6772 errors:0 dropped:0 overruns:0 frame:0
TX packets:6772 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1588711 (1.5 MiB) TX bytes:1588711 (1.5 MiB)
I have a hacky script that gets the job done by modifying the container(s) default gateway at boot up and each time the container restarts by subscribing to docker events. I am looking for a more elegant way. Thanks in advance
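For what it's worth, that kind of "hacky script" could be sketched roughly as follows. The gateway address and network name are assumptions based on this thread, and each target container must run with NET_ADMIN for the docker exec route change to succeed:

```shell
# Sketch: re-point a container's default route whenever it (re)starts,
# by subscribing to Docker events on the LAN network.
GW=192.168.16.2      # the OpenWrt container's LAN address
NET=openwrt-lan      # the network whose containers should be re-routed

set_gateway() {
  # $1 = container name; the container needs CAP_NET_ADMIN for this to work.
  docker exec "$1" ip route replace default via "$GW"
}

watch_events() {
  # Emit the name of each container that starts on $NET and fix its route.
  docker events --filter network="$NET" --filter event=start \
    --format '{{.Actor.Attributes.name}}' \
  | while read -r name; do
      set_gateway "$name"
    done
}
# watch_events   # uncomment to run against a live Docker daemon
```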
I didn't try to replace the host or gateway IP address after starting the container; I didn't want to patch every image I use, or run an external script. However, if I run and connect my containers after wrt, it works; if the other way around (such as in my case, where docker-compose restart is triggered before run.sh on boot), run.sh throws the above error. These are my changes: https://github.com/hllhll/docker-openwrt/commit/cadc9ed592cc126e754df32e0a57f854841f8f3d
I also added this in run.sh main:
docker-compose -f /path/docker/compose/file/docker-compose.yml start
echo "* ready"
Notes:
docker-compose.yaml
services:
  <service_name>:
    container_name: <enter>
    image: ...
    ...
    restart: "no"
    networks:
      - openwrt-dmz
networks:
  openwrt-dmz:
    external: true
@saket424 in your traceroute output above, are any of the addresses there the address of your OpenWrt container? Can you try running a plain alpine container like in my example and adding the firewall rule to block ICMP? I don't know what nodered is, sorry.
@oofnikj, I am unclear what additional information I'd be providing that you don't already have access to. I certainly can launch alpine and put the pristine alpine container behind the openwrt container's firewall per your suggestion, and I know it'll work. I am looking for a way to elegantly automate the process of overriding the default gateway from 192.168.16.1 to 192.168.16.2.
@saket424 Maybe I missed something in your question. You wrote:
I tried defining a firewall rule in docker-openwrt to bar grafana from icmp to a specific ip address on the internet just to test if we can bring remaining dockers under docker-openwrt firewall control and the answer is it did not work
Perhaps you meant existing containers, i.e. that were already up and running before OpenWrt?
@hllhll wrote:
if I run&connect my containers after wrt it works, if the other way around (such as in my case docker-compose restart triggered before run.sh on boot) it run.sh throws the above error.
This makes sense. If the other container is running before OpenWrt comes up I'm not sure how to automate overriding the route table inside the container.
One thing you could try is launching the container with --cap-add NET_ADMIN. This will allow you to change the default route from inside the container:
/ # ip route show
default via 192.168.16.1 dev eth0
192.168.16.0/24 dev eth0 scope link src 192.168.16.6
/ # ip route del default
/ # ip route add default via 192.168.16.2
/ # ip route show
default via 192.168.16.2 dev eth0
192.168.16.0/24 dev eth0 scope link src 192.168.16.6
But this too would require re-launching the container with the additional privilege.
If you really need to modify the route table of an already running container and absolutely cannot recreate it (or don't want to add NET_ADMIN), you could instead create a symlink for the network namespace of the running container in /var/run/netns and manipulate the network namespace from the host with ip netns exec ${NETNS} ip route .... You can follow the example in https://github.com/oofnikj/docker-openwrt/blob/master/run.sh#L208-L212 to prepare the symlink.
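Roughly, that host-side netns approach might look like the following sketch. The container name and gateway address are placeholders for illustration; this has to run as root on the Docker host:

```shell
# Sketch: expose a running container's network namespace to `ip netns`
# and edit its default route from the host, without NET_ADMIN in the container.
CONTAINER=grafana        # placeholder container name
NEW_GW=192.168.16.2      # the OpenWrt container's LAN address

netns_route() {
  local pid
  pid=$(docker inspect -f '{{.State.Pid}}' "$CONTAINER")
  mkdir -p /var/run/netns
  # Make the container's netns visible to `ip netns` under its name.
  ln -sf "/proc/${pid}/ns/net" "/var/run/netns/${CONTAINER}"
  ip netns exec "$CONTAINER" ip route replace default via "$NEW_GW"
}
# netns_route   # run as root on the Docker host
```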
@oofnikj, It isn't arbitrary containers. It is containers for which openwrt-lan or openwrt-dmz is defined in the docker-compose file as the external network to use.
I'm going to close this issue as I have a working solution. Thanks @hllhll and @oofnikj for weighing in.
https://serverfault.com/questions/1076548/docker-compose-disable-default-gateway-route seemed to help
cap_add:
- NET_ADMIN
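Putting the pieces from that linked answer together, a compose sketch might look like the following. The service name, image, and entrypoint override are assumptions for illustration; 192.168.16.2 is the OpenWrt container's LAN address from this thread:

```yaml
# Sketch only: grant NET_ADMIN and re-point the default route at startup.
services:
  grafana:                      # example service name, an assumption
    image: grafana/grafana
    cap_add:
      - NET_ADMIN
    networks:
      - openwrt-lan
    # Replace the default route before handing off to the image's own
    # entrypoint (the /run.sh path here is an assumption for this image).
    entrypoint: >
      sh -c "ip route replace default via 192.168.16.2 && exec /run.sh"
networks:
  openwrt-lan:
    external: true
```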
@oofnikj You have done some amazing work; thanks for this project. In your monitoring example, I notice that you are using network: openwrt-lan for the influxdb and grafana containers. Now, I tried defining a firewall rule in docker-openwrt to bar grafana from ICMP to a specific IP address on the internet, just to test whether we can bring the remaining containers under docker-openwrt firewall control, and the answer is: it did not work.
I am using bridge as the default networking driver for LAN
Any alternative driver or ideas you can suggest so the OpenWrt firewall can be made to apply to all the other containers too?