Closed: sirdarckcat closed this issue 3 years ago
`--pid-owner` doesn't seem to work anymore, so it doesn't seem to be possible to filter traffic based on the owning process.
here's another idea on how to do this: https://github.com/deitch/ctables/blob/master/ctables
after doing some testing with traffic control and network namespaces, I think it should be throttled at a per-pod level rather than at an nsjail level.
Instead of cloning the interface, we could create a veth pair and then add it to nsjail dynamically somehow.
this seems to work: outside the namespace iperf reaches 1.23 Gbit/s, while inside the throttled namespace it drops to ~672 kbit/s.

```
root@ctf-daemon-gcsfuse-b2jfb:/nsjail# ip netns exec 5773 tc qdisc add dev veth-netns0 root tbf rate 1024kbit latency 50ms burst 1540
root@ctf-daemon-gcsfuse-b2jfb:/nsjail# iperf -c ping.online.net
------------------------------------------------------------
Client connecting to ping.online.net, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.44.3.3 port 56688 connected with 62.210.18.40 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-13.3 sec  1.90 GBytes  1.23 Gbits/sec
root@ctf-daemon-gcsfuse-b2jfb:/nsjail# ip netns exec 5773 iperf -c ping.online.net
------------------------------------------------------------
Client connecting to ping.online.net, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.3.2 port 47546 connected with 62.210.18.40 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-12.2 sec  1005 KBytes   672 Kbits/sec
```
So, TL;DR:

```shell
echo 1 > /proc/sys/net/ipv4/ip_forward

# $1 is the name of an existing network namespace; $counter must be a unique
# per-namespace number set by the caller (it picks the 10.0.$counter.0/24 subnet).
connect_to_internet() {
  vethname=$1
  # create a veth pair and move one end into the namespace
  ip link add $vethname-out type veth peer name $vethname-in
  ip link set $vethname-in netns $vethname
  # assign addresses to both ends
  ip addr add 10.0.$counter.1/24 dev $vethname-out
  ip netns exec $vethname ip addr add 10.0.$counter.2/24 dev $vethname-in
  ip link set $vethname-out up
  ip netns exec $vethname ip link set $vethname-in up
  # forward and NAT the namespace's traffic out through eth0
  iptables -A FORWARD -i $vethname-out -o eth0 -j ACCEPT
  iptables -A FORWARD -i eth0 -o $vethname-out -j ACCEPT
  iptables -t nat -A POSTROUTING -s 10.0.$counter.0/24 -o eth0 -j MASQUERADE
  ip netns exec $vethname ip route add default via 10.0.$counter.1
  # throttle traffic leaving the namespace
  ip netns exec $vethname tc qdisc add dev $vethname-in root tbf rate 1024kbit latency 50ms burst 1540
}
```
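A minimal usage sketch of the function above (assuming the caller maintains `$counter` per namespace; the name `jail0` and the counter value are made up for illustration):

```shell
counter=7            # must be unique per namespace (chooses the 10.0.7.0/24 subnet)
vethname=jail0       # hypothetical jail name
ip netns add $vethname
connect_to_internet $vethname
```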
alternatively, we can just do this at a per-pod level instead. This seems less error-prone, and most likely better anyway.
The following worked (in `challenge-name/config/advanced/deployment/containers.yaml`); it throttled the speed to 1 megabyte per second. More info is available here, and an explanation of burst/rate here.
```yaml
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "chal"
spec:
  template:
    spec:
      initContainers:
      - name: tc
        image: ubuntu:19.10
        command: ["sh", "-c", "echo nameserver 1.1.1.1 > /etc/resolv.conf; apt-get update && apt-get install -y iproute2 gawk && tc qdisc add dev `ip route | awk '/default/ {print $5}'` root tbf rate 1mbps latency 50ms burst 2000"]
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
```
test:

```
# iperf -c ping.online.net -R
------------------------------------------------------------
Client connecting to ping.online.net, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.44.3.31 port 41774 connected with 62.210.18.40 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.25 MBytes  7.73 Mbits/sec
```

7.73 Mbit/s ≈ 1 MByte/s
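One gotcha worth spelling out: in tc, the `bps` suffixes mean *bytes* per second (`mbit` is the bits unit), so `rate 1mbps` is roughly 8 Mbit/s on the wire, which matches the 7.73 Mbit/s measured above. A quick sanity check of the conversion (treating mega as 10^6; tc may use a 1024-based multiplier, but either way it comes out to about 8 Mbit/s):

```shell
# tc's "1mbps" = ~1,000,000 bytes/s; in bits per second that is:
bytes_per_sec=1000000
echo $((bytes_per_sec * 8))   # prints 8000000, i.e. 8 Mbit/s
```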
so yeah, it might be worth looking into moving to https://github.com/minrk/tc-init or something similar. I don't know how licensing might affect this.
to use the k8s-native one (https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/#support-traffic-shaping) we probably need https://github.com/projectcalico/calico/issues/2815 to be fixed, unless there's some way to enable CNI plugins in GKE.
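For reference, the k8s-native traffic shaping in that doc is driven by pod annotations, which the `bandwidth` CNI plugin reads, along these lines (the 1M values are illustrative):

```yaml
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
```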
seems like we can manipulate Calico directly (https://thenewstack.io/tutorial-explore-project-calico-network-policies-with-google-kubernetes-engine/), and we just need to enable network policies for GKE (which I think we already do anyway).
Either at the k8s side:
- https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth
- https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/#support-traffic-shaping
- https://github.com/minrk/tc-init/blob/master/tc-init

or with tc/iptables:
- https://www.frozentux.net/iptables-tutorial/iptables-tutorial.html#OWNERMATCH
- https://lartc.org/howto/lartc.cookbook.fullnat.intro.html
- http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm
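A rough sketch of the tc/iptables route, combining the owner match with an HTB class (the uid `1000` and the rate are placeholders; since `--pid-owner` is gone from modern kernels, matching would have to be per-uid, e.g. one user per jail):

```shell
# shape all traffic generated by uid 1000 (hypothetical per-jail user) to ~1 MB/s
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 8mbit
# classify by firewall mark
tc filter add dev eth0 parent 1: protocol ip handle 1 fw flowid 1:10
# mark locally-generated packets by owner; --uid-owner/--gid-owner still work
iptables -t mangle -A OUTPUT -m owner --uid-owner 1000 -j MARK --set-mark 1
```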
this would limit the risk of DoS