Open norbjd opened 3 days ago
It could be implemented in gVisor, sure. A few things:

- My guess is that host-side traffic control doesn't apply here because `runsc` sends and receives packets via an `AF_PACKET` socket, so shaping on the host skips it (just a guess).
- We'd need `runsc` flags to pass the rate limit to the implementation.

Thankfully the implementation is pretty clear cut: we need another implementation of an existing interface, and some plumbing to get command line configuration down to the implementation.

Thanks for your quick answer Kevin 😃
Glad to see that it is at least technically possible, even if there are better ways 😁 By chance, do you (or someone else) have any ideas/pointers on how to shape traffic directly on the host in a way that works with gVisor? 👀
To be honest, I couldn't make it work on a k8s cluster after a few hours of tweaking, because of an error whose source I couldn't pin down: containerd? the host? the CNI (Cilium in my case)? gVisor? So I couldn't really blame gVisor, as there were many components involved 😛 All I can remember is this error in the containerd system logs when using the default bandwidth CNI plugin:

```
containerd plugin type="bandwidth" failed (add): create ingress qdisc: file exists
```
I will try to reproduce it with a minimal k8s example, but if you or other people have ideas, feel free to answer. I thought adding it to `runsc` itself was a good option because people might run gVisor outside of k8s, but I do agree that finding (and documenting) a way to shape traffic directly on the host would be even better.
To be clear: we'd love to have it added to gVisor. We just always think about things from a security perspective first, but we'd 100% work with a contributor to get a PR in with shaping or rate limiting.
It looks like, based on a little investigation (e.g.), doing rate limiting on the host would involve a highly custom setup. Kubernetes isn't going to make it easy to support that out-of-the-box.
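For anyone who wants to try the host-side route anyway, the classic approach is `tc` on the interface that carries the sandbox's traffic. A sketch follows; the interface name is hypothetical, and per the discussion above this may simply not apply to gVisor's `AF_PACKET` traffic:

```shell
# Hypothetical interface carrying the sandbox's traffic (e.g. one end of a veth pair).
DEV=veth0

# Egress: token bucket filter capping transmit rate (~10 MB/s ≈ 80 mbit).
tc qdisc add dev "$DEV" root tbf rate 80mbit burst 32kbit latency 400ms

# Ingress: attach an ingress qdisc and police received traffic.
# Note: the "create ingress qdisc: file exists" error above suggests an
# ingress qdisc was already attached (possibly by the CNI);
# `tc qdisc del dev $DEV ingress` would remove it first.
tc qdisc add dev "$DEV" ingress
tc filter add dev "$DEV" parent ffff: protocol ip u32 match u32 0 0 \
    police rate 80mbit burst 32k drop flowid :1
```

These are host configuration commands (root required), shown only to illustrate the "highly custom setup" being discussed.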
Description
Hello 👋
I'm wondering if it is technically possible to implement traffic shaping directly in the gVisor sandbox, rather than relying on external tools (like `tc`) to restrict both ingress and egress bandwidth of containers. I have been reading the networking architecture guide, and the sentry/netstack abstraction makes me think it could be possible, since all packets seem to go through the virtual interface. I've also seen some `qdisc` references in the code. I am not a networking expert though, so maybe I'm just writing absolute nonsense; feel free to correct me if so.

Anyway, my use-case is very simple: I want to run multiple containers using `runsc`, but easily restrict/throttle/rate-limit both transmitted and received byte rates to something like 10 MB/s for each container, without having to tweak my host. This use-case also applies to higher-level tools like Kubernetes. There are some ways to do traffic shaping in a k8s cluster (bandwidth CNI plugin, Cilium bandwidth manager, ...), but I have not been able to make them work with gVisor.

Despite having a very different approach to sandboxing, Kata Containers have implemented this feature, and it is as simple as setting the `rx_rate_limiter_max_rate` and `tx_rate_limiter_max_rate` parameters in the configuration. So, I would expect ingress/egress rate limiting to be as simple as passing a flag to the `runsc` command.

Again, I don't know if what I am asking is possible, or whether there are easy alternatives that could already fulfill the bandwidth rate limiting need. Any pointers would be appreciated.
Thanks 😃
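For comparison, the Kata Containers side looks roughly like this in its `configuration.toml` (section placement and units are my reading of the Kata docs, so double-check them before relying on this):

```toml
[runtime]
# Rate limits in bits per second (per my reading of the Kata docs);
# 80000000 bps ≈ 10 MB/s.
rx_rate_limiter_max_rate = 80000000
tx_rate_limiter_max_rate = 80000000
```

A `runsc` equivalent could presumably be a pair of similar flags plumbed down to the network stack.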
Is this feature related to a specific bug?
No.
Do you have a specific solution in mind?
Not really, sadly 😕 I'm assuming this behavior could live in the `qdisc` "algorithm" where packets are dispatched, or in the `netstack` abstraction, but these are just wild guesses, as I'm not an expert on how gVisor works under the hood (especially the networking part).