containernetworking / plugins

Some reference and example networking plugins, maintained by the CNI team.

fix issues with bandwidth plugin and htb introduction #1097

Closed h0nIg closed 1 month ago

h0nIg commented 2 months ago

With the introduction of the htb changes through https://github.com/containernetworking/plugins/pull/921, two performance issues were introduced that we see in our production environments.

issue 1

The submitter of the htb changes did not know about rate2quantum and, consequently, about the quantum issues that may arise with HTB (or any other qdisc like fq_codel).

https://github.com/containernetworking/plugins/blob/main/plugins/meta/bandwidth/ifb_creator.go#L163

The kernel warns once unreasonable values are specified, because values that are too high cause subsequent issues (such as txqueue drops once too many packets are dequeued). https://github.com/torvalds/linux/blob/baeb9a7d8b60b021d907127509c44507539c15e5/net/sched/sch_htb.c#L2037 https://github.com/torvalds/linux/blob/baeb9a7d8b60b021d907127509c44507539c15e5/net/sched/sch_htb.c#L2052-L2055
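
To illustrate the point (the device and class names below are made up, not taken from the plugin): HTB derives quantum = rate / r2q, and with the default r2q of 10 a multi-gigabit rate yields a quantum far above the range the kernel checks, so it prints the "quantum of class ... is big" warning unless quantum is set explicitly:

# hypothetical setup, for illustration only
tc qdisc add dev ifb0 root handle 1: htb
# 10gbit with the default r2q of 10 would derive a quantum of ~125 MB,
# so a sane quantum is set explicitly to avoid the kernel warning
tc class add dev ifb0 parent 1: classid 1:1 htb rate 10gbit quantum 60000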

issue 2

If the txqueuelen is unspecified, the kernel decides on an appropriate value. In our case the kernel uses a reasonable default of txqueuelen=0 for this virtual device when the value is not specified. This means no queueing is done when the receiving kernel code is busy and not picking up the packet fast enough. We are talking about the kernel queue here, not the ring buffers and not /sys/class/net/eth0/queues/tx-*.
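
As a side note for readers (eth0 is only an example name), the queue referred to here is the qdisc-level tx_queue_len, which is separate from the per-queue driver entries under queues/:

# qdisc-level queue length; per the text above, 0 means packets are dropped rather than queued
cat /sys/class/net/eth0/tx_queue_len
# per-queue (driver/HW) entries, which are explicitly not what is discussed here
ls /sys/class/net/eth0/queues/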

Systems should drop early enough to ensure continuity and avoid working on a full buffer. Bursting is OK, but HTB bursts beyond the HW limits should still be limited, because they cause latency for all containers running on the system for a longer period of time. The "full buffer" can be caused by several things; one of them is the "unlimited HTB class" that allows unlimited rates / bursts for a single container, which causes unfairness on a system with containers running below their limited rates.

The following screenshot shows the old tbf setup that was compared to the htb setup (numtxqueues / numrxqueues are the same for the tbf and the htb code and match the number of vCPUs).

Dequeueing from the veth tx queues happens with 64 concurrent queues (queues equal to the vCPU count, so kernel code executes concurrently), compared to a single tx queue without "concurrency". This was not relevant until the introduction of HTB, which drops less (causing latency), and the introduction of txqueuelen=1000, which causes latency as well.

https://gist.github.com/eqhmcow/939373#file-hfsc-shape-sh-L48-L50

[screenshot: comparison of the old tbf veth setup and the htb setup]

It was intended to make txqueuelen equal between the veth interfaces, but numtxqueues / numrxqueues were not considered. Once you queue with a different number of queues between veth and ifb, it causes kernel performance issues. Similar warnings already exist in veth.c itself, because the host and container numtxqueues need counterparts of at least the same size:

https://github.com/torvalds/linux/blob/9d3684c24a5232c2d7ea8f8a3e60fe235e6a9867/drivers/net/veth.c#L1199-L1206

The usual veth creation may look like this, where the numtxqueues of outer0 correlates with the numrxqueues of inner0, and the same applies the other way around.

ip link add outer0 numrxqueues 16 numtxqueues 16 type veth peer inner0 numrxqueues 16 numtxqueues 16

The kernel even suggests using numtxqueues for ifb devices, which is missing here: https://github.com/torvalds/linux/blob/e8fc317dfca9021f0ea9ed77061d8df677e47a9f/drivers/net/ifb.c#L396

The upstream netlink library does not yet offer setting numtxqueues / numrxqueues for ifb adapters (it is only supported for veth); therefore we have to avoid setting qlen until the upstream library offers support and numtxqueues / numrxqueues match between veth and ifb.
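
For illustration only (the interface name and the queue counts are made up), the generic numtxqueues / numrxqueues options of ip link can already create an ifb whose queue counts match the veth shown above, which is exactly what the Go netlink library cannot yet express for ifb:

# hypothetical: create the ifb with the same queue counts as the veth
ip link add ifb0 numtxqueues 16 numrxqueues 16 type ifb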

h0nIg commented 2 months ago

ping @dcbw @MikeZappa87 @s1061123

You reviewed the PR that caused this, https://github.com/containernetworking/plugins/pull/921; may I ask you to take a look at this, please? Any questions regarding the actual problem (aside from the test I still need to fix)?

s1061123 commented 1 month ago

@h0nIg, thank you for the PR. I have just started reviewing the code and will reply then.

s1061123 commented 1 month ago

Hi @h0nIg. From the coding/fix point of view, it looks good to me. Thank you!

Could you please take care of the GitHub Actions errors?

DCO

We need a Signed-off-by field in the commit logs. Please check the following URL, modify the commit message, and push again. https://github.com/containernetworking/plugins/pull/1097/checks?check_run_id=30454599231

CI Test failure

The CI integration test failed. Could you please check it? https://github.com/containernetworking/plugins/actions/runs/10966531920/job/30454704770?pr=1097

squeed commented 1 month ago

@h0nIg this is the last PR we'd like to see before we cut a release -- could you take a look?

h0nIg commented 1 month ago

@squeed I was on vacation; I will take a look at this within the next 1-3 days.

h0nIg commented 1 month ago

Honestly speaking: the whole HTB introduction is a mess (sorry, but that is the reality).

The integration test reports the same timing results; the author of the HTB change added debug output to make this visible. He knew that the "without qos" case was not working (for whatever reason) and still asked for this to be merged.

2024/10/10 07:22:34 Runtime with qos limit 2.65 seconds
2024/10/10 07:22:34 Runtime without qos limit 2.65 seconds

I just changed it to "-1", which ends up with the Linux default of txqueuelen = 32. We have run tests with this low value, which did not manifest in measurable issues, but we know that any value > 0 might cause issues. I would first like to get this PR in and later maybe revert the whole HTB change.
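
For reference (the interface name is only an example), the default that "-1" falls back to can be checked directly; the ifb driver assigns its own TX_Q_LIMIT of 32 when no qlen is supplied:

# an ifb created without an explicit qlen keeps the driver default of 32
ip link add ifb-test type ifb
cat /sys/class/net/ifb-test/tx_queue_len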

h0nIg commented 1 month ago

@squeed ping, merge-able

squeed commented 1 month ago

@h0nIg should we just revert the whole change as-is? Is it worth trying to fix?

Thanks so much for your help on this; the bandwidth plugin is orphaned and nobody on the project has any expertise here.

h0nIg commented 1 month ago

The issues are too severe; HTB requires a txqueue > 0 (indirectly, through the htb qdisc "direct_queue" and the like). Just set txqueuelen == 0 and the tests fail. Overall, txqueue > 0 combined with numtxqueues / numrxqueues == 1 causes locking issues at the interface level, as mentioned above, so it is not worth fixing and it is better to keep it simple.

full revert: https://github.com/containernetworking/plugins/pull/1105

oOraph commented 1 month ago

Hi, I just saw this issue about a PR I made a while ago to add an option to exclude some subnets from traffic shaping in the bandwidth plugin.

  1. First and foremost: sorry for any inconvenience this may have caused. There seems to have been a side effect of the htb qdisc that I did not know about, so you were right to revert.

  2. Answering this comment: https://github.com/containernetworking/plugins/pull/1097#issuecomment-2404341721

Mess -> your point of view, not a fact. The PR is small if we exclude the unrelated test tidying/splitting, which was requested during review because the original file was becoming too big.

I did not "know" the "without qos" test was not working:

The assertion is here to check that it is OK: https://github.com/containernetworking/plugins/blob/c29dc79f96cd50452a247a4591443d2aac033429/integration/integration_linux_test.go#L199

And as for the logs showing the same time, it is a typo here: https://github.com/containernetworking/plugins/blob/c29dc79f96cd50452a247a4591443d2aac033429/integration/integration_linux_test.go#L197

-> should have been

log.Printf("Runtime without qos limit %.2f seconds", runtimeWithoutLimit.Seconds())

We printed the same thing twice... so there is no need to accuse people of bad intent.

Once corrected, if you rerun it:

2024/10/11 13:08:16 Runtime with qos limit 2.65 seconds
2024/10/11 13:08:16 Runtime without qos limit 0.01 seconds

which is consistent with the assertion below the logs.

cc @h0nIg

h0nIg commented 1 month ago

> Hi, I just saw this issue about a PR I made a while ago to add an option to exclude some subnets from traffic shaping in the bandwidth plugin.
>
> 1. First and foremost: sorry for any inconvenience this may have caused. There seems to have been a side effect of the htb qdisc that I did not know about, so you were right to revert.
> 2. Answering this comment: fix issues with bandwidth plugin and htb introduction #1097 (comment)
>
> Mess -> your point of view, not a fact. The PR is small if we exclude the unrelated test tidying/splitting, which was requested during review because the original file was becoming too big.
>
> cc @h0nIg

1) Not a problem; mistakes happen, and I did not know about such issues before either. 2) It is not limited to the integration test; let me summarize the problems: the missing quantum adjustments are causing issues, the kernel already warns the user, and there are comments in the code that the author does not understand. There are better qdiscs than HTB that avoid the latency issues, do not come with txqueue>0 locks (numtxqueues in combination with numrxqueues on the other end, see the initial issue description), and avoid qdisc locks, where a single qdisc has to hold the state versus a dedicated, separate class per vCPU.

Overall, I tried to get it fixed, but you can see that missing configuration options in the upstream library, the need for txqueuelen>0 with HTB, and further adjustments (a queue per vCPU or similar) are required here. So it is better to go back to a working version that works without queueing.

squeed commented 1 month ago

We have some other merged changes that people would like to see released. @oOraph, we will revert the htb changes for now, and you can fix them at your leisure.