Closed: zeeke closed this pull request 1 year ago.
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| pkg/server/server.go | 4 | 18 | 22.22% |
| Total: | 16 | 30 | 53.33% |

| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| pkg/server/server.go | 97 | 2.71% |
| Total: | 97 | |

| Totals | |
|---|---|
| Change from base Build 2576128179: | -0.1% |
| Covered Lines: | 830 |
| Relevant Lines: | 1678 |
I suppose it does not make sense from a memory-consumption point of view. In your code, `bytes.NewBuffer()` is called every time in `FinalizeRules()`, and I do not want to call `NewBuffer()` on each invocation. This daemon should be designed to run for a long time and keep memory consumption flat (not increasing), which is why the current code creates the `filterRules` buffer once and reuses it on every call.

If you want to proceed with this change, could you please benchmark it and guarantee that memory consumption is the same as or less than the current code's?
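To make the two patterns concrete, here is a minimal, self-contained sketch of the trade-off being discussed; `perCallFinalize`, `builder`, and the rule strings are illustrative stand-ins, not the actual project code:

```go
package main

import (
	"bytes"
	"fmt"
)

// perCallFinalize allocates a fresh buffer on every invocation, as the
// PR does with bytes.NewBuffer() inside FinalizeRules().
func perCallFinalize(rules []string) []byte {
	buf := bytes.NewBuffer(nil) // one heap allocation per call
	for _, r := range rules {
		buf.WriteString(r)
	}
	return buf.Bytes()
}

// builder keeps one long-lived buffer and resets it between calls, the
// pattern the current filterRules buffer follows: the backing array is
// reused, so steady-state allocations stay flat.
type builder struct {
	filterRules bytes.Buffer
}

func (b *builder) finalize(rules []string) []byte {
	b.filterRules.Reset() // keep the capacity, drop the contents
	for _, r := range rules {
		b.filterRules.WriteString(r)
	}
	return b.filterRules.Bytes()
}

func main() {
	rules := []string{"-A MULTI-INGRESS -j RETURN\n", "-A MULTI-EGRESS -j RETURN\n"}
	fmt.Printf("%s", perCallFinalize(rules))
	var b builder
	fmt.Printf("%s", b.finalize(rules))
}
```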
Hi @s1061123, I tried to run `policyrules_test.go` in a loop, using Go's benchmark features. It shows some flakiness in the test environment, and the failures prevent me from seeing the number of allocations per test run. I think this is due to a delay between the podListers and the informers that comes from the test environment, so there should be no problem on the production side.
Here is a sample error:

```
2022-08-23T10:26:19.9916932Z •I0823 10:26:19.989801 11303 pod.go:123] Starting pod config controller
2022-08-23T10:26:19.9917474Z I0823 10:26:19.989817 11303 shared_informer.go:223] Waiting for caches to sync for pod config
2022-08-23T10:26:19.9917959Z I0823 10:26:19.989860 11303 pod.go:361] pod:testns1/testpod1 samplehost/
2022-08-23T10:26:19.9918707Z I0823 10:26:19.989905 11303 reflector.go:175] Starting reflector *v1.Pod (15m0s) from pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:125
2022-08-23T10:26:19.9919539Z I0823 10:26:19.989910 11303 reflector.go:211] Listing and watching *v1.Pod from pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:125
2022-08-23T10:26:19.9920298Z I0823 10:26:19.989994 11303 pod.go:361] pod:testns1/testpod2 samplehost/
2022-08-23T10:26:19.9921373Z I0823 10:26:19.990059 11303 server.go:301] OnPodUpdate testns1/testpod1 -> testns1/testpod1
2022-08-23T10:26:19.9921740Z I0823 10:26:19.990130 11303 pod.go:361] pod:testns1/testpod1 samplehost/
2022-08-23T10:26:19.9922095Z I0823 10:26:19.990171 11303 pod.go:361] pod:testns1/testpod1 samplehost/
2022-08-23T10:26:19.9922408Z I0823 10:26:19.990188 11303 server.go:309] OnPodDelete
2022-08-23T10:26:19.9922846Z I0823 10:26:19.990191 11303 server.go:301] OnPodUpdate testns1/testpod2 -> <nil>
2022-08-23T10:26:19.9923197Z I0823 10:26:19.990228 11303 pod.go:361] pod:testns1/testpod2 samplehost/
2022-08-23T10:26:19.9923506Z I0823 10:26:19.990319 11303 server.go:295] OnPodUpdate
2022-08-23T10:26:19.9923940Z I0823 10:26:19.990324 11303 server.go:301] OnPodUpdate <nil> -> testns1/testpod2
2022-08-23T10:26:19.9924272Z I0823 10:26:19.990365 11303 pod.go:361] pod:testns1/testpod2 samplehost/
2022-08-23T10:26:19.9924450Z
2022-08-23T10:26:19.9924645Z ------------------------------
2022-08-23T10:26:19.9924977Z • Failure [0.001 seconds]
2022-08-23T10:26:19.9925202Z policyrules testing
2022-08-23T10:26:19.9925933Z /home/runner/work/multi-networkpolicy-iptables/multi-networkpolicy-iptables/pkg/server/policyrules_test.go:222
2022-08-23T10:26:19.9926745Z egress rules podselector/matchlabels [It]
2022-08-23T10:26:19.9927553Z /home/runner/work/multi-networkpolicy-iptables/multi-networkpolicy-iptables/pkg/server/policyrules_test.go:694
2022-08-23T10:26:19.9927844Z
2022-08-23T10:26:19.9928003Z Expected
2022-08-23T10:26:19.9928333Z     <string>: "...-TO -m comm..."
2022-08-23T10:26:19.9928580Z to equal
2022-08-23T10:26:19.9928913Z     <string>: "...-TO -o net1..."
2022-08-23T10:26:19.9929195Z
2022-08-23T10:26:19.9929681Z /home/runner/work/multi-networkpolicy-iptables/multi-networkpolicy-iptables/pkg/server/policyrules_test.go:768
```
Anyway, even if a benchmark is the most rigorous way to remove performance doubts, consider having a look at the code: the `iptableBuffer` variable is created, used, and finalized in `server.generatePolicyRules(*v1.Pod, *controllers.PodInfo)`, which is invoked once per target pod. The number of invocations of `newIptableBuffer()` should therefore be the same as that of `FinalizeRules()`, but maybe I'm missing some usage of it.
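As a self-contained toy of that lifecycle (every name here is an illustrative stand-in for the real `newIptableBuffer()`/`FinalizeRules()` pair, not the project code):

```go
package main

import (
	"bytes"
	"fmt"
)

type iptBuffer struct{ filter bytes.Buffer }

func newIptBuffer() *iptBuffer { return &iptBuffer{} }

func (b *iptBuffer) finalizeRules() string { return b.filter.String() }

// generatePolicyRules mirrors the described flow: one buffer is
// created, filled, and finalized per target pod, so the creation and
// finalization counts match one-to-one.
func generatePolicyRules(pod string) string {
	buf := newIptBuffer() // created once per pod
	fmt.Fprintf(&buf.filter, "-A MULTI-EGRESS -m comment --comment %q -j RETURN\n", pod)
	return buf.finalizeRules() // finalized once per creation
}

func main() {
	for _, pod := range []string{"testns1/testpod1", "testns1/testpod2"} {
		fmt.Print(generatePolicyRules(pod))
	}
}
```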
My concern with your design is the `bytes.NewBuffer()` call in `FinalizeRules()`, because `bytes.NewBuffer()` may invoke a memory allocation. In this case, I suppose we should benchmark with a focus on the buffer structure (i.e. your change vs. the current code) to understand how many memory allocations are invoked, in a local test rather than in CI.
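Such a comparison could be expressed as two sub-benchmarks over the buffer structures, along these lines (a sketch with stand-in rule strings, not the project code):

```go
package server

import (
	"bytes"
	"testing"
)

var sampleRules = []string{"-A CHAIN1 -j RETURN\n", "-A CHAIN2 -j RETURN\n"}

// Run with: go test -bench=BufferStructure -benchmem ./pkg/server/
func BenchmarkBufferStructure(b *testing.B) {
	b.Run("reused-buffer", func(b *testing.B) { // current design
		var buf bytes.Buffer
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			buf.Reset()
			for _, r := range sampleRules {
				buf.WriteString(r)
			}
		}
	})
	b.Run("new-buffer-per-call", func(b *testing.B) { // proposed change
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			buf := bytes.NewBuffer(nil)
			for _, r := range sampleRules {
				buf.WriteString(r)
			}
		}
	})
}
```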
Let me close this PR since there has been no update. Please file a new PR if you want to proceed. Thank you for the PR!
Hi, I'm trying to figure out how to implement IPv6 support, and this PR is meant to make the code a little clearer. None of the changes should affect the overall component behavior, I think.