Closed · zeeke closed this 2 years ago
I opened https://github.com/k8snetworkplumbingwg/multi-networkpolicy-iptables/issues/14 a few weeks ago, which raised (what I think is) a similar issue.
I could not validate it worked - it didn't work for my scenario - and I currently do not have enough time to investigate, but this could nevertheless be helpful.
Yes, it seems to be the same problem. Looking deeper, it seems iptables rules only get generated if the namespace selector matches the target pod's namespace. So the problem arises with two namespaces (nsX, nsY), two pods (podA in nsX, podB in nsY), and a multi-network policy in nsX that targets podA and references nsY in its ingress/egress rules.
I suppose it is almost the scenario of https://github.com/k8snetworkplumbingwg/multi-networkpolicy-iptables/issues/14
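The suspected faulty logic can be sketched as a tiny stand-alone check (a hypothetical simplification, not the actual multi-networkpolicy-iptables code; the helper name and labels are mine): rules seem to be emitted only when the policy's namespace selector matches the *target* pod's namespace, whereas it should be evaluated against the *peer* pod's namespace.

```go
package main

import "fmt"

// matchesSelector reports whether nsLabels satisfies a MatchLabels-style
// selector (every key/value pair must be present).
func matchesSelector(selector, nsLabels map[string]string) bool {
	for k, v := range selector {
		if nsLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// The policy lives in nsX, targets podA, and allows traffic from
	// namespaces labeled nsname=nsY.
	peerSelector := map[string]string{"nsname": "nsY"}

	targetNsLabels := map[string]string{"nsname": "nsX"} // podA's namespace
	peerNsLabels := map[string]string{"nsname": "nsY"}   // podB's namespace

	// Suspected buggy behavior: the selector is evaluated against the
	// *target* pod's namespace, so no iptables rule is generated.
	fmt.Println("checked against target namespace:", matchesSelector(peerSelector, targetNsLabels))

	// Expected behavior: evaluate it against the *peer* namespace instead.
	fmt.Println("checked against peer namespace:", matchesSelector(peerSelector, peerNsLabels))
}
```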
FWIW, my scenario did feature 2 different namespaces: I want only pods from namespace A to reach namespace B.
@zeeke would you be willing to either re-open this or create another issue, describing the details about the namespace membership?
| Totals | |
|---|---|
| Change from base Build 2282410776: | 0.2% |
| Covered Lines: | 831 |
| Relevant Lines: | 1661 |
I fixed the `ingress rules namespace selector` test case as suggested by @s1061123: with two different net-attach-defs we need to invoke `buf.renderIngress(...)` with both network attachments. Now it's green and I think it's worth keeping in the suite.
I also added another test case, `enforce policy with net-attach-def in a different namespace than pods`, with the following scenario (@maiqueb maybe it's more similar to your case):

- namespaces `default`, `testns1`, `testns2`
- `default/net-attach1`
- pods `testns1/testpod2` and `testns2/testpod2`
- `testns/ingressPolicies1` with:

```go
NamespaceSelector: &metav1.LabelSelector{
	MatchLabels: map[string]string{
		"nsname": "testns2",
	},
},
```

I then invoke

```go
buf.renderIngress(s, podInfo1, 0, ingressPolicies1, []string{"default/net-attach1"})
```

as it had policy-for `default/net-attach1`.
It should be a consistent scenario, even though I'm not sure how frequent it can be in real use cases.
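Under that scenario, the namespace selector should admit only the pod living in `testns2` as an ingress peer. A stand-alone sketch of the expected selection (helper names and namespace labels are mine, assuming each test namespace carries an `nsname` label equal to its own name, as in the fixture above):

```go
package main

import (
	"fmt"
	"strings"
)

// Labels assumed on the test namespaces: nsname=<namespace name>.
var nsLabels = map[string]map[string]string{
	"testns1": {"nsname": "testns1"},
	"testns2": {"nsname": "testns2"},
}

// admitted reports whether the policy's MatchLabels selector admits the
// namespace a "namespace/name" pod lives in.
func admitted(matchLabels map[string]string, pod string) bool {
	ns := strings.SplitN(pod, "/", 2)[0]
	for k, v := range matchLabels {
		if nsLabels[ns][k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Selector taken from ingressPolicies1 above.
	selector := map[string]string{"nsname": "testns2"}
	for _, pod := range []string{"testns1/testpod2", "testns2/testpod2"} {
		fmt.Printf("%s admitted as ingress peer: %v\n", pod, admitted(selector, pod))
	}
}
```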
Fixed as @s1061123 suggested, tests are now all green.
It seems I'm not able to reproduce the bug I found in a real cluster with this kind of test. Going to dig deeper.
Meanwhile, I marked this PR as "ready for review", in case you think these tests can be useful.
@zeeke, thank you for incorporating my comments. These tests are certainly useful to improve CI, so let me merge this.
This PR is about supporting `NamespaceSelector` for ingress/egress policies. I added a test with two pods in two different namespaces and I was expecting an iptables output similar to the pod-selector one, but no rule gets generated. The scenario is like the one described here.
If you confirm it's a bug, I can take ownership of the fix, with some suggestions.
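For reference, the two-namespace scenario could be expressed with a manifest along these lines (illustrative only: the pod labels, policy name, and net-attach-def name are hypothetical, and the API group and `policy-for` annotation follow the upstream multi-networkpolicy CRD as I understand it):

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-from-nsy   # hypothetical name
  namespace: nsX
  annotations:
    k8s.v1.cni.cncf.io/policy-for: macvlan-net  # hypothetical net-attach-def
spec:
  podSelector:
    matchLabels:
      app: podA           # hypothetical label on the target pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              nsname: nsY
```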