vaskozl closed this 2 weeks ago
Those were my thoughts when I added the comment, and I kind of like the simplicity of the existing solution, which just needs one nftables rule ... I think it is a matter of getting numbers at this point.
I'm also thinking the podSelector optimisation alone probably won't really provide a significant saving, assuming a blanket BANP or other policies covering the majority of workloads, while it might instead introduce extra load/bugs.
I found this comment quite interesting, as I was interested in an nftables network policies implementation. We could do this by creating a set. Then we can use two queues, one for when the PodIP is the source and one for when it's the destination, so we only need to evaluate egress and ingress respectively.
Something like:
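The original snippet wasn't preserved here, so the following is a sketch of what was likely meant: a set holding the IPs of policy-selected pods on the node, plus two rules that queue traffic to userspace only in the relevant direction. The table/chain names, hook, priority and queue numbers are illustrative, not taken from the actual implementation.

```nft
table inet kube-network-policies {
    set netpoledPodsSet {
        type ipv4_addr
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        # PodIP is the source: only egress policies need evaluating
        ip saddr @netpoledPodsSet queue num 100
        # PodIP is the destination: only ingress policies need evaluating
        ip daddr @netpoledPodsSet queue num 101
    }
}
```

(An equivalent `ip6 saddr`/`ip6 daddr` pair and an `ipv6_addr` set would be needed for dual-stack clusters.)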
This partially solves the double processing in https://github.com/kubernetes-sigs/kube-network-policies/issues/10, assuming the @netpoledPodsSet set only includes pods currently running on the node. (We'd still process in userspace twice, but we'd do only half of the checks.) While this seems good in theory to me, I can't help but wonder whether the added complexity here is worth it, because now we have to maintain an up-to-date set of pod IPs on each node.
We could actually maintain a set per network policy (instead of one big one). If we do all that, then it's not much of a stretch to also maintain an ingress and an egress nftables set per netpol, each containing addr/protocol and port(range)?
Something like:
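Again, the original snippet is missing, so here is a hedged sketch of per-policy sets keyed on address, protocol and port (using `flags interval` so port ranges can be expressed), matched directly in nftables. The policy name `netpol-foo`, the chain, and the overall verdict logic are illustrative assumptions:

```nft
table inet kube-network-policies {
    # Allowed peers for traffic into pods selected by netpol "foo"
    set netpol-foo-ingress {
        type ipv4_addr . inet_proto . inet_service
        flags interval
    }
    # Allowed peers for traffic out of pods selected by netpol "foo"
    set netpol-foo-egress {
        type ipv4_addr . inet_proto . inet_service
        flags interval
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        # Ingress: match on source address, protocol and destination port
        ip saddr . meta l4proto . th dport @netpol-foo-ingress accept
        # Egress: match on destination address, protocol and destination port
        ip daddr . meta l4proto . th dport @netpol-foo-egress accept
    }
}
```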
(We'd actually also need a noPort ingress/egress set and rule, as inet_service must be a port or range.) TL;DR: if we bother to maintain one set, should we not maintain five and do everything in nftables (future features aside)?
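For the noPort case, a sketch of what such an extra set could look like: a concatenation element of type `... . inet_service` cannot express "any port", so rules that allow all ports would need a separate address-only set (names again illustrative):

```nft
set netpol-foo-ingress-noport {
    type ipv4_addr
}
# Matched without a port component:
# ip saddr @netpol-foo-ingress-noport accept
```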