yndai opened 8 months ago
Moving this to Network Policy controller repo. Fix for this issue in NP controller is currently being rolled out to all the clusters. Will update here once the rollout is complete.
Update: I see our managed cluster has been updated to v1.27.10-eks-508b6b3 and the managed platform version to eks.13.
In the reproduction example from my post, the `from.podSelector` rule is now working; however, the `from.ipBlock` rule still does not work for a named port. Replacing `port: web-service` with `port: 80` produces a correct PolicyEndpoint.
Specifically, from the above example:
Expected:

```yaml
apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
[...]
spec:
  ingress:
    - cidr: 172.17.0.0/16
      ports:
        - port: 80
          protocol: TCP
    - cidr: <source Pod IP>
      ports:
        - port: 443
          protocol: TCP
[...]
```
But got:

```yaml
apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
[...]
spec:
  ingress:
    - cidr: <source Pod IP>
      ports:
        - port: 443
          protocol: TCP
[...]
```
@yndai Any particular reason behind specifying a static Pod IP as opposed to using Pod/Namespace selectors? Pod IPs are ephemeral, so trying to understand the use case here. Right now, we don't support named ports when pod IPs are specified as static IPs in the NP resource (and we also don't support Service VIPs for static pod IPs).
@achevuru I think you might be misunderstanding my example (correct me if I am wrong):
In my net policy I have an ingress rule for a CIDR range (not a pod IP):

```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 172.17.0.0/16
    ports:
      - protocol: TCP
        port: web-service # named port on target pod = 80
```
In the PolicyEndpoint I expect a rule like this:

```yaml
spec:
  ingress:
    - cidr: 172.17.0.0/16
      ports:
        - port: 80
          protocol: TCP
```
But I get:

```yaml
spec:
  ingress: []
```
That is, the CIDR ingress rule is not created at all. If I change the network policy rule to target the numeric port like so, then it works:

```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 172.17.0.0/16
    ports:
      - protocol: TCP
        port: 80
```
Basically, the issue persists for ipBlock rules, but was fixed for podSelector rules.
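For contrast, the now-fixed case can be sketched as a podSelector-based rule with the same named port (the labels here are illustrative, not from the original report):

```yaml
# After the fix, a named port resolves correctly when the
# source is a podSelector (labels are hypothetical):
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: client
    ports:
      - protocol: TCP
        port: web-service  # resolved against the target pod's containerPort (80)
```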
@yndai Understood. We should be able to extend the named port support to ingress rules, at the minimum for ipBlock, but I would like to call out that if the CIDR range you provide is trying to capture the cluster's pod IP range, it will run into issues if the pod to which the NP applies tries to use Service VIPs of those pods (egress rules).
@achevuru Thank you, and yes, I understand the caveats
@yndai thanks for reporting this. After discussing internally, as @achevuru mentioned there could be some caveats by using IPs, we will add the named ports support for ipBlocks for Ingress only. The change will be made by one of our members soon. Thanks again for testing them quickly!
Another note for anyone else possibly facing this: for rules that allow all traffic to specific ports, e.g.

```yaml
ingress:
  - ports:
      - port: https
        protocol: TCP
```

under the hood this maps to a CIDR rule on `::/0`, so named ports do not work on such rules either.
What happened:

Summary: When creating a NetworkPolicy with an ingress rule that allows traffic to a named port on a target pod, the ingress rule is not created in the resulting PolicyEndpoint (and traffic is not allowed either).

Additional details:
- Using a numerical port in the NetworkPolicy rule works fine
- If the rule specifies a source Pod and that source Pod has the same named ports, the rule also works fine (regardless of the numerical values declared in the source pod; see the "Anything else we need to know?" section)

What you expected to happen:
When specifying a named port in a NetworkPolicy rule, the port should be mapped to the corresponding numerical port on the target Pod, if available.

How to reproduce it (as minimally and precisely as possible):
Example: run `kubectl apply -f` on:

We expect this in the resulting PolicyEndpoint:

But instead got this:
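The applied manifests and the two PolicyEndpoint outputs above were elided during extraction. Based on the values quoted elsewhere in the thread (CIDR 172.17.0.0/16, named port web-service = 80, a second rule on port 443, and the PolicyEndpoint name allow-web-traffic-fjz4q), a plausible reconstruction of the applied NetworkPolicy is (labels are assumptions):

```yaml
# Reconstructed sketch of the NetworkPolicy from the report;
# selectors and labels are hypothetical
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic
spec:
  podSelector:
    matchLabels:
      app: web-app            # assumed target pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
      ports:
        - protocol: TCP
          port: web-service   # named port on the target pod = 80
    - from:
        - podSelector:
            matchLabels:
              app: source-pod # assumed source pod label
      ports:
        - protocol: TCP
          port: 443
```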
Anything else we need to know?:
Another weirdness: if the source Pod itself has the same named ports defined in the ingress rule, the rule that specifies the source Pod actually works as expected! E.g., if you delete the source pod in the above example and apply this instead:
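The replacement manifest was elided during extraction; presumably it is a source pod that declares the same named port in its own container spec, something like (pod name, labels, and image are assumptions):

```yaml
# Hypothetical source pod declaring the same named port;
# per the report, its numeric value need not match the target's 80
apiVersion: v1
kind: Pod
metadata:
  name: source-pod        # assumed name
  labels:
    app: source-pod
spec:
  containers:
    - name: client
      image: busybox:1.36
      command: ["sleep", "infinity"]
      ports:
        - name: web-service
          containerPort: 8080
          protocol: TCP
```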
We now get:
The CIDR source rule is still missing, however. This makes me think there is an incorrect check against the source Pod, instead of the target Pod, when mapping named ports to their numerical values.

Environment:
- Kubernetes version (use `kubectl version`): Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.8-eks-8cb36c9", GitCommit:"fca3a8722c88c4dba573a903712a6feaf3c40a51", GitTreeState:"clean", BuildDate:"2023-11-22T21:52:13Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"linux/amd64"}
- CNI version: amazon-k8s-cni:v1.16.0-eksbuild.1
- Network policy agent version: aws-network-policy-agent:v1.0.7-eksbuild.1
- OS (e.g.: `cat /etc/os-release`): Amazon Linux 2
- Kernel (e.g. `uname -a`): Linux ip-10-1-61-179.ec2.internal 5.10.192-183.736.amzn2.x86_64 #1 SMP Wed Sep 6 21:15:41 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux