dropdeadfu opened 3 years ago
Bump. I have also created a ticket with AWS's official support regarding this.
I tested Windows Server Core 2004 (host) with the 2004 base image successfully: traffic gets routed correctly instead of NATed. This issue only applies to Windows Server < 2004 (mainly 2019).
Summary
When I enable a policy that allows pods in one namespace to communicate with services in the same namespace, Windows pods can no longer reach any service inside that namespace. The rest of the cluster (Linux nodes and pods) works as expected and can communicate.
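For reference, the policy in question is along these lines (a minimal sketch of an allow-same-namespace policy; the policy name and namespace are placeholders, not taken from my cluster):

```yaml
# Hypothetical reconstruction of the policy described above:
# allow all pods in the namespace to receive traffic from
# any other pod in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # placeholder name
  namespace: my-namespace      # placeholder namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}      # allow traffic from any pod in the same namespace
  policyTypes:
    - Ingress
```

With this applied, traffic between Linux pods in `my-namespace` is allowed as expected, but traffic from Windows pods is dropped.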
Description
After a lot of trial and error I could pinpoint the issue to the fact that traffic from pods on the Windows node is not routed directly but NATed through the host. (I documented much of the process in this issue on the Calico project: https://github.com/projectcalico/calico/issues/4936.) As a workaround I whitelisted the IP of the Windows host in one of the namespaces, which is very ugly and not really scalable.
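The workaround looks roughly like this (a sketch; the node IP, policy name, and namespace are illustrative, not the real values):

```yaml
# Hypothetical sketch of the workaround: additionally allow the
# Windows host's own IP, since pod traffic arrives NATed to that
# address instead of the pod's IP.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-windows-host     # placeholder name
  namespace: my-namespace      # placeholder namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.42.17/32   # illustrative Windows node IP
  policyTypes:
    - Ingress
```

This is why it doesn't scale: every Windows node's IP would need to be listed per namespace, and the list changes whenever nodes are replaced.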
As a start, it would be nice to know how you intended this to function, since I can't find any documentation specifying the actual expected behaviour.
Expected Behavior
When running Windows nodes in EKS, I expect kube-proxy to route traffic directly to the VPC, as is the case for the Linux nodes in the rest of the cluster.
Observed Behavior
Kube-proxy on Windows applies NAT to traffic originating from the pods.
Environment Details
- EKS with Kubernetes 1.19
- Calico version 3.20
- WindowsServer2019FullContainer AMI for the Windows nodes
- Linux nodes are managed