Open asimpleidea opened 5 years ago
Thanks @SunSince90 for reporting this issue. If I'm not mistaken, this was working before. We definitely need to identify whether this is a bug or an environment setup issue.
I can access the service via cluster IP from a pod (with an IP in the `--pod-network-cidr` range) on the remote host. Accessing the service from the host, or from the pcn-k8s pod (which uses the host IP), does not seem to be covered by the pcn_k8s design, and I am not sure whether we should support it -- I didn't see the need, because if the service must be reachable from outside the cluster, pcn_k8s can support NodePort.
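For access from outside the cluster, a NodePort service could be created instead; a minimal sketch (the service name `apiserver-np` is made up for this example, and the pod label comes from the reproduction steps below):

```shell
# Expose the test pod through a NodePort service so it is reachable
# from outside the cluster on <nodeIP>:<allocated-port>.
kubectl expose pod apiserver --name apiserver-np --port 80 --type NodePort
kubectl get svc apiserver-np   # shows the allocated node port
```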
The host-to-ClusterIP case is already supported (barring any new bug I am not aware of). In this case the packet is processed by kube-proxy, which performs the load balancing and changes the destination IP to that of the selected pod; after that the packet is picked up by polycube and forwarded as usual.
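One way to confirm that kube-proxy has rewritten the destination, assuming it runs in iptables mode (the ClusterIP below is a placeholder):

```shell
# kube-proxy's NAT rules show the ClusterIP -> pod IP translation.
# KUBE-SERVICES is the top-level chain kube-proxy installs;
# 10.96.0.10 stands in for the service's real ClusterIP.
sudo iptables -t nat -L KUBE-SERVICES -n | grep '10.96.0.10'
# The matching KUBE-SVC-* chain then DNATs to the selected pod's IP.
```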
You are right that the host-to-ClusterIP case is taken care of by kube-proxy, but based on my observation, since the request packet uses the host IP as its source IP, when the packet goes through the path host stack -> VXLAN tunnel -> physical NIC -> remote VXLAN tunnel -> pod, it is dropped by the kernel due to the RP (reverse-path) filter, since the route back to the source IP points to a different interface.
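The diagnosis can be checked against the kernel's reverse-path filter setting; a quick sketch using the standard Linux sysctl paths:

```shell
# 0 = off, 1 = strict, 2 = loose. In strict mode the kernel drops
# packets whose source address would be routed out via a different
# interface than the one they arrived on -- the symptom described here.
cat /proc/sys/net/ipv4/conf/all/rp_filter
# Switching to loose mode is one way to verify the diagnosis
# (a debugging aid, not a recommended fix):
# sudo sysctl -w net.ipv4.conf.all.rp_filter=2
```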
This is more of a routing issue with kube-proxy: when wget uses the VXLAN interface address as the source IP of the request packet, the packet goes through; on the other hand, if wget uses the physical interface IP to build the request packet, the problem occurs.
A workaround is to use the `--bind-address` option to bind to the VXLAN interface when sending the request.
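Concretely, the workaround looks like this (the VXLAN interface address and the ClusterIP below are placeholders):

```shell
# Find the address assigned to the VXLAN interface (the interface
# name varies by setup), then force wget to use it as source address
# so the reply path matches the reverse-path check.
ip -4 addr show                                      # locate the VXLAN address
wget --bind-address=10.10.0.1 http://10.96.0.10:80   # hypothetical IPs
```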
Describe the bug
As per the title, if the pods are on different nodes, they cannot be accessed when called through their service.
To Reproduce
For simplicity, let's suppose you have two nodes: a `master` and a `worker`. Deploy a test pod that will be scheduled on `worker`:

```
kubectl run apiserver --image=nginx --labels app=apiserver --expose --port 80
```

Wait until it is running and get its `podIP` and its service's `clusterIP`.

Now, get inside a pod that is running on the same node as the pod just deployed, i.e. `polycube-worker`, and try to access the pod.

Now, do the same from a pod running on a different node, i.e. `polycube-master`.

Expected behavior
The pod should be accessible, and the following message should be displayed when performing the steps above (the message is cut here):
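The access attempts from the reproduction steps can be sketched as follows (the pod names are placeholders, and wget is assumed to be available inside the pods):

```shell
# Find where the test pod landed and the service's ClusterIP.
kubectl get pod apiserver -o wide      # shows the pod IP and its node
kubectl get svc apiserver              # shows the ClusterIP

# From a pod on the same node (expected to work), then from a pod on
# a different node (fails, per this report):
kubectl exec <pod-on-worker> -- wget -qO- http://<clusterIP>:80
kubectl exec <pod-on-master> -- wget -qO- http://<clusterIP>:80
```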
Please tell us about your environment:
4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Additional context