Closed amargheritini closed 1 year ago
Hi @amargheritini ,
A. Please also provide details about the CNI.
B. Submariner components are not involved in load-balancing a service IP to its endpoints/backend pods. Assuming the following use case:
Submariner components (gateway, route-agent) will handle packet forwarding from cluster B to cluster A; next, the submariner-gateway on cluster A will decrypt the packet, and from that point kube-proxy (or whichever component is responsible for distributing traffic to service endpoints) should distribute the packet to one of the backend pods.
C. K8s load distribution to service endpoints is random, but the distribution should be approximately equal for non-trivial loads, e.g. when we run tests with 1000 requests you can see it is close to equal.
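The near-equal distribution for non-trivial loads can be sanity-checked with a quick simulation (no cluster involved; the backend names here are made up, not from this issue):

```shell
# Simulation only: pick one of three hypothetical backends at random 1000 times,
# mimicking random endpoint selection, then print how often each was chosen.
backends=(pod-a pod-b pod-c)
for i in $(seq 1 1000); do
  echo "${backends[RANDOM % 3]}"
done | sort | uniq -c
```

With 1000 draws each backend should land close to a third of the requests; a single backend taking all of them would indicate no balancing at all.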
When you check it locally, do you run it from a pod running on the gateway node? Do you use the same client?
Check [1] for more details about K8s service traffic distribution.
[1] https://groups.google.com/g/kubernetes-users/c/lvfyKzUf-Vg?pli=1
(Colleague of amargheritini)
CNI is OVNKubernetes
Assuming your proposed use case: through a temporary bash shell within cluster A,
kubectl run -n nginx-test my-pod-in-clustera --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
a curl of my_svc_in_clustera is correctly balanced across all pods behind that service.
Through another temporary bash shell within cluster B,
kubectl run -n nginx-test my-pod-in-clusterb --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
a curl of my_svc_in_clustera is NOT correctly balanced across all pods behind that service.
How do we know that? Because each pod behind the my_svc_in_clustera service returns its own hostname. Locally, the returned hostname changes on every curl; from a remote pod, the hostname is always the same (i.e., no balancing between pods at the service level).
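As a side note, a small helper like the following (hypothetical, not from this issue) makes that hostname check repeatable by tallying the responses; inside the nettest pod you would point it at the service with something like check_distribution 100 curl -s my_svc_in_clustera, assuming the backends echo their hostname:

```shell
# Hypothetical helper: run the given request command N times and print how
# often each distinct response line occurred, most frequent first.
check_distribution() {
  local n=$1; shift
  for _ in $(seq 1 "$n"); do
    "$@"      # issue one request
    echo      # ensure a newline even if the response body lacks one
  done | sort | uniq -c | sort -rn
}
```

A balanced service shows several hostnames with similar counts; the remote-curl case described above would show a single hostname with the full count.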
my_svc_in_clustera is correctly exported from cluster A to cluster B: kubectl get -n submariner-operator serviceimport
includes the my_svc_in_clustera service alongside the correct IP assigned by cluster A to my_svc_in_clustera.
That IP address is reachable from any pod within cluster B, so I would rule out a connectivity issue.
> When you check it locally, do you run it from a pod running on the gateway node? Do you use the same client?
No, and the issue is not limited to running the test from a local pod on the gateway node.
With OVNK as the CNI, OVNK is also responsible for distributing traffic for local services between backend pods (using OpenFlow rules).
I want to verify how OVNK load-balances the traffic between the different pods for local services, regardless of Submariner.
So, could you please share the results of the following tests:
A. On cluster A, run a netshoot pod (with pod networking) on the same node the Submariner gateway pod is running on.
B. Repeat step A, but this time run the pod with host networking.
You can use [1] and [2] (download as ZIP and apply to your cluster) to deploy a netshoot daemonset with hostnet or podnet.
[1] https://gist.github.com/yboaron/82494eaff925e186f8dc3662f48f21b6 [2] https://gist.github.com/yboaron/bd5f913e59a0fb307b877b29e33d88a5
@giacomotontini @amargheritini Any feedback for @yboaron?
Closing for now, if there's any feedback feel free to re-open or open a new issue.
Hi!
The actual implementation is two K8s clusters based on RHOS, with ACM for global management.
Both clusters are deployed with Submariner.
After linking them, we exported and imported some services between the clusters; behind the services there are multiple pods serving the same apps.
What we found out is that when curling an exported remote service from a remote endpoint on a hub cluster, the answer is always provided by the same pod. Locally, generating a request inside the remote cluster from an endpoint to the same service (corresponding to the exported one), the answer is served in a round-robin fashion. Has this happened to anyone in the community? Thanks!