Closed: marekaf closed this issue 5 years ago
Any solution for this?
I didn't solve this using k8s federation; the project seems dead. I recommend checking Istio 1.0 with the GKE multicluster setup. It uses alias IPs (beta) to give pods routable private IPs, and you can do cross-cluster service discovery with the Istio control plane. Not exactly what I wanted, but I will play with Istio a bit more.
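For the record, alias IPs require a VPC-native cluster; a rough sketch of creating one (cluster name and zone are placeholders, not from this thread):

# Hypothetical example: create a VPC-native GKE cluster so pod IPs
# are routable alias IPs (name and zone are placeholders).
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --enable-ip-alias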
@bartimar thanks, dude. I will take a look at Istio.
I followed this Istio setup (https://istio.io/docs/examples/multicluster/gke/). Looks like pod communication between 2 GKE clusters over DNS goes through a Load Balancer. What I wanted was to use DNS to look up a pod in one GKE cluster from a pod in another GKE cluster, pinpointing an exact DNS name such as opscenter-0.opscenter.default.svc.cluster.local, where opscenter itself is a StatefulSet. Not sure if Istio solves this kind of use case.
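To illustrate, this is the kind of lookup I'd want to work from a pod in the other cluster (a sketch; opscenter is the headless service backing my StatefulSet):

# Per-pod DNS name exposed by a headless service in front of a StatefulSet;
# by default this only resolves inside the owning cluster.
nslookup opscenter-0.opscenter.default.svc.cluster.local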
The Federation documentation clearly states this should be a supported use case, but I'm also facing the same issue.
Docs: Cross-cluster Service Discovery using Federated Services; check the section "From pods inside your federated clusters".
Did anyone find any other workarounds, short of using Istio?
I'd rather pursue a solution using Istio, since the K8s Federation project is not promising.
I followed this Istio setup (https://istio.io/docs/examples/multicluster/gke/). Looks like pod communication between 2 GKE clusters over DNS goes through a Load Balancer.
Yep, all traffic goes through the istio-ingressgateway, which is a LoadBalancer service type. That means all traffic always passes through a single cluster's ingress gateway. Maybe having more Istio masters would help?
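You can confirm the service type on the gateway itself (a quick check, nothing cluster-specific assumed):

# The ingress gateway is a LoadBalancer service with an external IP,
# so cross-cluster traffic enters through it.
kubectl get svc istio-ingressgateway -n istio-system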
Got cross-cluster service discovery working with Google Cloud DNS. There was a swallowed exception deep inside kube-dns: the kube-system/kube-dns service account is missing VIEW permissions to list nodes. Run the following command in all clusters and ensure the service type is LoadBalancer:
kubectl create clusterrolebinding nodes-kube-dns --clusterrole=system:node --serviceaccount=kube-system:kube-dns
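To double-check that the binding fixed the permission (an optional verification step, not part of the original fix):

# Should print "yes" once the clusterrolebinding above exists.
kubectl auth can-i list nodes \
  --as=system:serviceaccount:kube-system:kube-dns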
Good luck!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Any progress on this?
@marekaf I would suggest asking in the SIG-Multicluster Slack channel ( https://kubernetes.slack.com/messages/C09R1PJR3 ) or mailing list https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster .
Or file an issue in https://github.com/kubernetes-sigs/federation-v2
I'm not sure why this repo is still accepting updates instead of redirecting to the one above. I'll try to have that fixed.
@marekaf Also, did https://github.com/kubernetes/federation/issues/274#issuecomment-430120387 not work for you?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Hi, I'm trying to set up a working demo on GCP: 2 GKE clusters, each in a different region, say:
service A has one pod in cluster-BR and another pod in cluster-EU
service B (10.23.243.18) has only one pod (IP 10.5.6.3), in cluster-EU
What I want to achieve: service A in cluster-BR wants to communicate with service B. It queries the local DNS name service-b.default.federation; kube-dns finds there are no healthy endpoints for service-b in cluster-BR, so it falls back to Google Cloud DNS (managed by the federation control plane), which returns the private IP 10.23.243.18, the IP of the service-b Kubernetes service running in cluster-EU.
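In other words, from a pod in cluster-BR I'd expect something like this (a sketch of the desired behavior, using the IPs above):

# Desired (not currently working) lookup from a pod in cluster-BR:
nslookup service-b.default.federation
# expected answer: 10.23.243.18 (the ClusterIP of service-b in cluster-EU)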
Why is this not working? I'm able to communicate from a pod in cluster-EU to pods in cluster-BR using their local pod IPs, but I'm not able to hit the services' IPs (virtual IPs? I can't see them from outside the cluster), no matter whether the service is ClusterIP or NodePort. I don't want to use external load balancers, and internal load balancers are regional, so I can't use those either. Google Cloud DNS with federation only picks up public IPv4 addresses from LoadBalancer service specs; it ignores everything else.
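Roughly what I observe when testing from a pod in cluster-BR (pod name and port are placeholders):

# Reaching the pod IP in cluster-EU directly works:
kubectl exec -it some-pod-in-cluster-br -- curl http://10.5.6.3:8080
# Reaching the service's ClusterIP in cluster-EU times out:
kubectl exec -it some-pod-in-cluster-br -- curl http://10.23.243.18:8080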
Am I the only one trying to solve this problem? What am I missing?
Thanks :)