kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

FEATURE REQUEST: Support GCE Internal Load Balancer #33483

Closed tafypz closed 7 years ago

tafypz commented 8 years ago

It would be great to be able to use GCE's new internal load balancer feature. Configuration could be done via the same type of config as the AWS internal load balancer. This would avoid having to create bastion routes to access non-public services living in GKE.
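
For context, a minimal sketch of the AWS pattern being referenced: on AWS, a `type: LoadBalancer` Service carries a cloud-provider annotation that makes the resulting load balancer internal. The service name, selector, and ports below are placeholders; the request here is for an equivalent knob on GCE/GKE.

```yaml
# Sketch of the existing AWS-style configuration (names/ports are hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # AWS cloud-provider annotation; the expected value has varied across releases
    # (historically a CIDR such as 0.0.0.0/0, later also "true").
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```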

pdecat commented 7 years ago

FWIW, GCE Internal Load Balancers have been in the compute API v1 since November 21st: https://github.com/google/google-api-go-client/blob/master/compute/v1/compute-gen.go#L1803

thockin commented 7 years ago

Can you tell me how you're hoping to use ILB? There are some issues with it that I have not yet worked through wrt Kubernetes, and we're looking for more user input on what result you hope to achieve.

tafypz commented 7 years ago

@thockin The usage is fairly simple: we have services that need to be exposed outside a Kubernetes cluster (we use Container Engine). Using the typical load balancer exposes a public IP, which in our case is not necessary. We used to run Kubernetes on AWS and used the AWS internal load balancers in our service config. So, all in all, simple usage: access Kubernetes services from outside a Container Engine cluster and expose the services to Compute Engine instances without exposing them to the public. Right now we create bastion routes to access k8s services, but it is not ideal.

thockin commented 7 years ago

Thanks. That fits one of the expected usage modes, and is actually one of the easier ones to implement. The truth is that nobody is working on it just now, but we're planning for it.

pdecat commented 7 years ago

Same use case here.

rvrignaud commented 7 years ago

Very same use case here. For now we are adding static routes for servicesIpv4Cidr to the cluster nodes; this is really ugly.

kuroneko25 commented 7 years ago

Exact same use case. We are on GKE and currently use bastion routes. AFAIK that is the best alternative at this point.

ipadavic commented 7 years ago

Same use case. Need to access GKE services from GCE instances.

replicant0wnz commented 7 years ago

+1

evaldasou commented 7 years ago

No updates on this one?

thockin commented 7 years ago

We're working on a couple of things that need to land before this can really be done, and working through the limitations.

vdm commented 7 years ago

Out of the box:

  1. GKE pods can ping GCE VMs by IP and by name, but not vice versa.
  2. GCE VMs can ping GKE pods by IP (obtained from ip addr inside the pod), but not by name. This uses a GCE route generated by GKE to a "VMs" /24 subnet.

A GCE route to a subnet including generated K8s Service IPs can be added manually, with the same next hop as the generated route. This allows GCE VMs to access GKE Service IPs without using a GCE internal load balancer. The metadata DNS server does not know about the Service names, but it is possible to use statically configured hostnames pointing to the Service IPs.

Instead of exposing GCE internal load balancers, GKE could generate a GCE route for Services and publish Pod/Service names to the GCE metadata/DNS server.

thockin commented 7 years ago

The downside of this technique (we call it service bastion routes) is that if the node to which you are routing goes down, the service gateway is down. It needs a rectifier to check and choose a new node or nodes when it fails, if you want to offer any sort of availability SLA.

kuroneko25 commented 7 years ago

How much integration are we currently planning to have between ILB and Kubernetes/GKE? Will the ILB be able to route to a specific service VIP or pod? Or is it only able to forward requests to a host port, so that we need an intermediate proxy layer running in the cluster (e.g. nginx, HAProxy, Envoy)?

GrantGochnauer commented 7 years ago

+1

edenxia commented 7 years ago

We have the same requirement, with a couple of scenarios: even though we can set up one cluster across zones, we still need DB replication and internal service calls across regions.

denisa commented 7 years ago

+1

jlewi commented 7 years ago

+1

itamaro commented 7 years ago

Not directly related to ILB, but the new Identity-Aware Proxy could solve the same use-case, so as far as I'm concerned - whichever of these integrates with GKE first wins :-)

API for IAP

peay commented 7 years ago

I have the same use case too. Internal load balancers would be great, although automatic generation of the service bastion route and exporting service names to the metadata/DNS server as mentioned by @vdm would be even better.

This would greatly enhance our ability to interface services in GKE with other GCP products easily, especially in a CI/CD environment where there can be many namespaces and services.

bviolier commented 7 years ago

+1. We would primarily use it as a way to switch services from on-premises to GCP (through a VPN between the two).

bviolier commented 7 years ago

Would another possible solution be to VPN into the cluster through Google Cloud VPN (with an OpenVPN service, for instance) and in that way make the ClusterIPs available?

Do note that you would then also need a non-changing ClusterIP, or you would have to expose cluster DNS as well so you can use the service name.
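
On the non-changing ClusterIP point: a specific ClusterIP can be requested at creation time via `spec.clusterIP`, as in this minimal sketch (the address and names are hypothetical; the IP must be a free address inside the cluster's service CIDR):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  clusterIP: 10.63.240.10     # hypothetical; must be unused and within servicesIpv4Cidr
  selector:
    app: my-app
  ports:
  - port: 5432
    targetPort: 5432
```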

dgpc commented 7 years ago

FYI for the folks talking about using VPN and the Internal Load Balancer: the documentation on internal load balancers specifically states that it cannot receive traffic through a VPN tunnel; it can only receive traffic from GCE nodes in the same region (and we have tested that this indeed does not work).

You cannot send traffic through a VPN tunnel to your load balancer IP.

https://cloud.google.com/compute/docs/load-balancing/internal/

We ran into this problem, and worked around it by setting up NodePort type Services, then configuring BIND on our non-GCP machines to forward requests to Kube DNS. That way applications outside GCP could resolve the Node IPs, and connect to them via the VPN tunnel.
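
For reference, the NodePort half of that workaround is just a Service of `type: NodePort`; a minimal sketch with placeholder names and ports (the BIND forwarding to kube-dns is configured separately on the non-GCP side):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # optional; must fall in the cluster's NodePort range (default 30000-32767)
```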

nicksardo commented 7 years ago

/assign

bviolier commented 7 years ago

@dgpc Thanks for the info! The VPN would be a temporary solution for us, and if the GCP VPNs don't work, we could also go for some GCE instances that take on this role :-)

Happy to see that the issue is assigned and being worked on.

thockin commented 7 years ago

For those who are waiting for ILB: how often do you need BOTH an ILB and an external LB on the same Service?

tafypz commented 7 years ago

@thockin Our usage is such that we would not require both external and internal on the same service. If having both slows down development, I would vote for having only one type of LB per service first, and both as an enhancement later.

pdecat commented 7 years ago

Hi @thockin, in our current setup we do not need both an internal and an external LB on the same Service.

We are using ILB to expose non-public-facing services from one K8s cluster to another, both in distinct subnets.

itamaro commented 7 years ago

For those who are waiting for ILB: how often do you need BOTH an ILB and an external LB on the same Service?

We have no use case for both an ILB and an external LB on the same service.

We need external LBs for externally accessible services, and ILBs for simple, direct access to non-public-facing services (a.k.a. "internal services", "dev/test stacks", etc.) from outside the GKE cluster, e.g. from personal dev machines or the office network.

rvrignaud commented 7 years ago

We have the same use case as @itamaro, so we don't need both an internal and an external LB for one particular service.

rnavarro commented 7 years ago

@thockin Same as @itamaro

We don't have any use cases where we deploy both a public and internal LB for a single service.

kuroneko25 commented 7 years ago

For us a service is either internal or external but not both at the same time. So we don't have a case where a service needs to use both ILB and GLBC.

davidquarles commented 7 years ago

+1, and same as the previous commenters -- our only use case is internal.

jeffyecn commented 7 years ago

+1, only use case is internal.

EmiPhil commented 7 years ago

+1 only use case is internal

writer-jr commented 7 years ago

+1 for internal only

jlaham commented 7 years ago

+1 for separate intLB and extLB services.

If anyone does need both intLB and extLB, wouldn't it be as straightforward as defining two separate services (one intLB and one extLB), both pointing to the same deployment?
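
A sketch of that idea, with hypothetical names: two Services share the same selector (and thus the same Pods), one intended to get an internal LB and one a regular external LB. The internal-LB annotation is left as a placeholder here, since the GCE mechanism had not shipped at this point in the thread.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations: {}        # internal-LB annotation would go here once available
spec:
  type: LoadBalancer
  selector:
    app: my-app          # same selector as the external Service below
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```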

ualtinok commented 7 years ago

+1 for internal only

slavakl commented 7 years ago

@thockin Do you have any update or ETA on this?

nicksardo commented 7 years ago

ILB setup via the service controller is under development and should ship in 1.7.
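
For readers arriving later: as this eventually shipped (alpha in 1.7), the internal load balancer is requested with an annotation on a `type: LoadBalancer` Service. A minimal sketch with hypothetical names; verify the exact annotation key and value against the release notes / GKE docs for your version:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # Annotation as documented for GCE/GKE internal load balancing (alpha in 1.7);
    # check the docs for the exact value in your Kubernetes version.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```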

slavakl commented 7 years ago

thank you, @nicksardo.

roobert commented 7 years ago

@nicksardo: any word on whether ILBs will be accessible via GCP VPNs?

replicant0wnz commented 7 years ago

@roobert I like this question. Currently internal LBs can't be accessed via VPNs or from other projects :-( We currently have to roll our own HAProxy deployments for this very reason.

pires commented 7 years ago

@nicksardo is there a PR or a different issue one can subscribe to?

Also, can you fix the labels for this issue?

myaghini commented 7 years ago

Same case, same use. We need an internal LB for container cluster services, to access them from VMs and over VPN.

k8s-github-robot commented 7 years ago

@tafypz There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

astraverkhau commented 7 years ago

Same requirement here.

nicksardo commented 7 years ago

/sig network
/area platform/gce
/kind feature
/remove-area kube-ctl

k8s-ci-robot commented 7 years ago

@nicksardo: Those labels are not set on the issue: area/kube-ctl.

In response to [this](https://github.com/kubernetes/kubernetes/issues/33483#issuecomment-305562865):

> /sig network
> /area platform/gce
> /kind feature
> /remove-area kube-ctl

Instructions for interacting with me using PR comments are available [here](https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

nicksardo commented 7 years ago

/remove-area kubectl