FWIW, GCE Internal Load Balancers made it into the compute API v1 on November 21st: https://github.com/google/google-api-go-client/blob/master/compute/v1/compute-gen.go#L1803
Can you tell me how you're hoping to use ILB? There are some issues with it that I have not yet worked through wrt Kubernetes, and we're looking for more user input on what result you hope to achieve.
@thockin The usage is fairly simple: we have services that need to be exposed outside a Kubernetes cluster (we use Container Engine). Using the typical load balancer exposes a public IP, which in our case is not necessary. We used to run Kubernetes on AWS and used the AWS internal load balancers in our service config. So all in all, simple usage: access Kubernetes services from outside a Container Engine cluster and expose them to Compute Engine instances without exposing them to the public. Right now we create bastion routes to access k8s services, but it is not ideal.
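For reference, the AWS-style service config mentioned above looks roughly like the sketch below. The annotation key is the one the AWS cloud provider already supports; the names, selector, ports, and the exact annotation value are placeholders for illustration.

```yaml
# Illustrative only: the annotation key is the existing AWS one; names,
# selector, ports, and the exact annotation value are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # Keeps the ELB off the public internet on AWS.
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```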
Thanks. That fits one of the expected usage modes, and actually one of the easier to implement ones. Truth is that nobody is working on it just now, but we're planning for it.
Same use case here.
Very same use case here. For now we are adding static routes for servicesIpv4Cidr to cluster nodes; this is really ugly.
Exact same use case. We are on GKE and currently use bastion routes. AFAIK that is the best alternative at this point.
Same use case. Need to access GKE services from GCE instances.
+1
No updates on this one?
We're working on a couple of things that need to land before this can really be done, and working through the limitations.
Out of the box:
- GKE pods can ping GCE VMs by IP and by name, but not vice versa.
- GCE VMs can ping GKE pods by IP (obtained from ip addr inside the pod), but not by name. This uses a GCE route generated by GKE to a "VMs" /24 subnet.
A GCE route to a subnet including the generated K8s Service IPs can be added manually, with the same next hop as the generated route. This allows GCE VMs to access GKE Service IPs (but not by name) without using a GCE internal load balancer. The metadata DNS server does not know about the Service names, but it is possible to use statically configured hostnames on GCE pointing to the Service IPs.
Instead of exposing GCE internal load balancers, GKE could generate a GCE route for Services and publish Pod/Service names to the GCE metadata/DNS server.
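A rough sketch of the manually added route described above, written as a Deployment Manager config purely for illustration (the fields map onto the Compute API Route resource; creating the route with gcloud or the console is equivalent). The project, network, CIDR, and instance names are placeholders for your cluster's servicesIpv4Cidr and one of its nodes.

```yaml
# Sketch only: route the K8s Service CIDR to one cluster node, mirroring the
# GKE-generated pod routes. All values are placeholders.
resources:
- name: gke-services-bastion-route
  type: compute.v1.route
  properties:
    network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default
    destRange: 10.11.240.0/20   # the cluster's servicesIpv4Cidr
    nextHopInstance: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/gke-my-cluster-node-1
    priority: 1000
```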
The downside of this technique (we call it service bastion routes) is that if the node you are routing to goes down, the service gateway is down. It needs a rectifier to detect the failure and pick a new node or nodes, if you want to offer any sort of availability SLA.
How much integration are we currently planning between ILB and Kubernetes/GKE? Will the ILB be able to route to a specific Service VIP or pod? Or is it only able to forward requests to a host port, so that we need an intermediate proxy layer running in the cluster, e.g. nginx, HAProxy, Envoy, etc.?
+1
We have the same requirement, with a couple of scenarios. Although we can set up one cluster across zones, we still need DB replication and internal service calls across regions.
+1
+1
Not directly related to ILB, but the new Identity-Aware Proxy could solve the same use-case, so as far as I'm concerned - whichever of these integrates with GKE first wins :-)
I have the same use case too. Internal load balancers would be great, although automatic generation of the service bastion route and exporting service names to the metadata/DNS server as mentioned by @vdm would be even better.
This would greatly enhance our ability to interface services in GKE with other GCP products easily, especially in a CI/CD environment where there can be many namespaces and services.
+1 We would primarily use it as a solution to switch services from on-premises to GCP (through a VPN between the two).
Would another possible solution be to VPN into the cluster through Google Cloud VPN (with an OpenVPN service, for instance) and in that way make the ClusterIP available?
Do note that you then also need a non-changing ClusterIP, or you need to expose the DNS as well so you can use the service name.
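On the non-changing ClusterIP point: a Service can request a specific cluster IP via spec.clusterIP, as long as it falls inside the cluster's service CIDR and is not already in use. A minimal sketch with placeholder names, ports, and IP:

```yaml
# Sketch: pin the ClusterIP so VPN clients can reach the service at a stable
# address. The IP must lie within the cluster's service CIDR; other values are
# placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.11.240.50
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
```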
FYI for the folks talking about using VPN and the Internal Load Balancer: the documentation on internal load balancers specifically states that it cannot receive traffic through a VPN tunnel; it can only receive traffic from GCE nodes in the same region (and we have tested that this indeed does not work).
You cannot send traffic through a VPN tunnel to your load balancer IP.
https://cloud.google.com/compute/docs/load-balancing/internal/
We ran into this problem, and worked around it by setting up NodePort type Services, then configuring BIND on our non-GCP machines to forward requests to Kube DNS. That way applications outside GCP could resolve the Node IPs, and connect to them via the VPN tunnel.
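For reference, the NodePort half of that workaround is just a standard Service of type NodePort, roughly as sketched below (names, ports, and the fixed nodePort are placeholders); the BIND forwarding to kube-dns is configured separately on the non-GCP side.

```yaml
# Sketch: expose the service on a fixed port of every node so machines on the
# other side of the VPN can reach <node-ip>:30443. Values are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30443   # must fall in the cluster's node port range (default 30000-32767)
```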
/assign
@dgpc Thanks for the info! The VPN for us would be a temporary solution, and if the GCP VPNs don't work, we could also go for some GCE instances that take on this role :-)
Happy to see that the issue is assigned and being worked on.
For those who are waiting for ILB: how often do you need BOTH an internal LB and an external LB on the same Service?
@thockin Our usage is such that we would not require both an external and an internal LB on the same service. If having both slows down the development, I would vote for having only one type of LB per service first and both as an enhancement later.
Hi @thockin, in our current setup we do not need both an internal and an external LB on the same service.
We are using ILB to expose non-public-facing services from one K8s cluster to another, both in distinct subnets.
For those who are waiting for ILB: how often do you need BOTH an internal LB and an external LB on the same Service?
We have no use case for both an ILB and an external LB on the same service.
We need external LBs for externally accessible services, and ILBs for simple and direct access to non-public-facing services (AKA "internal services", "dev/test stack", etc.) from outside the GKE cluster, e.g. from personal dev machines or the office network.
We have the same use case as @itamaro, so we don't need both an internal and an external LB for one particular service.
@thockin Same as @itamaro
We don't have any use cases where we deploy both a public and internal LB for a single service.
For us a service is either internal or external but not both at the same time. So we don't have a case where a service needs to use both ILB and GLBC.
+1, and same as the previous commenters -- our only use case is internal.
+1, only use case is internal.
+1 only use case is internal
+1 for internal only
+1 for separate intLB and extLB services.
If anyone does need both an intLB and an extLB, wouldn't it be as straightforward as defining two separate Services (one intLB and one extLB), both pointing to the same deployment?
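That pattern would indeed just be two Services selecting the same pods. A sketch, assuming the GCE internal-LB annotation lands roughly as later shipped (cloud.google.com/load-balancer-type); at the time of this thread it is still unreleased, and all names, ports, and the selector are placeholders.

```yaml
# Sketch: one external and one internal LB for the same deployment.
# The internal annotation reflects what eventually shipped for GCE; it is not
# available before 1.7. Names, ports, and selector are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```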
+1 for internal only
@thockin Do you have any update or ETA on this?
ILB setup via service controller is under development and should be shipped in 1.7
thank you, @nicksardo.
@nicksardo: any word on whether ILBs will be accessible via GCP VPNs?
@roobert I like this question. Currently internal LBs can't be accessed via VPNs or from other projects :-( We currently have to roll our own HAProxy deployments for this very reason.
@nicksardo is there a PR or another issue one can subscribe to?
Also, can you fix the labels for this issue?
Same case, same use: we need an internal LB for container cluster services so we can access them from VMs and over VPN.
@tafypz There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>
Note: method (1) will trigger a notification to the team. You can find the team list here.
Same requirement here.
/sig network
/area platform/gce
/kind feature
/remove-area kube-ctl
@nicksardo: Those labels are not set on the issue: area/kube-ctl
/remove-area kubectl
It would be great to be able to use GCE's new internal load balancer feature. Configuration could be done via the same type of config as the AWS internal load balancer. This would avoid having to create bastion routes to access non-public services living in GKE.