Closed bowei closed 2 years ago
From @bprashanth on March 7, 2017 3:39
I don't think there's an immediate work around, as the controller will construct a url map based on your ingress and sync it continuously.
Something that says: serve static content for these paths from a content cache backed by [S3, GCS, memory etc] sounds like a good idea. We should allow GCE L7 x in-memory cache, but for the first cut we might get away with a simple boolean on the HTTPIngressPath (https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/types.go#L685). We'd have to assume GCS and allocate a private bucket if the Ingress is a GCE lb.
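For illustration, the "simple boolean on the HTTPIngressPath" idea might look roughly like this. This is purely a hypothetical sketch; no `staticContent` field exists in the Ingress API:

```yaml
# Hypothetical sketch only -- the staticContent field does not exist
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-site
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /static/*
        # hypothetical flag from the discussion above: serve this path
        # from a provider-allocated bucket instead of a backend service
        staticContent: true
      - path: /*
        backend:
          serviceName: web
          servicePort: 80
```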
Maybe we should fork into another issue that tackles how we enable CDN on Ingress, and figure out the api part first? @nicksardo @thockin
From @ConradIrwin on March 29, 2017 22:16
@nicksardo thanks for picking this up!
I'd also like to be able to proxy certain paths to CloudStorage — let me know if you want a sounding board for design decisions.
From @bbzg on April 30, 2017 8:14
This would be very useful for us. Has there been any progress since March?
From @thockin on May 1, 2017 4:16
As far as I know, nobody is looking at this right now. What I don't want to do is make Ingress a proxy API for all of GCLB.
From @ConradIrwin on May 1, 2017 4:50
Tim. The ingress API is very convenient, but I see your argument. Would it make more sense to make it a GCLB "controller" instead?
From @thockin on May 1, 2017 5:07
I'm not sure. It could go a couple ways.
We could clearly denote where our controller will revert manual changes and where it won't, so you could make these changes manually (or by other controller). We could consider additional annotations for this, but it is a slippery slope. Other ideas?
From @ConradIrwin on May 1, 2017 5:18
I'd be happy with a less magic API between the two, i.e. I could manually configure a load balancer if kubernetes gave me a backend I could point it to (or maybe just an instance-group + port?)
From @gcbirzan on July 20, 2017 13:58
As a workaround for this issue, wouldn't it be okay to simply leave untouched any rules that kubernetes cannot create (i.e. those pointing to a bucket rather than a backend service)?
As a side note, we had this working on 1.6.x, but after upgrading it started removing the extra rules in the url map...
From @c3s4r on July 26, 2017 22:24
Any updates on this? Is it scheduled? Is there a timeline? Since I want the content served over https, the only workaround I can think of right now is to manually create another load balancer (not using ingress) just for the static content, which I don't like because it adds the cost of an additional load balancer :(
From @lostpebble on July 29, 2017 10:16
Just coming across this now after finishing setting up backend buckets for our system...
This is a major setback for us as we try to set up static file routes alongside our server backends. I agree with @gcbirzan that the load balancer should be updated only for the values that Kubernetes can control, rather than replaced wholly (removing GCP-specific rules in the process).
Right now things feel too flaky to rely on backend buckets for static file serving, since updating the configuration might wipe out those pathways and start returning bad requests.
It's a huge pity, because the CDN and load-balancing capabilities the backend buckets could afford us would be a major asset to our system.
From @jakobholmelund on September 27, 2017 13:35
Any news on this ?
Would also be really interested in this feature! We could really use it.
Would it maybe be possible to use an ignore pattern, together with the ability to reuse an existing load balancer? Existing url_maps could then be ignored (if specified in the Ingress), as could other existing backends. This could also solve the CDN problem: you would configure the Google Cloud pieces individually, without adding everything to kubernetes.
+1
+1 this would be very useful. Current workaround is to use a regular old loadbalancer.
+1 waiting this feature
+1
+1
+1
+1
+1
You can achieve this by configuring the K8 load-balancing manually as opposed to using an NGINX ingress. Assuming your services are deployed as NodePort with a static nodePort indicated in the ports object (the tcp:30000-40000 range seems to be where they go by default and is a good rule to follow), you will need to:
- create a load balancer instance
- create a firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 to the K8 instance group
- create a backend service against the K8 instance group, pointing to the nodePort you specified above
- for incoming TLS, create a Kubernetes secret of a cert and assign it as per https://cloud.google.com/compute/docs/load-balancing/tcp-ssl

This is mildly error-prone in the sense that a service's exposed node port is presumably ephemeral wrt a kubectl replace, but still much preferred to having to run a group of HAProxy/Nginx instances just to get around a temporary limitation in the ingress controller.
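Those manual steps can be sketched with the gcloud CLI. Everything here is an assumption for illustration: the names (`allow-glb`, `k8s-hc`, `k8s-be`, `k8s-ig`), the zone, and nodePort 30080 are placeholders you would substitute for your own:

```shell
# Allow GCLB health checks and proxies to reach the NodePort range
gcloud compute firewall-rules create allow-glb \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --allow=tcp:30000-32767

# Health check against the service's nodePort (placeholder 30080)
gcloud compute health-checks create http k8s-hc --port=30080

# Backend service pointing at the GKE instance group
gcloud compute backend-services create k8s-be \
  --protocol=HTTP --health-checks=k8s-hc --port-name=port30080 --global
gcloud compute backend-services add-backend k8s-be \
  --instance-group=k8s-ig --instance-group-zone=us-central1-a --global
```

The `--port-name` must match a named port configured on the instance group ("Port name mapping"), which is how the backend service resolves the nodePort.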
You could also manage your own ports with the NodePort service to get around the ephemeral port allocation.
@zvozin I think you can set the NodePort to something static in the service descriptor - which should make this setup a little more solid.
I'm pretty much doing the same on my side, seems to be working quite well. And even though I'm not setting the NodePort statically as I should yet, it seems to be remembering them on full cluster resets.
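Pinning the nodePort statically, as suggested, looks like this in the service descriptor (names and port numbers are example values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # cluster-internal port
    targetPort: 8080  # container port
    nodePort: 30080   # fixed port in the 30000-32767 NodePort range
```

With a fixed nodePort, the manually configured backend service keeps working across `kubectl replace` and cluster rebuilds.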
Here are some notes I made during the process last time:
Linking NodePort service to the GCE Load Balancer
Thank you @scottefein and @lostpebble - good point! Relevant docs: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
Thanks @lostpebble, although setting up HTTPS on top of this would be too cumbersome, if possible at all. I'm currently looking at https://github.com/PalmStoneGames/kube-cert-manager and https://github.com/jetstack/cert-manager/; it looks like they rely on Ingresses created with k8s. I hope this is fixed at the GKE level soon.
HTTPS is really easy to add via the GCLB, you just upload the certificate and apply it. https://cloud.google.com/compute/docs/load-balancing/http/#creating_a_cross-region_load_balancer
@scottefein not very practical with Let's Encrypt certs that expire every 3 months, especially for multiple domains. Tools that automate certificate renewals based on existing Ingress resources are very useful, but this workflow breaks because of the Ingress/GCS bucket incompatibility issue.
@explicitcall: certbot-auto renew, more ____ | base64 > secret.yaml, kubectl replace -f secret.yaml. Step #2 obviously requires templating, which should also be un-hard.
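The renewal flow above can be sketched without hand-templating the base64 fields by letting kubectl generate the secret manifest. The secret name and certificate paths are placeholders, and `--dry-run` is the pre-1.18 spelling of the flag:

```shell
# Renew the Let's Encrypt certificate
certbot-auto renew

# Regenerate the TLS secret from the renewed cert and swap it in place
kubectl create secret tls my-tls \
  --cert=/etc/letsencrypt/live/example.com/fullchain.pem \
  --key=/etc/letsencrypt/live/example.com/privkey.pem \
  --dry-run -o yaml | kubectl replace -f -
```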
Hi @zvozin, I'm confused. Do you mind telling me what NodePort has to do with GCS static file serving?
Any news on this one?
@olala7846 - Google Load Balancer knows how to serve static files out of a bucket.
It sounds like a few people are suggesting working around this by moving load balancing outside of k8s control and fixing services to static node ports. I'm reminded of the joke where a guy goes to the doctor and says "Doc my hand hurts when I make a fist" and doc says "Well then stop doing that!" I'm not sure what it is we should stop doing though. I'd prefer a temporary solution where k8s simply refrains from blowing away externally applied load balancing rules. (since that's the behavior I'd expect)
Those of you using Google's L7 load balancer in front of GKE, how do you do your maintenance mode? We had been pointing our ingress-provisioned load balancer's default backend at a GCS bucket, and it worked great during some maintenance windows in December and January. But we tried to do a maintenance mode tonight and it did not go well :( At first I thought updating our deployments was what caused k8s to switch our LB back from the bucket to the cluster in the middle of maintenance, so I set it back manually again. But then k8s switched it back again seemingly out of nowhere. Presently I would really discourage anyone from trying to accomplish maintenance mode this way, as k8s apparently really wants to stomp the LB default backend.
Hey @gkop — we actually just went through this, and ended up deciding to just deploy a dummy version of the service that returned a maintenance page (our experience has also been that twiddling with the ingress settings is a super bad idea).
why is this still not implemented? :(
I'm also waiting on Cloud Storage Bucket support in Ingress-GCE
+1, is there an officially recommended approach until this issue is addressed?
After spending a lot of time on this, I found a way to use GCS backends, and the fix is not to use Ingress.
Kubernetes NodePort services allow us to use any port in the 30000-32767 range. Instead of using the GCE or nginx Ingress, you can provision the LB with Terraform and set your backends as either GCS-based Backend Buckets or GKE-based Backend Services.
You'll also need to use "Port name mapping" in the Instance Group created for the GKE instances. In summary:
Hope this helps!
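A minimal Terraform sketch of that split might look as follows. The resource names, bucket name, and host are hypothetical, and the GKE-backed `google_compute_backend_service.gke` is assumed to be defined elsewhere:

```terraform
# Static assets served straight from a GCS bucket via the GCLB
resource "google_compute_backend_bucket" "static" {
  name        = "static-assets"
  bucket_name = "my-static-assets-bucket"
  enable_cdn  = true
}

# URL map: /static/* goes to the bucket, everything else to the
# GKE-backed backend service (defined elsewhere)
resource "google_compute_url_map" "lb" {
  name            = "k8s-lb"
  default_service = google_compute_backend_service.gke.self_link

  host_rule {
    hosts        = ["example.com"]
    path_matcher = "main"
  }

  path_matcher {
    name            = "main"
    default_service = google_compute_backend_service.gke.self_link

    path_rule {
      paths   = ["/static/*"]
      service = google_compute_backend_bucket.static.self_link
    }
  }
}
```

Because Terraform owns the url map instead of the ingress controller, nothing reverts the bucket rule on sync.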
@bfarayev I use the same in production (actually provisioning with Google Deployment Manager and jinja, not terraform), and this is so wrong. Somebody fix the ingress-gce please :(
My experience was that Google Cloud CDN + Storage + Kubernetes is currently impossible. For this purpose (static resources) I'm using another CDN and not the Google Cloud crap (totally inflexible and expensive). So I configured my DNS to route cdn.
Hey there - thanks for maintaining ingress-gce :-)
I just wanted to chime in and say that I've also just hit this and used the NodePort workaround. I think it's fine for a one-off but it's fairly burdensome as a general approach. So +1 to integrating this into the ingress GCE controller!
+1, would be helpful to me as well to serve static content directly.
+1 to this. My frontend load balancer is the only piece of my stack not fully automated. Would love to see this feature!
+1 on this feature. We need Google Backend bucket support for serving some static pages. Is this feature on road map at all?
+1. I'm getting the TLS cert for the static website onto the ingress load balancer through cert-manager (https://github.com/jetstack/cert-manager) so I'd like to reach the bucket directly from there.
Now that there is a BackendConfig CRD, and I see issues for a FrontendConfig CRD as well, configuring a bucket served through a load balancer might fit into one of those? Really rooting for this feature 🥇
+1. I just realized that this cannot be done!!
+1 we want to migrate some memory intensive resources, like our sitemap, to cloud storage but currently have no easy way to serve it from the same domain. This would be really helpful.
Any news on this? Wanted to add a backend bucket and noticed that didn't work.
From @omerzach on February 28, 2017 1:20
We're happily using the GCE load balancer controller in production to route traffic to a few different services. We'd like to have some paths point at backend buckets in Google Cloud Storage instead of backend services running in Kubernetes.
Right now, if we manually create this backend bucket and then configure the load balancer to point certain paths at it, the UrlMap is updated appropriately but almost immediately reverted to its previous setting, presumably because the controller sees it doesn't match the YAML we initially configured the Ingress with.
I have two questions:
(For some context, we'd like to do something like this: https://cloud.google.com/compute/docs/load-balancing/http/using-http-lb-with-cloud-storage)
Copied from original issue: kubernetes/ingress-nginx#353