hoerup opened this issue 2 years ago
Thanks for raising this issue @hoerup! When discussing this in relation to GEP #735, one of the questions we had was where this configuration should belong (more in #1127). Is it important that this is tied to a route? If so, how would merging and conflict resolution work? If not, would it be sufficient to tie this to a Gateway? Should it apply to the full Gateway or per listener?
As I see it, the IP ACL should definitely be tied to a route, in order to give the operator the widest flexibility: they can define multiple HTTPRoutes, each with its own ACLs. An option to apply it at the gateway/listener level might be useful in other scenarios as well.
While looking at the current (0.4.2) spec and wondering where to put this, I'm thinking that it should probably go into a new acls field in HTTPRouteRule. That way an incoming HTTP request will be matched first against the hostnames in HTTPRouteSpec and any matches defined in HTTPRouteRule.matches. Once a route has been selected, we can then apply any ACLs to determine whether the client is allowed to access this route, and if not, reject the request with an HTTP 403.
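To make that concrete, here is a rough sketch of what such an inline field could look like. The acls field and its sub-fields are invented purely for illustration; nothing like this exists in any version of the spec:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: internal-route
spec:
  hostnames:
  - gw.company.tld
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /internal
    # Hypothetical field, not in the spec: evaluated only after this
    # rule has been selected; clients outside the ranges get HTTP 403.
    acls:
      allow:
      - 10.0.0.0/24
      - 172.16.0.0/16
    backendRefs:
    - name: internal-svc
      port: 8080
```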
Another design issue is whether the IP ACL should be an inline sub-object or perhaps a new top-level Kind. If it is a new Kind, then an operator could define a set of IP ACLs in a central place and refer to them from the various routes as needed.
Filters on HTTPRoute were meant to solve exactly this class of problem, and I think we should explore that approach a bit more before making additions to the core API.
A feature request to a specific implementation to add a custom filter to support this use case makes a lot of sense as a first step. If multiple implementations add support for such a capability, then we can discuss moving the feature into the "Extended" conformance level, and eventually into "Core" if the time comes for it.
Would that help your use-case @hoerup?
hbagdi: Yes, I think that when I originally wrote this I hadn't completely understood the purpose of filters. But yes, an HttpACLFilter would be great.
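For anyone reading along, the extension point itself already exists: an HTTPRouteRule filter of type ExtensionRef can reference an implementation-specific resource. In the sketch below only the ExtensionRef mechanism is from the spec; the HttpACLFilter kind, its group, and the resource name are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: internal-route
spec:
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /internal
    filters:
    - type: ExtensionRef
      extensionRef:
        # Hypothetical custom resource implemented by a specific
        # gateway controller, not part of the Gateway API itself.
        group: example.com
        kind: HttpACLFilter
        name: employer-ip-ranges
    backendRefs:
    - name: internal-svc
      port: 8080
```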
I wonder if the suggested filter approach would be manageable. If it is only necessary to apply ACLs to one rule, then it looks simple enough. However, if it is necessary to apply ACLs to more than one rule, then there will be duplicated ACLs spread across one resource, or even multiple resources. Maintaining those duplicated ACLs will be a burden, imho.
Additionally, should ACLs be the responsibility of the developer persona (who owns HTTPRoute) or the cluster operator?
I think there are two classes of IP ACLs, per-app and per-infrastructure.
Using HTTPRoute Filters to handle per-app sounds like a good fit, but won't scale well if you want to apply the same ACLs across lots of things.
To be honest, per-infrastructure IP ACLs are probably better handled either by something in the Gateway spec, or by something that attaches to the Gateway (like a Policy resource). For simplicity, a Policy resource could mandate the use of particular Filters for every route attached to a Gateway. But this design currently only exists in my head. (I have a TODO to update the Policy documentation with some more examples to illustrate this sort of use-case).
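As a very rough sketch of that per-infrastructure idea, a policy resource could attach to a Gateway with a targetRef, along the lines of the policy attachment pattern discussed in GEP-713. The IPFilterPolicy kind and all of its fields below are hypothetical:

```yaml
apiVersion: policy.example.com/v1alpha1
kind: IPFilterPolicy              # hypothetical kind, illustration only
metadata:
  name: corp-ranges-only
spec:
  targetRef:                      # policy attachment pattern (GEP-713)
    group: gateway.networking.k8s.io
    kind: Gateway
    name: shared-gateway
  # Applies to every route attached to the targeted Gateway.
  allow:
  - 10.0.0.0/8
  - 192.168.0.0/16
```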
It feels like there may be some substantial overlap here with NetworkPolicy ingress rules (if it could be explicitly applied to a Gateway resource instead of a pod selector?) for the whole-infra case.
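For reference, here is what NetworkPolicy can already express with ipBlock ingress rules; note it selects pods, not Gateways, which is exactly the gap raised above (label and names below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-corp-ranges
spec:
  # NetworkPolicy can only select pods; there is no way to attach it
  # to a Gateway resource directly.
  podSelector:
    matchLabels:
      app: gateway-proxy
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 443
```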
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We need to triage and evaluate this further to determine if this is something that we want to do, but to that end it will be low priority until v1.0.0/GA is complete, and we don't believe we will have bandwidth for it until then.
I recently worked on a similar feature in Contour, and thought we had some good discussions that could be relevant for the Gateway API.
Some of the highlights: deciding which client address to filter on (direct_remote_ip vs remote_ip, and the number of hops to trust if using X-Forwarded-For).
Design: https://github.com/projectcontour/contour/blob/2c6015d30004508661ee2cf086354d91bc6d0986/design/ip-filtering-design.md
Docs: https://projectcontour.io/docs/1.25/config/ip-filtering/
API Bikeshed: https://github.com/projectcontour/contour/pull/4990#issuecomment-1408693088
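To illustrate why that distinction matters for this issue: any ACL field added to the Gateway API would probably need to say which client address is evaluated. A hypothetical shape (all field names invented here, loosely inspired by the Envoy concepts above):

```yaml
# Hypothetical fields, illustration only: whether to filter on the
# immediate peer address (direct_remote_ip) or on the client address
# recovered from X-Forwarded-For (remote_ip), and how many proxy hops
# in front of the gateway to trust when walking the XFF chain.
acls:
  source: Remote      # Peer = direct connection, Remote = XFF-derived
  trustedHops: 1      # trusted reverse proxies in front of the gateway
  allow:
  - 10.0.0.0/24
```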
Using this in Contour HTTPProxy. Would be nice to have this in the Gateway API. Not being able to limit outside access to a specific Gateway is painful.
Just a note that this has basically gone stale, if someone is interested in championing it forward and working on a proposal please let us know.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/cc @tssurya @npinaeva
You may be interested in this re: Network Policy
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hi @sathieu! Just curious, but we've noticed you've removed stale from this a couple times, is this something that you're personally interested in working on?
I'm using Contour right now, but would like to see this land in the Gateway API so that I can migrate fully to these APIs.
I don't know that I have the time to contribute this work any time soon, but if I did, what would be the first step? Showing up to a community call?
> I'm using Contour right now, but would like to see this land in the Gateway API so that I can migrate fully to these APIs.
> I don't know that I have the time to contribute this work any time soon, but if I did, what would be the first step? Showing up to a community call?
Yes, that's a great first step. Please feel free to drop something on the agenda for us to discuss!
> Hi @sathieu! Just curious, but we've noticed you've removed stale from this a couple times, is this something that you're personally interested in working on?
No. I just need this feature.
> Hi @sathieu! Just curious, but we've noticed you've removed stale from this a couple times, is this something that you're personally interested in working on?
> No. I just need this feature.
Understandable, and we're very glad for your interest! We are an entirely volunteer-run organization, and so committing to a feature is largely a function of people in the community coming forward to champion an issue. So in general we would ask that people don't change the lifecycle of issues to bump them unless they are ready to personally commit time to them, because accurate lifecycle management is an important part of our organizational process.
We would, on the other hand, love to have you do a write-up for us explaining your interest and needs in greater detail for our own edification; this can provide productive insights for us and possibly inspire others who may want to work more directly on the feature. If you could share some more details (what implementation you use, what (if anything) you're doing now to handle the issue, etc.), that context could potentially be very helpful here!
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@shaneutt Can you elaborate on what you're looking for to keep this issue from going stale (you added the needs-information tag)?
It seems like a valid feature request that simply lacks available bandwidth. There appears to be a decent amount of community interest from the comments here, and no one has suggested a reason that this shouldn't be implemented. Closing (even as rotten) doesn't seem like the right move, since it would suggest that it shouldn't be implemented at all.
Can we just freeze it open instead?
> Can you elaborate on what you're looking for to keep this issue from going stale (you added the needs-information tag)?
Sure thing: we are in need of a community member who has the bandwidth to jump in and move this forward if there's going to be any progress. That person would need to start by driving some detailed conversations about the motivation and justification for adding this scope to the project (as it does feel at least a little bit outside of scope, and I feel we would want to coordinate with NetworkPolicy; see comments above about the overlap), thus why this is marked as needs-information.
> It seems like a valid feature request that simply lacks available bandwidth. There appears to be a decent amount of community interest from the comments here, and no one has suggested a reason that this shouldn't be implemented. Closing (even as rotten) doesn't seem like the right move, since it would suggest that it shouldn't be implemented at all. Can we just freeze it open instead?
We as a project do not consider closed-as-stale issues to automatically be "declined" (we have some language in GEPs specifically around declined things, and for issues we would explicitly call that out). It's important to note that closed does not mean "dead, gone, and can never come back"; it is more intended to mean "unplanned, and unprioritized for the moment". We have some documentation in our contributing guide that provides a bit more color on how we think of these.
So all that said: you're someone who wants the feature and wants to see it potentially move forward so you can use it some day. That's great, and we appreciate you coming here to talk to us about it and letting us know! What's generally best for moving issues forward, however, is for you yourself to jump in right here while you're already interested!
Driving a proposal like this forward doesn't mean you have to dedicate a ton of time to it, and it doesn't mean you have to do the whole thing yourself (in fact, we advise against this and recommend finding allies!). It means you dedicate some amount of time that works for you to drive the discussion and build consensus within the community. Importantly, you won't be alone in doing this (as it's the job of the maintainers, and often the will of the community, to help support people championing issues).
Ultimately there's really no better way to see an issue like this move forward than jumping in and trying to be the change you want to see in the project! If you feel inclined to do so, let us know how we can support you. :bow:
What would you like to be added: An option for defining a list of IP ranges that should be allowed/denied to call a certain HTTPRoute
Why this is needed: When using the Gateway API for HTTP requests it is possible to create a single HTTP handler for all HTTP traffic, which is delegated to the various backends. In some instances it might be necessary to add additional ACLs in order to control which source IPs are allowed to call those endpoints. I might have a route for https://gw.company.tld/public that should be accessible to all, but https://gw.company.tld/internal that I want to restrict to IP ranges owned by my employer.
In ingress controllers this can typically be configured by adding an annotation on the Ingress object, e.g.: nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
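For context, that annotation-based approach looks like this on an Ingress (the annotation is the real nginx-ingress key; hostname and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal
  annotations:
    # Comma-separated CIDRs allowed to reach this Ingress;
    # other client IPs receive HTTP 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
spec:
  rules:
  - host: gw.company.tld
    http:
      paths:
      - path: /internal
        pathType: Prefix
        backend:
          service:
            name: internal-svc
            port:
              number: 8080
```

Note that the allowed ranges live in a single untyped, implementation-specific annotation string rather than a portable, validated API field.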
There are several problems with this approach that I think can be solved better in the Gateway API.
Ref:
https://haproxy-ingress.github.io/docs/configuration/keys/#allowlist
https://www.haproxy.com/documentation/kubernetes/latest/configuration/ingress/#whitelist
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range