kubernetes-sigs / gateway-api

Repository for the next iteration of composite service (e.g. Ingress) and load balancing APIs.
https://gateway-api.sigs.k8s.io
Apache License 2.0

Enhancement: add HTTPRoute IP ACLs #1141

Closed: hoerup closed this 6 days ago

hoerup commented 2 years ago

What would you like to be added: An option for defining a list of IP ranges that should be allowed/denied to call a certain HTTPRoute

Why this is needed: When using the Gateway API for HTTP requests it is possible to create a single HTTP handler for all HTTP traffic, which is delegated to the various backends. In some instances it might be necessary to add additional ACLs in order to control which source IPs are allowed to call those endpoints. I might have a route for https://gw.company.tld/public that should be accessible to all, but https://gw.company.tld/internal that I want to restrict to IP ranges owned by my employer.

In ingress controllers this can typically be configured by adding an annotation on the Ingress object, e.g. nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
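For illustration, a minimal sketch of that annotation approach on an Ingress (assuming the nginx ingress controller; the host, CIDRs, and `internal-svc` backend are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal
  annotations:
    # Only requests from these source ranges may reach paths served by this Ingress.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
spec:
  ingressClassName: nginx
  rules:
    - host: gw.company.tld
      http:
        paths:
          - path: /internal
            pathType: Prefix
            backend:
              service:
                name: internal-svc
                port:
                  number: 80
```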

There are several problems with this approach that I think can be solved better in the Gateway API.

Ref:
- https://haproxy-ingress.github.io/docs/configuration/keys/#allowlist
- https://www.haproxy.com/documentation/kubernetes/latest/configuration/ingress/#whitelist
- https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range

robscott commented 2 years ago

Thanks for raising this issue @hoerup! When discussing this in relation to GEP #735, one of the questions we had was where this configuration should belong (more in #1127). Is it important that this is tied to a route? If so, how would merging and conflict resolution work? If not, would it be sufficient to tie this to a Gateway? Should it apply to the full Gateway or per listener?

hoerup commented 2 years ago

As I see it, the IP ACL should definitely be tied to a route, in order to give the operator the widest flexibility: they can define multiple HTTPRoutes, each with its own ACLs. That said, an option to apply it on a gateway/listener as well might be useful in other scenarios.

While looking at the current (0.4.2) spec and wondering where to put this, I'm thinking it should probably go into a new acls field in HTTPRouteRule. That way an incoming HTTP request will first be matched against the hostnames in HTTPRouteSpec and any matches defined in HTTPRouteRule.matches. Once a route has been selected, any ACLs can be applied to determine whether the client is allowed to access this route; if not, reject it with an HTTP 403.
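A rough sketch of what that inline variant could look like (the `acls` field and its shape are purely hypothetical and not part of any released spec):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: internal
spec:
  hostnames:
    - gw.company.tld
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /internal
      # Hypothetical field: evaluated after the route has been selected;
      # clients outside the allowed ranges would get an HTTP 403.
      acls:
        allow:
          - 10.0.0.0/24
          - 172.10.0.1/32
      backendRefs:
        - name: internal-svc
          port: 80
```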

Another design issue is whether the IP ACL should be an inline sub-object or perhaps a new top-level Kind. If it is a new Kind, then an operator could define a set of IP ACLs in a central place and refer to them from the various routes as needed.
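The referenced-Kind variant might look something like this instead (again, the Kind name and the `aclRef` field are invented purely for illustration):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: IPAccessControlList   # hypothetical Kind, defined once and shared
metadata:
  name: corp-ranges
spec:
  allow:
    - 10.0.0.0/24
    - 172.10.0.1/32
---
# An HTTPRoute rule would then point at the shared list by name,
# e.g. via a hypothetical aclRef field:
#
#   rules:
#     - matches: ...
#       aclRef:
#         name: corp-ranges
```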

hbagdi commented 2 years ago

Filters on HTTPRoute were meant to solve exactly this class of problem and I think we should explore that approach a bit more before additions to the core API.

A feature request to a specific implementation to add a custom filter to support this use case makes a lot of sense as a first step. If multiple implementations add support for such a capability, then we can discuss moving the feature into the "Extended" conformance level, and eventually "Core" if the time comes for it.

Would that help your use-case @hoerup?
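For reference, the filter mechanism already allows implementation-specific behaviour to be attached via the `ExtensionRef` filter type. A sketch of how a custom ACL filter could be wired up today (the `IPAllowlist` kind and its `acl.example.com` group are invented; the referenced CRD would have to be defined by the implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: internal
spec:
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /internal
      filters:
        # Core filter type that points at an implementation-specific resource.
        - type: ExtensionRef
          extensionRef:
            group: acl.example.com   # hypothetical CRD group
            kind: IPAllowlist        # hypothetical CRD kind
            name: corp-ranges
      backendRefs:
        - name: internal-svc
          port: 80
```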

hoerup commented 2 years ago

@hbagdi: Yes. I think that when I originally wrote this I hadn't completely understood the purpose of filters. But yes, an HttpACLFilter would be great.

pleshakov commented 2 years ago

I wonder if the suggested filter approach would be manageable. If it is only necessary to apply ACLs to one rule, then it looks simple enough. However, if it is necessary to apply ACLs to more than one rule, then there will be duplicated ACLs spread across one resource, or even multiple resources. Maintaining those duplicated ACLs will be a burden imho.

Additionally, should ACLs be the responsibility of the developer persona (who owns HTTPRoute) or the cluster operator?

youngnick commented 2 years ago

I think there are two classes of IP ACLs, per-app and per-infrastructure.

Using HTTPRoute Filters to handle per-app sounds like a good fit, but won't scale well if you want to apply the same ACLs across lots of things.

To be honest, per-infrastructure IP ACLs are probably better handled either by something in the Gateway spec, or by something that attaches to the Gateway (like a Policy resource). For simplicity, a Policy resource could mandate the use of particular Filters for every route attached to a Gateway. But this design currently only exists in my head. (I have a TODO to update the Policy documentation with some more examples to illustrate this sort of use-case).
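As a rough illustration of that shape, a policy following the attachment pattern from GEP-713 might target a Gateway like this (the `IPFilterPolicy` kind and its group are hypothetical; only the `targetRef` pattern comes from the proposal):

```yaml
apiVersion: policy.example.com/v1alpha1
kind: IPFilterPolicy        # hypothetical policy kind
metadata:
  name: corp-only
spec:
  targetRef:                # policy attachment pattern (GEP-713)
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-gateway
  allow:
    - 10.0.0.0/24
    - 172.10.0.1/32
```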

mikemorris commented 2 years ago

It feels like there may be some substantial overlap here with NetworkPolicy ingress rules (if it could be explicitly applied to a Gateway resource instead of a pod selector?) for the whole-infra case.
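For comparison, this is what NetworkPolicy's existing ingress `ipBlock` rule looks like today; it can only be attached via a pod selector, not to a Gateway (selector, CIDR, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-corp-ranges
spec:
  podSelector:
    matchLabels:
      app: gateway-proxy   # placeholder: the pods backing the Gateway's data plane
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 443
```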

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

hoerup commented 1 year ago

/remove-lifecycle stale

shaneutt commented 1 year ago

We need to triage and evaluate this further to determine if this is something that we want to do, but to that end it will be low priority until v1.0.0/GA is complete, and we don't believe we will have bandwidth for it until then.

ecordell commented 1 year ago

I recently worked on a similar feature in Contour, and I thought we had some good discussions that could be relevant for the Gateway API.

Some of the highlights:

- Design: https://github.com/projectcontour/contour/blob/2c6015d30004508661ee2cf086354d91bc6d0986/design/ip-filtering-design.md
- Docs: https://projectcontour.io/docs/1.25/config/ip-filtering/
- API Bikeshed: https://github.com/projectcontour/contour/pull/4990#issuecomment-1408693088

HummingMind commented 11 months ago

Using this in Contour HTTPProxy. It would be nice to have it in the Gateway API; not being able to limit outside access to a specific Gateway is painful.
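For context, the Contour feature referenced above attaches allow/deny CIDR lists directly to an HTTPProxy. A rough sketch based on the docs linked earlier (field names are approximate and should be checked against the Contour documentation before use):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: internal
spec:
  virtualhost:
    fqdn: gw.company.tld
    # Approximate field name; see the ip-filtering docs linked above.
    ipAllowFilterPolicy:
      - source: Remote      # filter on the client address rather than the immediate peer
        cidr: 10.0.0.0/24
  routes:
    - conditions:
        - prefix: /internal
      services:
        - name: internal-svc
          port: 80
```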

shaneutt commented 8 months ago

Just a note that this has basically gone stale; if someone is interested in championing it forward and working on a proposal, please let us know.

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

sathieu commented 7 months ago

/remove-lifecycle rotten

shaneutt commented 7 months ago

/cc @tssurya @npinaeva

You may be interested in this re: Network Policy

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

sathieu commented 4 months ago

/remove-lifecycle stale

shaneutt commented 4 months ago

Hi @sathieu! Just curious: we've noticed you've removed the stale label from this a couple of times. Is this something that you're personally interested in working on?

ecordell commented 4 months ago

I'm using Contour right now, but would like to see this land in the Gateway API so that I can migrate fully to these APIs.

I don't know that I have the time to contribute this work any time soon, but if I did, what would be the first step? Showing up to a community call?

shaneutt commented 4 months ago

> I'm using Contour right now, but would like to see this land in the Gateway API so that I can migrate fully to these APIs.
>
> I don't know that I have the time to contribute this work any time soon, but if I did, what would be the first step? Showing up to a community call?

Yes, that's a great first step. Please feel free to drop something on the agenda for us to discuss!

sathieu commented 4 months ago

> Hi @sathieu! Just curious: we've noticed you've removed the stale label from this a couple of times. Is this something that you're personally interested in working on?

No. I just need this feature.

shaneutt commented 4 months ago

> Hi @sathieu! Just curious: we've noticed you've removed the stale label from this a couple of times. Is this something that you're personally interested in working on?
>
> No. I just need this feature.

Understandable, and we're very glad for your interest! We are an entirely volunteer-run organization, so committing to a feature is largely a function of people in the community coming forward to champion an issue. In general we ask that people don't change the lifecycle of issues just to bump them unless they are ready to personally commit time to them, because accurate lifecycle management is an important part of our organizational process.

We would, on the other hand, love to have you do a write-up for us explaining your interest and needs in greater detail for our own edification; this can provide productive insights for us and possibly inspire others who may want to work more directly on the feature. If you could share some more details (what implementation you use, what, if anything, you're doing now to handle the issue, etc.), that context could be very helpful here!

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

ecordell commented 1 month ago

@shaneutt Can you elaborate on what you're looking for to keep this issue from going stale? (You added the needs-information tag.)

It seems like a valid feature request that simply lacks available bandwidth. There appears to be a decent amount of community interest from the comments here, and no one has suggested a reason that this shouldn't be implemented. Closing (even as rotten) doesn't seem like the right move, since it would suggest that it shouldn't be implemented at all.

Can we just freeze it open instead?

shaneutt commented 1 month ago

> Can you elaborate on what you're looking for to keep this issue from going stale? (You added the needs-information tag.)

Sure thing: we are in need of a community member who has the bandwidth to jump in and move this forward if there's going to be any progress. That person would need to start by driving some detailed conversations about the motivation and justification for adding this scope to the project; it does feel at least a little bit outside of scope, and I feel we would want to coordinate with NetworkPolicy (see the comments above about the overlap), which is why this is marked as needs-information.

> It seems like a valid feature request that simply lacks available bandwidth. There appears to be a decent amount of community interest from the comments here, and no one has suggested a reason that this shouldn't be implemented. Closing (even as rotten) doesn't seem like the right move, since it would suggest that it shouldn't be implemented at all.
>
> Can we just freeze it open instead?

We as a project do not consider closed-as-stale issues to be automatically "declined" (we have some language in GEPs specifically around declined things, and for issues we would explicitly call that out). It's also important to note that closed does not mean "dead, gone, and can never come back"; it is more intended to mean "unplanned and unprioritized for the moment". We have some documentation in our contributing guide that provides a bit more color on how we think of these.

So, all that said: you're someone who wants the feature and wants to see it potentially move forward so you can use it some day. That's great, and we appreciate you coming here to talk to us about it and letting us know! What's generally best for moving issues forward, however, is for you yourself to jump in right here while you're already interested.

Driving a proposal like this forward doesn't mean you have to dedicate a ton of time to it, and it doesn't mean you have to do the whole thing yourself (in fact, we advise against this and recommend finding allies!). It means you dedicate some amount of time that works for you to drive the discussion and build consensus within the community. Importantly, you won't be alone in doing this (as it's the job of the maintainers, and often the will of the community, to help support people championing issues).

Ultimately there's really no better way to see an issue like this move forward than jumping in and trying to be the change you want to see in the project! If you feel inclined to do so, let us know how we can support you. :bow:

k8s-triage-robot commented 6 days ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 6 days ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/gateway-api/issues/1141#issuecomment-2484579432):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.