Closed: @estroz closed this issue 3 months ago.
I love it 👍 The only possible con here is whether it would work nicely on other vendors such as OCP. Have you checked that already?
`NetworkPolicy` has been around for a long time, so all full-featured k8s distros support it.
Hi @estroz,
I raised this concern because I remember some problems working with Ingress on OCP in the past. Regarding OCP, I checked that the networking.k8s.io/v1 API is supported only since 4.4: https://docs.openshift.com/container-platform/4.6/rest_api/network_apis/networkpolicy-networking-k8s-io-v1.html.
So, since Kubebuilder, like SDK, needs to support earlier k8s cluster versions, we would also need to check whether we should allow scaffolding it with networking.k8s.io/v1beta1 as well.
Hi @camilamacedo86 @estroz
Network policies are implemented by the CNI (Container Network Interface) plugin in the cluster. This means that for users deploying a Kubebuilder operator on a cluster whose CNI does not support NetworkPolicy (for example, Flannel), this restriction policy will not work. This con also needs to be considered while implementing this.
> This means that for users deploying a Kubebuilder operator on a cluster whose CNI does not support NetworkPolicy (for example, Flannel), this restriction policy will not work.
Right, and this potentially makes NetworkPolicy scaffolding (by default, at least) a non-starter, since it locks projects out of certain CNIs.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
Also, regarding the motivations:
> kube-rbac-proxy, while a nice feature, is a third-party workaround for a problem (restricting pod access to `/metrics`)
Then, see the motivation for this project: https://github.com/brancz/kube-rbac-proxy/#motivation
> I developed this proxy in order to be able to protect Prometheus metrics endpoints. In a scenario, where an attacker might obtain full control over a Pod, that attacker would have the ability to discover a lot of information about the workload as well as the current load of the respective workload. This information could originate for example from the node-exporter and kube-state-metrics. Both of those metric sources can commonly be found in Prometheus monitoring stacks on Kubernetes. This project was created to specifically solve the above problem, however, I felt there is a larger need for such a proxy in general.
Then, see that the auth-proxy was built to:
> In Kubernetes clusters without NetworkPolicies any Pod can perform requests to every other Pod in the cluster. This proxy was developed in order to restrict requests to only those Pods, that present a valid and RBAC authorized token or client TLS certificate.
However, I also agree with the pros of using NetworkPolicy (removing a third-party dependency and reducing deps). But we might want to analyse and discuss this one further.
Removing the assignment from @estroz since he is not working on this one, so others might want to pick it up. Adding needs-triage so that it can be re-discussed in the meetings, and removing the frozen lifecycle so that we can check how relevant it is (or is not) for the community and whether it should be prioritized.
/assign
/assign @yashsingh74
Good arguments for moving forward with this one:
More info: https://github.com/kubernetes-sigs/kubebuilder/issues/1885#issuecomment-1242998719
Another good reason, and a potential deadline: gcr.io Container Registry is deprecated and is scheduled to be moved to Artifact Registry on May 15, 2024.
https://cloud.google.com/container-registry/docs/deprecations/container-registry-deprecation
Since we have enough positive responses to move forward with this approach, I'm going to add this to the agenda for our next community meeting. Meanwhile, if anyone would like to take this up and contribute, please feel free to assign yourself.
I'm interested!
/assign mjlshen
On the other hand, AWS VPC CNI doesn't support NetworkPolicy. There are a lot of Kubernetes users that use EKS and have no choice but to use that CNI driver.
Sorry to bring some negative perspectives, but to me NetworkPolicy and kube-rbac-proxy, although both security related, do not address the same use case:
In practice it means that with kube-rbac-proxy you can control which user or ServiceAccount has access, whereas with NetworkPolicy you can control which network segment has access. Both may be useful, but they are not interchangeable.
Thanks for bringing that up; after some reading, I agree. Is it worth providing or documenting a NetworkPolicy with information similar to the above, or do we think we shouldn't pursue this issue any further?
The other option, as @camilamacedo86 mentioned in the community meeting, is to build a plugin that adds this scaffolding on top of the main golang plugin. This would also relieve us of the burden of building the kube-rbac-proxy image ourselves (since it's not part of the k8s project: https://github.com/brancz/kube-rbac-proxy), as we could accept a user-provided image to be added to the project.
We can also consider using this feature from controller-runtime 0.16.0 https://github.com/kubernetes-sigs/controller-runtime/pull/2407 which I think perfectly fits our use case of locking down the metrics endpoint with K8s RBAC without kube-rbac-proxy
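For context, a minimal sketch of what wiring up that controller-runtime 0.16+ feature might look like in a manager's `main.go`, assuming the `metrics/server` and `metrics/filters` packages introduced alongside that PR; treat the exact wiring as illustrative rather than the final scaffold:

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
)

func main() {
	ctrl.SetLogger(zap.New())

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Metrics: metricsserver.Options{
			// Serve /metrics over HTTPS on the port kube-rbac-proxy used to own.
			BindAddress:   ":8443",
			SecureServing: true,
			// Authenticate (TokenReview) and authorize (SubjectAccessReview)
			// every scrape with K8s RBAC, replacing the kube-rbac-proxy sidecar.
			FilterProvider: filters.WithAuthenticationAndAuthorization,
		},
	})
	if err != nil {
		ctrl.Log.Error(err, "unable to create manager")
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		ctrl.Log.Error(err, "problem running manager")
		os.Exit(1)
	}
}
```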
I do prefer this approach and will start working in this direction:
> We can also consider using this feature from controller-runtime 0.16.0 https://github.com/kubernetes-sigs/controller-runtime/pull/2407 which I think perfectly fits our use case of locking down the metrics endpoint with K8s RBAC without kube-rbac-proxy
but with regard to this:
> On the other hand, AWS VPC CNI doesn't support NetworkPolicy. There are a lot of Kubernetes users that use EKS and have no choice but to use that CNI driver.
I noticed that AWS VPC CNI now supports NetworkPolicy! https://aws.amazon.com/blogs/containers/amazon-vpc-cni-now-supports-kubernetes-network-policies/
Hi @mjlshen
Your help would be very welcome if you are interested in this one. We really need to check this option. To make the expected steps here (IMHO) and what we need to do next clearer, I added the comment below, see: https://github.com/kubernetes-sigs/kubebuilder/issues/1885#issuecomment-1809841870
If you are unable to help us out, that would be sad, but please remove your assignment so that others can try to check this one.
Thank you very much for all your help. 🥇
Expected Outcome:
The outcome of this task is to make an informed decision regarding the adoption of NetworkPolicy in Kubebuilder. By evaluating its pros and cons and conducting testing, we aim to determine whether this change will enhance the project's functionality and align with community preferences.
Let's work together to evaluate this potential improvement and make an informed choice for the future of Kubebuilder.
Goals:
> We can also consider using this feature from controller-runtime 0.16.0 https://github.com/kubernetes-sigs/controller-runtime/pull/2407 which I think perfectly fits our use case of locking down the metrics endpoint with K8s RBAC without kube-rbac-proxy
We need to create a markdown table of the form `factor | using NetworkPolicies | using auth-proxy | using Controller Runtime`, so that we can evaluate the 3 options raised (a skeleton is sketched below).
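A possible skeleton for that table, pre-filled only with points already raised in this thread:

| factor | using NetworkPolicies | using auth-proxy (kube-rbac-proxy) | using Controller Runtime |
| --- | --- | --- | --- |
| extra dependency | none (native k8s API) | third-party image we must (re)build | none (built into controller-runtime 0.16+) |
| cluster requirements | CNI must support NetworkPolicy | none | none |
| what is restricted | which network segment has access | which user/ServiceAccount has access (RBAC) | which user/ServiceAccount has access (RBAC) |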
Steps:
a) Evaluate Pros and Cons:
Review comments to capture NetworkPolicy's benefits and drawbacks. Note that the argument against NetworkPolicy (CNI support) may no longer apply.
b) Create PR with Scaffold Changes:
c) Community Discussion:
Share the PR results in the Kubebuilder channel so that we can collect feedback and insights as we decide whether to merge the PR or retain kube-rbac-proxy.
Your help here is greatly appreciated by the whole community 🥇
> Note that the argument against NetworkPolicy (CNI support) may no longer apply
Is supporting NetworkPolicy a requirement for a CNI, or is it just an option? If it's optional, should we be requiring it?
Hi @mjlshen,
You wrote above:
> We can also consider using this feature from controller-runtime 0.16.0 https://github.com/kubernetes-sigs/controller-runtime/pull/2407 which I think perfectly fits our use case of locking down the metrics endpoint with K8s RBAC without kube-rbac-proxy
That might be an option. We need to create a proposal that compares the pros and cons of the three options. Then, if whoever is checking it concludes that option X is better, a PR with the changes to the default scaffold using that option would prove that it works and show the required changes to be performed.
Regarding controller-runtime:
See: https://github.com/kubernetes-sigs/kubebuilder/issues/1885#issuecomment-1809841870
We no longer use kube-rbac-proxy, and the desire to add network policies as an optional helper for users is mapped in https://github.com/kubernetes-sigs/kubebuilder/issues/3871; we have been looking at it in this PR: https://github.com/kubernetes-sigs/kubebuilder/pull/3853
So, I am closing this one in favor of https://github.com/kubernetes-sigs/kubebuilder/issues/3871
kube-rbac-proxy, while a nice feature, is a third-party workaround for a problem (restricting pod access to `/metrics`) that can be solved by a `NetworkPolicy`. For example, the following policy could be scaffolded by default so only manager-related pods could be selected (see the sketch below):
Benefits:
/kind feature
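The policy itself is not shown in the issue body above; a minimal illustrative sketch (not the exact policy proposed), assuming the `control-plane: controller-manager` label that Kubebuilder scaffolds on the manager pod and a hypothetical `metrics: enabled` label marking pods that should be allowed to scrape:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics-traffic
  namespace: system
spec:
  # Select only the manager pods scaffolded by Kubebuilder.
  podSelector:
    matchLabels:
      control-plane: controller-manager
  policyTypes:
    - Ingress
  ingress:
    # Allow ingress to the metrics port only from pods carrying the
    # (hypothetical) metrics: enabled label, e.g. a Prometheus scraper.
    - from:
        - podSelector:
            matchLabels:
              metrics: enabled
      ports:
        - port: 8443
          protocol: TCP
```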