Closed. ibrokethecloud closed this issue 2 years ago.
I think you've found something that's incorrect. Controllers normally want to watch resources at cluster scope so they can act on anything deployed to the cluster. The current behavior restricts the controller to a single namespace, which isn't ideal.
Thanks for raising the issue. Will mark as a bug.
@chrisdoherty4 I will send a PR. It's a minor fix which I have patched locally; I just wanted to be sure there wasn't a specific reason for the current behavior.
Not sure why this wasn't closed automatically. Fixed with #32.
Apologies for the late response.
The kube-namespace flag to restrict rufio to a namespace was intentional. If the controller was not namespaced, it would default to looking for resources at cluster scope, which means it would look for Secrets cluster-wide and fail, because the manifests we generate don't give rufio that level of permission on Secrets.
The link below shows how the controller-runtime client looks for resources when the namespace is "":
https://github.com/tinkerbell/rufio/blob/main/controllers/machine_controller.go#L208
Of course the controller can be given cluster-scope permissions on Secrets, but the namespaced lookup was intended as the default behavior.
The user can still specify the kube-namespace flag to restrict the controller to only watch objects in a particular namespace.
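For reference, a minimal sketch of how that restriction looks with the controller-runtime versions of that era, which exposed a Namespace field on the manager options (newer releases moved this into the cache options). The namespace value is illustrative, not rufio's actual configuration:

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// An empty Namespace means the manager's cache lists and watches
	// namespaced resources (including Secrets) across the whole cluster,
	// which requires cluster-wide RBAC. Setting it pins every informer,
	// and therefore every cached Get/List, to a single namespace.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: "tink-system", // assumed namespace, for illustration only
	})
	if err != nil {
		panic(err)
	}

	// Controllers would be registered with mgr here.

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```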
I think I appreciate the problem @pokearu is highlighting. If a resource, A, is watched across all namespaces (common for controllers) and references a resource, B, then by extension the controller also needs access to B. B, however, is a Secret, so it's difficult to know what the right thing to do is.
After thinking on it a while, I think there are 2 options: (1) match the access control of Secrets with Machines (a ClusterRoleBinding); (2) force the operator to define Secrets in the same namespace as Machines and to specify which namespace a given Rufio deployment watches.
(1) is less desirable from a security point of view but works out of the box. (2) forces users to create more resources if they place Machines in different namespaces, is somewhat contrary to typical controller deployments, and I don't think it's doable without Helm or equivalent.
It's worth pointing out, assuming I'm not mistaken, that other high-profile projects like ArgoCD deploy with a ClusterRoleBinding and expect users to reduce permissions as desired. This ensures the application works out of the box, and the docs tell users to restrict further as needed. Because of this, I lean toward using a ClusterRoleBinding and letting users scope down, given they dictate their deployment and can configure the RBAC appropriately.
```
~/Workspace/github.com/chrisdoherty4/exp-kubeclient/scope > kubectl describe clusterrole/argocd-application-controller
Name:         argocd-application-controller
Labels:       app.kubernetes.io/component=application-controller
              app.kubernetes.io/name=argocd-application-controller
              app.kubernetes.io/part-of=argocd
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]
```
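For illustration, a rough sketch of what option (1) could look like in a kubebuilder-style project, assuming rufio's manifests are generated with controller-gen markers; the group names and verbs below are illustrative, not rufio's actual annotations:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// MachineReconciler is a stand-in for the real reconciler; only the RBAC
// markers matter for this sketch.
type MachineReconciler struct {
	client.Client
}

// controller-gen turns the markers below into a role. Giving Secrets the
// same cluster-wide scope as Machines, and binding the result with a
// ClusterRoleBinding, lets the controller resolve secret references in any
// namespace out of the box.
//
//+kubebuilder:rbac:groups=bmc.tinkerbell.org,resources=machines,verbs=get;list;watch;update;patch
//+kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch

func (r *MachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Reconciliation logic is unchanged; only the generated RBAC widens.
	return ctrl.Result{}, nil
}
```

Binding the generated role with a ClusterRoleBinding (as ArgoCD does above) is what makes the Secret lookups work regardless of namespace; users who want tighter scoping can swap it for namespaced RoleBindings.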
rufio's main.go currently overrides an empty --kube-namespace flag from the k8s client: https://github.com/tinkerbell/rufio/blob/main/main.go#L82-L90
As a result, rufio only watches objects in the namespace it is currently deployed in.
This breaks processing of objects created in other namespaces.
Is there a specific reason for this?
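For illustration, a minimal sketch of the kind of override being described, not rufio's actual main.go; the fallback to the service account namespace file is an assumption made for the example:

```go
package main

import (
	"flag"
	"os"
	"strings"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// When --kube-namespace is left empty, the manager ends up pinned to
	// the namespace the controller itself runs in, instead of staying
	// cluster-scoped.
	kubeNamespace := flag.String("kube-namespace", "", "Namespace the controller watches")
	flag.Parse()

	ns := *kubeNamespace
	if ns == "" {
		// Assumed fallback for illustration: read the pod's own namespace
		// from the service account mount.
		if b, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace"); err == nil {
			ns = strings.TrimSpace(string(b))
		}
	}

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Namespace: ns})
	if err != nil {
		panic(err)
	}
	_ = mgr // controllers would be registered and mgr.Start called here
}
```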
Expected Behaviour
rufio should watch objects at cluster scope unless --kube-namespace is explicitly specified.
Current Behaviour
Possible Solution
Steps to Reproduce (for bugs)
Context
Your Environment
Operating System and version (e.g. Linux, Windows, MacOS):
How are you running Tinkerbell? Using Vagrant & VirtualBox, Vagrant & Libvirt, on Packet using Terraform, or give details:
Link to your project or a code example to reproduce issue: