Closed jkroepke closed 2 months ago
The helm chart doesn't currently support namespaced rolebindings but we could add it. In addition, there's an undocumented, experimental option in the kubetail config named namespace
that restricts access to a given namespace at the UI level. It's currently being used on the demo site (in addition to rbac under the hood): https://www.kubetail.com/demo.
Would that work for you?
Also - thanks for the helm pull request! If you're interested in adding RoleBinding support too, that would be very much appreciated otherwise I'll work on it asap.
That's something I can cover. There are also a few options missing in the helm chart that I would like to add, e.g. securityContext, autoMountServiceAccountToken, and tpl support in the ingress. I already have a commit in draft.
Feel free to close the issue, or keep it open until the option is stable.
Thanks!
Sounds great! Looking forward to checking out the PR.
I had no success here:
I'm using the image kubetail/kubetail:0.1.9 and I have assigned RoleBindings and a Role to a service account.
The logs remain empty, and in the browser console there is an error showing that the service account tries to access nodes (which are a cluster-scoped resource).
Maybe a try/catch could be implemented that ignores the error if nodes are not available.
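That suggestion could look roughly like this (an illustrative Python sketch only; kubetail itself is written in Go, and ForbiddenError, fetch_nodes, and fetch_or_empty are hypothetical stand-ins, not kubetail APIs):

```python
# Hypothetical sketch: treat an RBAC 403 on a cluster-scoped resource
# as "unavailable" and return an empty result instead of failing.

class ForbiddenError(Exception):
    """Stand-in for a 403 Forbidden response from the Kubernetes API."""

def fetch_nodes():
    # Simulates a service account without cluster-scoped access to nodes.
    raise ForbiddenError("nodes is forbidden")

def fetch_or_empty(fetch):
    try:
        return fetch()
    except ForbiddenError:
        return []  # degrade gracefully; the UI just shows no node data

print(fetch_or_empty(fetch_nodes))  # prints []
```

With this approach the rest of the UI keeps working and only the node-related data is missing when cluster-scoped access is absent.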
Currently you have to give it cluster-scoped access to nodes and namespaces:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubetail
rules:
- apiGroups: [""]
  resources: ["nodes", "namespaces"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubetail
subjects:
- kind: ServiceAccount
  namespace: kubetail
  name: kubetail
roleRef:
  kind: ClusterRole
  name: kubetail
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: kubetail
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["cronjobs", "daemonsets", "deployments", "jobs", "pods", "pods/log", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: kubetail
subjects:
- kind: ServiceAccount
  namespace: kubetail
  name: kubetail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubetail
I think reworking the app to add a namespace-scoped option that doesn't need cluster-scoped access to get/list/watch namespaces shouldn't be too difficult, but adding an option to remove the need for nodes would take more work.
BTW, to enable the (experimental) namespace-scoped UI feature in the app just add namespace=<name>
to the app config:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kubetail
  name: kubetail
data:
  config.yaml: |
    addr: :4000
    auth-mode: cluster
    namespace: my-namespace
Sorry, I should have mentioned this: I did it via --set config.namespace="default"
with the helm chart and it worked (I could only see the pods of the configured namespace).
Please make the namespace a list so multiple namespaces can be supported.
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kubetail
  name: kubetail
data:
  config.yaml: |
    addr: :4000
    auth-mode: cluster
    namespaces:
    - my-namespace
Thanks for the suggestion. I should be able to start working on this soonish.
@jkroepke @rophy I removed the (hidden) namespace
config option and added support for multiple restricted namespaces via the allowed-namespaces
config option:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kubetail
  name: kubetail
data:
  config.yaml: |
    addr: :4000
    auth-mode: cluster
    allowed-namespaces:
    - ns1
    - ns2
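Presumably the option reduces to a membership check on each requested namespace. A minimal sketch of that behavior, assuming an empty list means "no restriction" (hypothetical Python; namespace_allowed is an illustration, not a real kubetail function):

```python
# Hypothetical sketch of an allowed-namespaces check: an empty list
# means "no restriction", otherwise only listed namespaces pass.

def namespace_allowed(namespace, allowed_namespaces):
    if not allowed_namespaces:
        return True  # no restriction configured
    return namespace in allowed_namespaces

allowed = ["ns1", "ns2"]
print(namespace_allowed("ns1", allowed))      # True
print(namespace_allowed("default", allowed))  # False
print(namespace_allowed("default", []))       # True
```

The actual enforcement lives in kubetail's Go backend; this only illustrates the configured semantics.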
If you use helm you can enable it using the following values.yaml file (chart v0.5.0):
kubetail:
  allowedNamespaces:
  - ns1
  - ns2
helm repo update
helm upgrade kubetail kubetail/kubetail --namespace kubetail --values /path/to/values.yaml
It's working live here https://www.kubetail.com/demo. Let me know if you run into any issues using the new feature!
Hi,
I would like to ask whether kubetail supports a namespaced installation as well, e.g. auth-mode=cluster with namespaced RoleBindings only.
Thanks!