Open cmichr opened 3 years ago
Hi, if you deploy the manifests given as an example, the ClusterRole bound to the node-exporter service account is "view", and this role does not permit get/list/watch on nodes. You need to change the file 00-roles.yaml to define another ClusterRole with all the rights you need, and change the RoleBinding to bind the service account to that role; with this, getting node events works.
Kubernetes documentation: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
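A quick way to confirm what the service account is actually allowed to do is `kubectl auth can-i` with impersonation. The namespace and service-account names below are the ones from the example manifests in this thread; adjust them to match your deployment:

```shell
# Check whether the exporter's service account may read nodes.
# Each command prints "yes" or "no".
kubectl auth can-i get nodes \
  --as=system:serviceaccount:kube-event-export:event-exporter
kubectl auth can-i list nodes \
  --as=system:serviceaccount:kube-event-export:event-exporter
kubectl auth can-i watch nodes \
  --as=system:serviceaccount:kube-event-export:event-exporter
```

If any of these print "no" before you apply the extra ClusterRole and binding, that explains the forbidden errors.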
@cmichr: Here is an example for your reference:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-event-export
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-event-export
  name: event-exporter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-event-view
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - persistentvolumeclaims/status
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - services/status
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - daemonsets/status
      - deployments
      - deployments/scale
      - deployments/status
      - replicasets
      - replicasets/scale
      - replicasets/status
      - statefulsets
      - statefulsets/scale
      - statefulsets/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
      - horizontalpodautoscalers/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - cronjobs/status
      - jobs
      - jobs/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - daemonsets/status
      - deployments
      - deployments/scale
      - deployments/status
      - ingresses
      - ingresses/status
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicasets/status
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
      - poddisruptionbudgets/status
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - ingresses/status
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - nodes
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-event-view
subjects:
  - kind: ServiceAccount
    namespace: kube-event-export
    name: event-exporter
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: kube-event-export
data:
  config.yaml: |
    logLevel: info
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
    receivers:
      - name: "dump"
        stdout: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: kube-event-export
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: event-exporter
        version: v1
    spec:
      serviceAccountName: event-exporter
      containers:
        - name: event-exporter
          image: ghcr.io/opsgenie/kubernetes-event-exporter:v0.10
          imagePullPolicy: IfNotPresent
          args:
            - -conf=/data/config.yaml
          volumeMounts:
            - mountPath: /data
              name: cfg
      volumes:
        - name: cfg
          configMap:
            name: event-exporter-cfg
  selector:
    matchLabels:
      app: event-exporter
      version: v1
Thanks for the example here - pasting a simplified version in case it helps anyone. We use role aggregation on the view role, so we don't want to hardcode all of this. The important part, on top of the default "view" ClusterRole, is the following:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-exporter-extra
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
You can bind that separately to the service account alongside the version that ships in the deploy folder:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-exporter-extra
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-exporter-extra
subjects:
  - kind: ServiceAccount
    namespace: kube-event-export
    name: event-exporter
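Alternatively, since the built-in `view` ClusterRole is built by aggregation, the same rules can be folded into `view` itself by adding the standard `rbac.authorization.k8s.io/aggregate-to-view` label, which avoids the extra binding. This is a sketch; note that it widens the default `view` role for everything bound to it cluster-wide, which may not be what you want:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-exporter-extra
  labels:
    # Picked up by the aggregation controller and merged into "view"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
```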
Event Exporter is able to get pod events; node events, however, fail with a permissions error. The configuration of the deployment etc. is the same... See the error messages below:
{"level":"error","error":"nodes \"zzz.yyy.vvv\" is forbidden: User \"system:serviceaccount:monitoring:event-exporter\" cannot get resource \"nodes\" in API group \"\" at the cluster scope","time":"2021-08-17T16:05:12Z","caller":"/app/pkg/kube/watcher.go:72","message":"Cannot list labels of the object"}
{"level":"error","error":"nodes \"zzz.yyy.vvv\" is forbidden: User \"system:serviceaccount:monitoring:event-exporter\" cannot get resource \"nodes\" in API group \"\" at the cluster scope","time":"2021-08-17T16:05:12Z","caller":"/app/pkg/kube/watcher.go:81","message":"Cannot list annotations of the object"}