paebersold-tyro opened this issue 2 weeks ago
@iblancasa anything jumping out as problematic here?
IIRC the --create-rbac-permissions
flag does not create RBAC for the target allocator/prometheus receiver.
However, it would be great to support it.
FYI, for clarity: my test setup did not use the target allocator (I'm aware the current Helm charts require you to manually set up the target allocator RBAC resources). Apologies for the confusion in the naming. My sample config is below.
```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: daemonset
  targetAllocator:
    enabled: false
  config:
    processors:
      batch: {}
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: test-pushgateway
              scrape_interval: 30s
              scrape_timeout: 10s
              honor_labels: true
              scheme: http
              kubernetes_sd_configs:
                - role: pod
                  namespaces:
                    names:
                      - app-platform-monitoring
              relabel_configs:
                # and pod is running
                - source_labels: [__meta_kubernetes_pod_phase]
                  regex: Running
                  action: keep
                # and pod is ready
                - source_labels: [__meta_kubernetes_pod_ready]
                  regex: true
                  action: keep
                # and only metrics endpoints
                - source_labels: [__meta_kubernetes_pod_container_port_name]
                  action: keep
                  regex: metrics
    exporters:
      debug: {}
    service:
      telemetry:
        logs:
          level: debug
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [debug]
```
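Until the operator automates this, the RBAC for the prometheus receiver's `kubernetes_sd_configs` with `role: pod` has to be created by hand. A minimal sketch of what that could look like; the service account name and namespace below are assumptions for illustration, so check what the operator actually generated in your cluster:

```yaml
# Hypothetical manifest: resource names, SA name, and namespace are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: collector-with-ta-pod-discovery
rules:
  # kubernetes_sd_configs with role: pod needs to get/list/watch pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-with-ta-pod-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collector-with-ta-pod-discovery
subjects:
  - kind: ServiceAccount
    name: collector-with-ta-collector  # operator-generated SA (assumed name)
    namespace: default                 # assumed namespace
```

Binding at cluster scope is only needed because pod discovery here crosses namespaces; a Role/RoleBinding in the scraped namespace can work if discovery is restricted to it.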
@paebersold-tyro I am not sure whether to describe this as a bug or a feature request, but I can definitely reproduce the issue. The root cause seems to be that RBAC creation and management is implemented per component, so support will have to be added gradually.
Correct, this should be an enhancement proposal to automate RBAC for the prometheus receiver.
I have updated the title; please edit it if it does not match what is being asked here.
Thanks for the clarification on the issue, and I'm fine with the title update. Ideally it would be great to have a note on exactly what --create-rbac-permissions
gives you out of the box, too.
Actually, the title should be changed, because the flag does nothing now: https://github.com/open-telemetry/opentelemetry-operator/blob/main/main.go#L149
Now we check whether the operator has permissions to create RBAC resources and, if it does, the operator creates them.
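Since the behavior now depends on what the operator's own service account is allowed to do, a quick way to check this from outside the operator is `kubectl auth can-i` with service-account impersonation. The SA name and namespace below are assumptions based on typical Helm chart defaults; adjust them for your install:

```shell
# Can the operator's service account create cluster-scoped RBAC?
# SA name/namespace are assumed defaults; adjust for your deployment.
kubectl auth can-i create clusterroles \
  --as=system:serviceaccount:opentelemetry-operator-system:opentelemetry-operator
kubectl auth can-i create clusterrolebindings \
  --as=system:serviceaccount:opentelemetry-operator-system:opentelemetry-operator
```

If either command prints `no`, the operator will silently skip creating RBAC for the collector.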
Component(s)
collector
What happened?
Description
I am running the opentelemetry-operator with the
--create-rbac-permissions
flag set. When a new OpenTelemetryCollector resource is created (e.g. mode: daemonset), new pods are created and a new serviceaccount is created as well. However, no new clusterroles or clusterrolebindings are created. This results in, for example, Prometheus scrape errors due to lack of permissions. No logs are generated on the operator-manager pod.
The clusterrole that the operator manager is using has access to create clusterroles/clusterrolebindings. (I am deploying via the Helm chart opentelemetry-operator version 0.62.0, https://open-telemetry.github.io/opentelemetry-helm-charts.)
Based on other issues raised previously, it seems this flag was optional but may no longer be required, with the permissions being granted automatically based on the operator's existing access. I would like clarification on this aspect too, please.
Steps to Reproduce
Run the opentelemetry-operator with the --create-rbac-permissions flag.
Expected Result
Clusterroles/bindings would be created when the new collector pods are created.
Actual Result
No new roles/bindings created
Kubernetes Version
1.29
Operator version
0.102.0
Collector version
0.102.0
Environment information
Serviceaccount used by manager
Clusterrolebinding
clusterrole for the operator manager (generated via helm chart)
Log output
No response
Additional context
Pods created via manager..
Associated service account
No clusterroles/etc associated
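To confirm what the operator actually generated for a collector, listing the RBAC objects is usually enough. The label selector below is an assumption (recent operator versions label generated resources with `app.kubernetes.io/managed-by`); grepping by the collector name is a fallback:

```shell
# List RBAC objects the operator may have generated.
# Label value is an assumption; verify against your install.
kubectl get clusterrole,clusterrolebinding \
  -l app.kubernetes.io/managed-by=opentelemetry-operator

# Fallback: search by collector name.
kubectl get clusterrole,clusterrolebinding | grep collector-with-ta
```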