grafana / beyla

eBPF-based autoinstrumentation of web applications and network metrics
https://grafana.com/oss/beyla-ebpf/
Apache License 2.0

Kubernetes service discovery not working without Kubernetes decoration #673

Open aabmass opened 8 months ago

aabmass commented 8 months ago

I am running Beyla as a DaemonSet in GKE and want to instrument only processes that are part of a pod:

```yaml
discovery:
  services:
  # only gather metrics from workloads running as a pod
  - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s
```

I'm finding this doesn't work unless I also enable the Kubernetes decorator:

```yaml
discovery:
  services:
  # only gather metrics from workloads running as a pod
  - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s
attributes:
  kubernetes:
    enable: true
```

In my particular case, I'd like to use the OTel collector's k8sattributesprocessor to add this metadata instead (happy to expand more on why).
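As a sketch of what I mean, a collector pipeline along these lines would do the decoration on the collector side (the pipeline and exporter names here are illustrative, not our actual config):

```yaml
# Illustrative OTel Collector config: decorate Beyla spans with pod
# metadata in the collector (k8sattributesprocessor) instead of in Beyla.
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.deployment.name
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlp]
```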

grcevski commented 8 months ago

Thanks @aabmass, yes, this is a current limitation. We'll work on removing it so that you can enable Kubernetes discovery without the decoration. We'd really appreciate it if you could let us know why you'd use the OTel Collector for decoration instead; even if it's something we can't fix, we'd like the feedback.

I wonder if, for now, there's a way to drop our k8s attributes on the collector side and then inject theirs. I think this might work: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README.md
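Something like the following might work, assuming Beyla's decoration lands as resource attributes (this is a sketch: the processor alias and the exact attribute keys dropped are illustrative, and it uses the contrib resource processor's delete action rather than the filter processor linked above):

```yaml
# Illustrative: delete Beyla-provided k8s resource attributes, then
# let the collector's k8sattributes processor inject its own.
processors:
  resource/drop-beyla-k8s:
    attributes:
      - key: k8s.pod.name
        action: delete
      - key: k8s.namespace.name
        action: delete
  k8sattributes: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource/drop-beyla-k8s, k8sattributes]
      exporters: [otlp]
```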

aabmass commented 8 months ago

Thanks! We are creating service graph metrics in the collector from Beyla spans, and I want to resolve the k8s pod name for both server.address and client.address.
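For context, in collector terms this looks roughly like the following (a sketch assuming the contrib servicegraph connector; the dimension and exporter names are illustrative):

```yaml
# Illustrative: derive service graph metrics from Beyla spans with the
# servicegraph connector, keyed by the resolved pod name.
connectors:
  servicegraph:
    dimensions:
      - k8s.pod.name
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [servicegraph]
    metrics:
      receivers: [servicegraph]
      exporters: [prometheus]
```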

One other concern @dashpole brought up: since we're running Beyla as a DaemonSet, doing k8s decoration in Beyla would create a watch on every node, which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is precautionary; we aren't seeing any issues right now.

> I wonder for now if there's a way to drop our k8s annotations at the collector side and then inject theirs.

Yes, there are a few workarounds; I mainly wanted to raise the issue in case the docs need to be updated.

dashpole commented 8 months ago

> doing k8s annotation in Beyla would create a watch in every node which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is more precautionary, we aren't seeing any issues right now.

For "regular" HTTP metrics this isn't an issue, since in theory the pod watch can be filtered down to only the pods running on the same node as Beyla. But for service graph metrics, the client or server address can belong to a pod on a different node, so the watch used by Beyla can no longer be filtered that way.
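The node-local filtering described above corresponds to a Kubernetes field selector on the pod watch. A minimal sketch of how that selector is expressed as API query parameters (the helper below is hypothetical, not Beyla code):

```go
package main

import (
	"fmt"
	"net/url"
)

// podWatchQuery builds the query string for a node-local pod watch:
// with a spec.nodeName field selector, the API server only sends
// events for pods scheduled on this node. (Hypothetical helper.)
func podWatchQuery(nodeName string) string {
	q := url.Values{}
	q.Set("watch", "true")
	q.Set("fieldSelector", "spec.nodeName="+nodeName)
	return q.Encode()
}

func main() {
	// Appended to the pod list endpoint, e.g. /api/v1/pods?<query>
	fmt.Println(podWatchQuery("node-a"))
}
```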