open-telemetry / opentelemetry-collector

OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

How would I go about getting internal traces for a collector itself? #2831

Closed jcleal closed 9 months ago

jcleal commented 3 years ago

My team and I are looking to monitor some collectors we have running for a project, and we were wondering how to pull the internal traces and send them "somewhere" (e.g. Jaeger, AWS X-Ray, etc.).

I'm thinking that I'd need to instrument a collector with an SDK to forward the traces somewhere, but I noticed that some traces are returned from the zpages extension. I'm just wondering how I'd forward those, if that's possible at the moment.

I previously asked this question over in https://cloud-native.slack.com/archives/C01N6P7KR6W, and was told to create an issue here 👍🏻
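For reference, the traces mentioned above come from the zpages extension, which only exposes them on a local debug endpoint rather than forwarding them to a backend. A minimal sketch of enabling it, as a fragment to merge into an existing collector config (the endpoint value shown is the conventional default and is only illustrative):

extensions:
  zpages:
    endpoint: localhost:55679

service:
  extensions: [zpages]

With this in place, internal traces can be inspected locally at /debug/tracez on that endpoint, but they are still not exported anywhere.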

alolita commented 3 years ago

@bogdandrutu this issue does not seem to be a must-have for GA. Can we move it to the post-GA phase 3 backlog?

bogdandrutu commented 3 years ago

I think we need to extend "config.Service" to allow configuring telemetry support:

  1. Exporters for traces/metrics
  2. Telemetry level (see that in the code).

Users would then do this via the service configuration (here is an example, but somebody needs to think carefully about whether this is the right config):

service:
  telemetry:
    defaultlevel: normal
    metrics:
      exporter:
        name: prometheus
        port: 123
    traces:
      exporter:
        name: x-ray
        endpoint: localhost:123
  extensions: [exampleextension/0, exampleextension/1]
  pipelines:
    traces:
      receivers: [examplereceiver]
      processors: [exampleprocessor]
      exporters: [exampleexporter]

julealgon commented 1 year ago

Is there a known workaround we can implement until this is put into practice? Not being able to see the collector itself in Datadog is very bad. The only way for us to see the collector logs is to log into an Azure VM and inspect the file we redirect stdout/stderr to.

This feels incredibly counterintuitive, considering that the whole purpose of the collector is to centralize and improve monitoring. The fact that the collector itself doesn't push its own logs and traces feels incomplete to me.

Would it be possible to set up a second collector instance that takes the output of the first and sends it back as telemetry, as a "file" input of sorts? This sounds incredibly hacky, but maybe it would work?

I just want to be able to see something about the collector in my Datadog instance.
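One possible workaround along these lines, sketched here under the assumption that the prometheus and filelog receivers from opentelemetry-collector-contrib are available in the build: have a collector scrape its own internal metrics endpoint (8888 by default) and tail the file its stdout/stderr is redirected to, then ship both through an ordinary pipeline. The file path and backend endpoint below are placeholders; a Datadog exporter from contrib could be substituted for otlphttp.

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otelcol-internal
          scrape_interval: 30s
          static_configs:
            - targets: ['localhost:8888']   # the collector's own internal metrics endpoint
  filelog:
    include: [/var/log/otelcol/collector.log]   # wherever stdout/stderr is redirected

exporters:
  otlphttp:
    endpoint: https://telemetry-backend.example.com:4318   # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
    logs:
      receivers: [filelog]
      exporters: [otlphttp]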

codeboten commented 9 months ago

I'm closing this issue, as the goal of exporting traces (as well as metrics and logs) is covered by this other issue: https://github.com/open-telemetry/opentelemetry-collector/issues/7532

Note that exporting traces is currently an experimental feature supported behind a feature gate: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/observability.md#how-we-expose-telemetry
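For anyone landing here later: at the time of this comment, the experimental path described in the linked doc involved starting the collector with a feature gate enabled, roughly:

otelcol --config=config.yaml --feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry

and then configuring an exporter for the collector's own traces under service::telemetry, along these lines:

service:
  telemetry:
    traces:
      processors:
        - batch:
            exporter:
              otlp:
                protocol: grpc/protobuf
                endpoint: https://telemetry-backend.example.com:4317   # placeholder backend

Both the gate name and the config schema here are recalled from the linked doc and may have changed since; treat this as a sketch and consult the observability doc for the current form.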