open-telemetry / opentelemetry-rust

The Rust OpenTelemetry implementation
https://opentelemetry.io
Apache License 2.0

[Bug]: tracing-opentelemetry+opentelemetry-prometheus+opentelemetry_sdk stopped working together after upgrade #1929

Open biryukovmaxim opened 4 months ago

biryukovmaxim commented 4 months ago

What happened?

Code works with these deps:

```toml
opentelemetry-prometheus = "0.15.0"
opentelemetry_sdk = { version = "0.22.1" }
tracing-opentelemetry = { version = "0.23.0" }

# unrelated
prometheus = { version = "0.13.4" }
tokio = { version = "1.14.1", features = ["full"] }
tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["env-filter", "json"] }
```
```rust
use opentelemetry_sdk::metrics::SdkMeterProvider;
use prometheus::{Encoder, TextEncoder};
use std::time::Duration;
use tracing::level_filters::LevelFilter;
use tracing_opentelemetry::MetricsLayer;
use tracing_subscriber::{fmt, layer::SubscriberExt, util::SubscriberInitExt, EnvFilter, Layer};

#[tokio::main]
async fn main() {
    let prometheus_registry = init_tracing();

    // emit a metric-bearing tracing event once per second
    tokio::spawn(async move {
        loop {
            tracing::trace!(monotonic_counter.foo = 1);
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
    });

    // periodically dump the registry contents in the Prometheus text format
    let foo = tokio::task::spawn(async move {
        loop {
            tokio::time::sleep(Duration::from_secs(1)).await;
            let encoder = TextEncoder::new();
            let mut buffer = vec![];
            encoder
                .encode(&prometheus_registry.gather(), &mut buffer)
                .unwrap();
            println!("{}", String::from_utf8_lossy(&buffer));
        }
    });
    foo.await.unwrap();
}

fn init_tracing() -> prometheus::Registry {
    let fmt_layer = fmt::Layer::new()
        .with_ansi(false)
        .json()
        .flatten_event(true)
        .with_span_list(true)
        .with_filter(
            EnvFilter::builder()
                .with_default_directive(LevelFilter::INFO.into())
                .from_env_lossy(),
        );

    // create a new prometheus registry
    let prometheus_registry = prometheus::Registry::new();

    // configure OpenTelemetry to export into this registry
    let exporter = opentelemetry_prometheus::exporter()
        .with_registry(prometheus_registry.clone())
        .build()
        .unwrap();
    let provider = SdkMeterProvider::builder()
        .with_reader(exporter)
        .build();

    // bridge `tracing` events (monotonic_counter.*, etc.) to OTel metrics
    let opentelemetry_metrics = MetricsLayer::new(provider);
    tracing_subscriber::Registry::default()
        .with(opentelemetry_metrics)
        .with(fmt_layer)
        .try_init()
        .expect("Failed to init tracers");
    prometheus_registry
}
```

When I upgrade the deps to:

```toml
opentelemetry-prometheus = "0.16.0"
opentelemetry_sdk = { version = "0.23.0" }
# tracing-opentelemetry = { version = "0.24.0" }
```

every call produces the error: `OpenTelemetry metrics error occurred. Metrics error: reader is shut down or not registered`

API Version

0.23.0, 0.22.0

SDK Version

0.23.0, 0.22.1

What Exporter(s) are you seeing the problem on?

Prometheus

Relevant log output

```
OpenTelemetry metrics error occurred. Metrics error: reader is shut down or not registered
```

451846939 commented 3 months ago

I also encountered this problem. The provider is dropped as soon as the call to `init_tracing` returns, so the weak reference held internally can no longer be upgraded. Extending the lifetime of the provider should resolve the issue.
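The failure mode can be sketched with only `std`'s `Arc`/`Weak` (the `MeterProvider` struct below is a stand-in for illustration, not the SDK's actual type): if the only strong reference dies when the init function returns, every later attempt to reach the provider through a weak handle fails.

```rust
use std::sync::{Arc, Weak};

struct MeterProvider; // stand-in for SdkMeterProvider, illustration only

// mimics the buggy pattern: the last strong reference dies on return
fn init_tracing() -> Weak<MeterProvider> {
    let provider = Arc::new(MeterProvider);
    Arc::downgrade(&provider)
    // `provider` (the only strong reference) is dropped here
}

fn main() {
    let weak = init_tracing();
    // the handle can no longer be upgraded, analogous to
    // "reader is shut down or not registered"
    assert!(weak.upgrade().is_none());

    // keeping a strong reference alive (e.g. returning the provider from
    // init_tracing and binding it in main) keeps the weak handle valid
    let provider = Arc::new(MeterProvider);
    let weak = Arc::downgrade(&provider);
    assert!(weak.upgrade().is_some());
}
```

Applied to the code in this issue, that would mean returning the `SdkMeterProvider` from `init_tracing` alongside the registry and keeping it bound for the life of `main`.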

fraillt commented 2 months ago

It might be this issue: you need to explicitly shut down metrics and tracing before the tokio runtime is shut down. Basically, once you register them, they are assigned to a global static variable, which is only destroyed after you exit from the main function. Both metrics and tracing have shutdown functions in order to flush their state before exiting.

If I'm correct, then adding these lines at the end of the main function should solve your issue :)

```rust
global::shutdown_tracer_provider();
// reset the meter provider so the real one gets dropped and shut down
global::set_meter_provider(NoopMeterProvider::default());
```
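The "destroyed after you exit from main" detail can be underlined with a `std`-only sketch (the `Provider` type below is hypothetical, not the SDK's): Rust never runs destructors for `static` items, so state parked in a global is flushed only if something calls a shutdown function explicitly before the process (and any runtime it needs) goes away.

```rust
use std::sync::Mutex;

// hypothetical stand-in for a globally registered provider with buffered state
struct Provider {
    buffered: Vec<u64>,
    flushed: bool,
}

impl Provider {
    fn shutdown(&mut self) {
        // flush whatever is still buffered
        self.buffered.clear();
        self.flushed = true;
    }
}

// `Mutex::new` is const since Rust 1.63, so this mirrors a global registration
static GLOBAL_PROVIDER: Mutex<Option<Provider>> = Mutex::new(None);

fn main() {
    *GLOBAL_PROVIDER.lock().unwrap() = Some(Provider {
        buffered: vec![1, 2, 3],
        flushed: false,
    });

    // statics are never dropped, so without this explicit call the
    // buffered state would simply be lost when the process exits
    GLOBAL_PROVIDER.lock().unwrap().as_mut().unwrap().shutdown();
    assert!(GLOBAL_PROVIDER.lock().unwrap().as_ref().unwrap().flushed);
}
```

This is the same reason the suggested `global::shutdown_tracer_provider()` / meter-provider reset calls belong at the very end of `main`, after the spawned tasks are done.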