Open joshtriplett opened 3 years ago
Can you explain what the end goal is here? I understand this is an old issue, but I want to make sure we grasp the issue well.
@cijothomas The most convenient setup for opentelemetry-otlp lets it set up the pipeline automatically. Right now, however, I set up a pipeline by hand, so that I can aggregate spans from lightweight servers I spawn: I build an exporter using `HttpExporterBuilder` and `SpanExporterBuilder`, then use `TracerProvider::builder()` and `with_simple_exporter` to create a provider that exports traces via a simple impl of `SpanExporter` that sends them over a flume channel. Another task receives each `Vec<SpanData>` from the other end of the channel, and sends them to the "real" exporter. I also receive `Vec<SpanData>` instances (using the `Serialize`/`Deserialize` impls that exist in older versions, which is why I've been unable to upgrade) from lightweight servers spawned by this one, and feed them into the channel as well, so that they all get uploaded to Honeycomb.

The lightweight servers start and finish too quickly to let them rely on connecting directly to Honeycomb, and it'd be much more complex to have them send OTLP to the main server and process it there (plus, I'd still need to aggregate the resulting spans). The lightweight servers are ones where I'm counting milliseconds, and they have only a single binary running on them.
This approach lets me get all the spans in one place, log them both to stderr and to Honeycomb, and put as little logic as possible into the tracing on the lightweight servers.
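To make the shape of this setup concrete, here is a minimal sketch of a channel-backed exporter. The `SpanData` struct and `SpanExporter` trait below are simplified stand-ins, not the real opentelemetry SDK types (whose `export` is async and whose `SpanData` carries many more fields), and `std::sync::mpsc` stands in for the flume channel:

```rust
use std::sync::mpsc;
use std::thread;

// Simplified stand-in for the SDK's SpanData; the real struct has
// span context, timestamps, attributes, and more.
#[derive(Debug, Clone)]
struct SpanData {
    name: String,
}

// Simplified stand-in for the SDK's SpanExporter trait, whose real
// export method is async and returns an ExportResult.
trait SpanExporter {
    fn export(&mut self, batch: Vec<SpanData>);
}

// An exporter that forwards every batch over a channel instead of
// uploading it directly, so an aggregating task can merge batches
// from several sources before handing them to the "real" exporter.
struct ChannelExporter {
    tx: mpsc::Sender<Vec<SpanData>>,
}

impl SpanExporter for ChannelExporter {
    fn export(&mut self, batch: Vec<SpanData>) {
        // Ignore send errors: the aggregator may already have shut down.
        let _ = self.tx.send(batch);
    }
}

/// Send `batches` through a channel-backed exporter and return the
/// total number of spans the aggregating end received.
fn roundtrip(batches: Vec<Vec<SpanData>>) -> usize {
    let (tx, rx) = mpsc::channel();
    let mut exporter = ChannelExporter { tx };

    // The aggregating end: drain batches until the channel closes.
    // In the setup described above, this is where batches would be
    // forwarded to the real OTLP exporter.
    let aggregator = thread::spawn(move || rx.iter().map(|b| b.len()).sum::<usize>());

    for batch in batches {
        exporter.export(batch);
    }
    drop(exporter); // close the channel so the aggregator exits
    aggregator.join().unwrap()
}

fn main() {
    let batch = vec![
        SpanData { name: "request".into() },
        SpanData { name: "reply".into() },
    ];
    println!("aggregated {} spans", roundtrip(vec![batch]));
}
```

Deserialized batches received from the lightweight servers would be fed into the same channel, so both local and remote spans converge on one upload path.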
Thanks for the explanation! This is not something covered by the spec, so it is unlikely we'll provide explicit support for it.
For the purposes of batch processing or testing, would it be possible to import a `Vec<SpanData>` into opentelemetry to be sent out via opentelemetry-otlp or the stdout pipeline? Concretely, I'd love to have a method that accepts a `Vec<SpanData>` and sends it through the same pipeline as spans collected and fed in via tracing-opentelemetry.
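A sketch of what the requested entry point might look like. This is a hypothetical API, not an existing opentelemetry method, and the types are simplified stand-ins; the point is just that a pre-collected batch goes to the same exporter that pipeline-collected spans would reach:

```rust
// Simplified stand-ins for the SDK types (see the hedging above).
#[derive(Debug, Clone)]
struct SpanData {
    name: String,
}

trait SpanExporter {
    fn export(&mut self, batch: Vec<SpanData>);
}

// A toy exporter standing in for the stdout or OTLP exporter,
// counting spans so the effect is observable.
struct StdoutExporter {
    exported: usize,
}

impl SpanExporter for StdoutExporter {
    fn export(&mut self, batch: Vec<SpanData>) {
        self.exported += batch.len();
        for span in &batch {
            println!("span: {}", span.name);
        }
    }
}

/// Hypothetical requested method: feed an externally collected batch
/// into the same exporter that tracing-opentelemetry spans reach.
fn export_batch(exporter: &mut dyn SpanExporter, batch: Vec<SpanData>) {
    exporter.export(batch);
}

fn main() {
    let mut exporter = StdoutExporter { exported: 0 };
    // A batch deserialized from a lightweight server would arrive here.
    let batch = vec![SpanData { name: "startup".into() }];
    export_batch(&mut exporter, batch);
    assert_eq!(exporter.exported, 1);
}
```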