Open CMCDragonkai opened 2 years ago
It seems a lot of the complexity is due to vendor fragmentation and the attempt to make everything compatible with everything else.
Most tracing tools, like https://nodejs.org/api/tracing.html and chrome://tracing,
expect a finite dataset: a trace is assumed to have a beginning and an end. That's why tracing has always been "request" driven. OpenTelemetry is just deriving from work that came before, like https://github.com/gaogaotiantian/viztracer, https://github.com/janestreet/magic-trace, https://github.com/kunalb/panopticon and more.
I'm interested in more than just request-driven tracing: live, infinite traces (call it continuous tracing) that show finished and live spans at the same time, and correlate them too. I'm guessing we need zoomable levels of detail and the ability to filter out irrelevant information dynamically.
OpenTelemetry in particular does not appear to emit a span until it has ended. Knowing when a span started, even if it has not ended yet, would be useful for live continuous tracing.
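To make the idea concrete, here is a minimal sketch (hypothetical, not OpenTelemetry's API) of a tracer that emits an event at span start as well as at span end, so a live visualiser can render unfinished spans:

```javascript
// Hypothetical continuous-tracing sketch: every span produces a 'start'
// event immediately, and a 'stop' event when (if ever) it ends.
let counter = 0;

function startSpan(name, emit) {
  const id = `span-${++counter}`; // hypothetical id scheme for illustration
  emit({ type: 'start', id, name, timestamp: Date.now() });
  return {
    end() {
      emit({ type: 'stop', id, name, timestamp: Date.now() });
    },
  };
}

// Usage: collect events; a real system would stream these out as they occur,
// so a span that never ends still shows up as a live span.
const events = [];
const span = startSpan('request', (e) => events.push(e));
span.end();
console.log(events.map((e) => e.type)); // [ 'start', 'stop' ]
```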
Here's an old blog post demonstrating the integration of OpenTracing with ES6 promises.
This code is quite outdated. As can be seen from our initial experiments with OpenTracing, the tracing format isn't exactly what we want: spans are only output at the very end, which is not conducive to live and infinite/non-terminating visualisation.
However, the code does show that at one point OpenTracing was simple enough to be easily extended upon: one just used the OpenTracing core library, rather than bringing in as many dependencies as are required now.
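The kind of promise integration the blog post describes can be sketched as a small wrapper (a hypothetical helper, not the post's actual code, shown here with a stub tracer rather than the real OpenTracing library):

```javascript
// Sketch: wrap a promise-returning function so a span covers its lifetime,
// finishing the span whether it resolves or rejects.
function traced(name, fn, tracer) {
  return async (...args) => {
    const span = tracer.startSpan(name); // assumed OpenTracing-style API
    try {
      return await fn(...args);
    } catch (err) {
      span.setTag?.('error', true); // OpenTracing-style error tag, if supported
      throw err;
    } finally {
      span.finish(); // OpenTracing spans end with finish()
    }
  };
}

// Stub tracer for demonstration; a real one would be an opentracing Tracer.
const finished = [];
const stubTracer = {
  startSpan: (name) => ({ finish: () => finished.push(name) }),
};
const work = traced('work', async (x) => x * 2, stubTracer);
work(21).then((r) => console.log(r, finished)); // 42 [ 'work' ]
```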
What we want is something like this:
The tracing goes from top to bottom, and represents an "infinite" live visualisation of what the current state (lifecycles) of the system is.
Specification
OpenTelemetry is an overly complicated beast; it's far too complex to adopt into a logging system. However, the basic principles of tracing make sense. Here I'm showing how you can set one up for comparison testing, so we can derive a tracing schema and later visualise it ourselves or pass it into an OTLP-compatible visualiser.
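For reference, a typical way to start Jaeger for this kind of test (assuming the `jaegertracing/all-in-one` image; the exact flags and ports may differ between Jaeger versions):

```shell
docker run --rm --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```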
The above command runs Jaeger. Take note of port 4318, which is the OTLP protocol over HTTP.
Visit
localhost:16686
to view the Jaeger UI. Then any example code, for example https://github.com/open-telemetry/opentelemetry-js/blob/main/examples/basic-tracer-node/index.js, can run and push traces directly to the docker container.
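For deriving our own schema, it helps to see roughly what a span looks like on the wire. The field names below follow my reading of the OTLP/HTTP JSON encoding (treat them as an assumption, not a spec quote); the commented-out fetch shows where it would be POSTed:

```javascript
// Sketch of an OTLP/HTTP JSON export body for a single finished span.
const now = BigInt(Date.now()) * 1000000n; // OTLP timestamps are unix nanoseconds

const payload = {
  resourceSpans: [{
    resource: {
      attributes: [
        { key: 'service.name', value: { stringValue: 'example-service' } },
      ],
    },
    scopeSpans: [{
      scope: { name: 'example-tracer' },
      spans: [{
        traceId: '0123456789abcdef0123456789abcdef', // 16 bytes, hex-encoded
        spanId: '0123456789abcdef',                  // 8 bytes, hex-encoded
        name: 'example-span',
        kind: 1, // SPAN_KIND_INTERNAL
        startTimeUnixNano: now.toString(),
        endTimeUnixNano: (now + 1000000n).toString(),
      }],
    }],
  }],
};

// Sending it to the Jaeger container's OTLP HTTP port:
// fetch('http://localhost:4318/v1/traces', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload),
// });
```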
What is frustrating is: the console exporter just calls
console.log
and produces pretty-printed results that are not actual JSON. Thus you cannot just pipe the output to a relevant location.

The plan:
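A sketch of one possible direction (an assumption for illustration, not the actual plan): emit one JSON object per line (NDJSON) instead of pretty-printed output, so span records can be piped to a file or another process:

```javascript
// Sketch: serialise each span record as a single JSON line, so output is
// machine-parseable and pipeable, unlike console.log's pretty-printing.
function emitSpan(record, write = (line) => process.stdout.write(line + '\n')) {
  write(JSON.stringify(record));
}

// Usage: capture lines instead of writing to stdout.
const lines = [];
emitSpan({ name: 'example', start: 1, end: 2 }, (line) => lines.push(line));
// Each line parses back losslessly:
console.log(JSON.parse(lines[0]).name); // example
```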
Additional context