Closed zachdaniel closed 6 years ago
I think we should make those ids not datadog specific, as this is meant to be platform agnostic (even though there is only one adapter at the moment), so perhaps just x-parent-trace-id or something along those lines.
I would go for adapter.headers; then each adapter can implement its own headers, or add them in addition to the agnostic ones. I would also suggest checking what the convention is for the general use case.
hmmm...perhaps having adapters provide plugs instead of adapter.headers? I hesitate to add web concepts to the trace adapter interface. We could just change the existing plug to be datadog specific, and then add the logic in there.
so instead of Spandex.Plug.AddContext it would be Spandex.Datadog.Plug.AddContext, and then it can do whatever it wants really.
Would we want to namespace the plugs together or the Datadog specific stuff together? I am leaning towards the latter but could see an argument for Spandex.Plugs.Datadog.*
Both seem fine to me
Otherwise, to more easily keep the plugs agnostic, we could just store the header names as configuration or plug option values. These could actually be compiled inline (as you definitely don't switch between such systems on the fly). As far as I've read Google's Dapper paper and seen implementations based on it, most systems have both trace and span ids. They could be named differently, but I assume that in the end you need both in order to proceed with a general trace.
wdys?
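The option-based approach described above could be sketched roughly like this. This is not Spandex's actual implementation; the module name, option names, and defaults are all hypothetical:

```elixir
# Hypothetical plug that stays adapter-agnostic by taking the header names
# as plug options rather than hard-coding Datadog-specific values.
defmodule MyApp.Plug.TraceHeaders do
  @behaviour Plug
  import Plug.Conn

  def init(opts) do
    # Illustrative defaults only; real header names depend on the adapter.
    Keyword.merge(
      [trace_id_header: "x-parent-trace-id", span_id_header: "x-parent-span-id"],
      opts
    )
  end

  def call(conn, opts) do
    trace_id = conn |> get_req_header(opts[:trace_id_header]) |> List.first()
    span_id  = conn |> get_req_header(opts[:span_id_header]) |> List.first()

    conn
    |> assign(:parent_trace_id, trace_id)
    |> assign(:parent_span_id, span_id)
  end
end
```

In a router or endpoint, the Datadog header names would then just be option values, e.g. `plug MyApp.Plug.TraceHeaders, trace_id_header: "x-datadog-trace-id"`.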
Hmm...
Yeah that is probably best. As a rule, we never want to compile something that might be configured by an environment variable. One should never say:
@module_attr Confex.get(:foo, :bar)
This stops one from being able to configure things via the environment (a problem especially when using deployments).
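To illustrate the rule above with a hedged Elixir sketch (module names are hypothetical; the same point applies to Confex or plain Application config):

```elixir
# Anti-pattern: the module attribute is evaluated once, at compile time,
# so changing the environment variable later (e.g. in a release) has no effect.
defmodule MyApp.Compiled do
  @trace_header Application.get_env(:my_app, :trace_header, "x-parent-trace-id")
  def trace_header, do: @trace_header
end

# Preferred: read the configuration at runtime, on each call,
# so deployments can override it via the environment.
defmodule MyApp.Runtime do
  def trace_header do
    Application.get_env(:my_app, :trace_header, "x-parent-trace-id")
  end
end
```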
While we are at it, doesn't it make more sense to actually replace the EndTrace plug with a register_before_send callback?
The main reason was that the trace itself might not necessarily be done at that point. For instance, if we wanted to include the time it takes to render the response and clean up after the request, then register_before_send is not sufficient.
do you have any example of how it would be done in Phoenix?
Do you mean an example of the plugs in a Phoenix endpoint? Or how it could be simulated with a register_before_send? I haven't set up an example application, and our usage is in a closed-source application. I could set one up if need be.
I know how I can do it with before_send; I'm looking more for the plugs in a Phoenix endpoint, or in a plain Plug.Router. Personally I use a small wrapper around Plug.Router, which is why I'm asking.
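For context, a Plug.Router pipeline using the plug names mentioned in this thread might look like the sketch below. This is an assumption about usage, not a confirmed example from the Spandex docs; the router module and route are made up:

```elixir
# Hypothetical Plug.Router wiring for the plugs discussed in this thread.
defmodule MyApp.Router do
  use Plug.Router

  plug Spandex.Plug.StartTrace   # start the trace as early as possible
  plug :match
  plug Spandex.Plug.AddContext   # tag the trace with route/conn metadata
  plug :dispatch
  plug Spandex.Plug.EndTrace     # runs after dispatch, so render/cleanup time
                                 # is included (unlike register_before_send)

  get "/health" do
    send_resp(conn, 200, "ok")
  end
end
```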
it's good that we didn't add it just yet; based on DataDog/dd-trace-rb#151, the headers have been aligned to x-datadog-trace-id and x-datadog-parent-id
Was there a conclusion on how you wished to implement this? We would like to use distributed tracing across some services - I may be able to pick this up if there's a consensus on the way to go about it.
I think the best way is to modify the StartTrace plug to accept a list of headers to look in for a trace id. If it finds one of those, it uses Spandex.continue_trace instead of starting a new trace. Feel free to take a stab at it!
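The suggestion above could be sketched as follows. This is only an outline of the idea, under assumptions: the option name, the plug module, and the exact arities of Spandex.start_trace/Spandex.continue_trace are hypothetical here:

```elixir
# Hypothetical StartTrace-style plug that checks a configurable list of
# request headers and continues an incoming trace instead of starting a new one.
defmodule MyApp.Plug.StartTrace do
  @behaviour Plug
  import Plug.Conn

  def init(opts) do
    Keyword.put_new(opts, :trace_id_headers, ["x-datadog-trace-id"])
  end

  def call(conn, opts) do
    case incoming_trace_id(conn, opts[:trace_id_headers]) do
      nil ->
        # No recognized header: start a fresh trace.
        Spandex.start_trace("request")

      trace_id ->
        # Header found: join the caller's trace.
        # (Exact continue_trace signature is an assumption.)
        Spandex.continue_trace("request", trace_id)
    end

    conn
  end

  # Returns the first non-empty value among the configured headers, or nil.
  defp incoming_trace_id(conn, headers) do
    Enum.find_value(headers, fn header ->
      case get_req_header(conn, header) do
        [value | _] -> value
        [] -> nil
      end
    end)
  end
end
```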
Resolved by #36
as @driv3r mentioned in #5, we could easily support distributed tracing with request headers, e.g. x-ddtrace-parent_trace_id and x-ddtrace-parent_span_id