apache / datafusion

Apache DataFusion SQL Query Engine
https://datafusion.apache.org/
Apache License 2.0

Support "Tracing" / Spans #9415

Open alamb opened 6 months ago

alamb commented 6 months ago

Is your feature request related to a problem or challenge?

"Tracing" is a visualization technique for understanding how potentially concurrent operations happen, and is common in distributed systems.

You can also visualize DataFusion executions using traces. Here is an example trace we have at InfluxData that is integrated into the rest of our system and visualized using https://www.jaegertracing.io/

[Screenshot: Jaeger trace of a DataFusion query at InfluxData, 2024-03-01]

This visualization shows when each operator started and stopped (and the operators are annotated with how much CPU time is spent, etc). These spans are integrated into the overall trace of a request through our cloud service, which allows us to understand where a request's time is spent, both across services as well as within our DataFusion based engine.

For more background on tracing, this blog seems to give a reasonable overview: https://signoz.io/blog/distributed-tracing-span/

Describe the solution you'd like

I would like to make it easy for people to add DataFusion ExecutionPlan level tracing to their systems as well.

Given the various libraries for generating traces, I don't think picking any particular one to build into DataFusion is a good idea. However, adding some way to walk the ExecutionPlan metrics and emit information that can be turned into traces would be very helpful, I think.
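To make the idea concrete, here is a rough sketch (not a proposal for an exact API; method names and signatures vary across DataFusion versions) of walking an executed plan and pulling out the per-operator metrics that a caller could turn into spans:

```rust
use datafusion::physical_plan::{displayable, ExecutionPlan};

/// Recursively walk an executed plan and print per-operator metrics.
/// A tracing integration would emit spans here instead of printing.
fn report_metrics(plan: &dyn ExecutionPlan, depth: usize) {
    let name = displayable(plan).one_line().to_string();
    if let Some(metrics) = plan.metrics() {
        // Aggregate metrics across partitions before reporting
        let agg = metrics.aggregate_by_name();
        println!(
            "{:indent$}{}: rows={:?} cpu_ns={:?}",
            "",
            name,
            agg.output_rows(),
            agg.elapsed_compute(),
            indent = depth * 2
        );
    }
    for child in plan.children() {
        report_metrics(child.as_ref(), depth + 1);
    }
}
```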

This came up twice recently, so I wanted to get it filed as a ticket.

Describe alternatives you've considered

No response

Additional context

@simonvandel noted in Discord

> I have been playing a bit with https://github.com/wolfpld/tracy and https://docs.rs/tracing-tracy/latest/tracing_tracy/ and thought it would be nice to have some visibility into ExecutionPlans being executed.

It also came up in slack: https://the-asf.slack.com/archives/C04RJ0C85UZ/p1709051125059619

> But I want to see traces from which I can calculate how much time was spent executing the queries and how much time was spent on data transfer from the target storage system. It would be good if you could guide me on how to get these metrics.

alamb commented 6 months ago

The way we get spans in InfluxDB is to use the metrics, which have start/stop times as well as counters for the rows produced and various other things.

The DataFusion metrics --> Jaeger span exporter for InfluxDB 3.0 can be found here: https://github.com/influxdata/influxdb/blob/8fec1d636e82720389d06355d93245a06a8d90ad/iox_query/src/exec/query_tracing.rs#L93-L112

It uses our own span generation system, as the Rust ecosystem didn't have a standard library that we could find when we originally wrote it.

One possibility might be to refactor the code out of InfluxDB and make it generic, so that it doesn't depend on our Span type but is instead implemented in terms of some trait (which we could then implement for Span).
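For illustration only (the names here are made up, not a proposed API), such a trait might look roughly like:

```rust
use std::time::SystemTime;

/// Hypothetical trait a metrics-to-trace exporter could be written against,
/// so the exporter does not depend on any particular span implementation.
pub trait SpanRecorder {
    /// Start a child span for one operator in the plan
    fn child(&mut self, operator_name: &str) -> Box<dyn SpanRecorder>;
    /// Record the operator's start/end times, taken from its MetricsSet
    fn set_times(&mut self, start: Option<SystemTime>, end: Option<SystemTime>);
    /// Attach a metric value (e.g. output_rows or elapsed_compute nanoseconds)
    fn set_metadata(&mut self, key: &str, value: u64);
}
```

A downstream system would implement this trait for its own span type and reuse the generic metrics-walking code.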

erratic-pattern commented 5 months ago

The tracing crate and related ecosystem would be a good choice for this. It is full-featured with an easy-to-use and unobtrusive API, and there is a lot of community support for tooling integration and export formats.

I think we could get most of the way there by implementing the following changes:

With these changes, all of the data in the existing MetricsSet code can be captured.

For compatibility with code that reads metrics from the existing MetricsSet, we could also implement a Subscriber and/or Layer that calculates these metrics and produces a MetricsSet. However, a better approach for most users would be to simply install one of the many tracing ecosystem crates, which provide subscribers for a number of tools and data formats.
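As a rough sketch of what such a custom layer could look like (assuming the tracing-subscriber crate; this version only prints each span's elapsed wall-clock time, but a real implementation could fold the data into something like MetricsSet):

```rust
use std::time::Instant;

use tracing::{span, Subscriber};
use tracing_subscriber::{layer::Context, registry::LookupSpan, Layer};

/// Records how long each span was alive.
struct ElapsedLayer;

/// Stored in each span's extensions when the span is created.
struct CreatedAt(Instant);

impl<S> Layer<S> for ElapsedLayer
where
    S: Subscriber + for<'a> LookupSpan<'a>,
{
    fn on_new_span(&self, _attrs: &span::Attributes<'_>, id: &span::Id, ctx: Context<'_, S>) {
        if let Some(span) = ctx.span(id) {
            span.extensions_mut().insert(CreatedAt(Instant::now()));
        }
    }

    fn on_close(&self, id: span::Id, ctx: Context<'_, S>) {
        if let Some(span) = ctx.span(&id) {
            if let Some(created) = span.extensions().get::<CreatedAt>() {
                // A real layer would aggregate this into per-operator metrics
                println!("span {} lived for {:?}", span.name(), created.0.elapsed());
            }
        }
    }
}
```

It would be installed like any other layer, e.g. via tracing_subscriber::registry().with(ElapsedLayer).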

Some benefits compared to the current metrics implementation:

Downsides:

alamb commented 5 months ago

Another downside of tracing is that not all systems use that ecosystem, so it might require some extra logic to use tracing rather than metrics.

But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it might look like in practice / how much work and change would be required.

erratic-pattern commented 5 months ago

> Another downside of tracing is that not all systems use that ecosystem, so it might require some extra logic to use tracing rather than metrics.

I think this is where custom Subscribers could be useful for generating DataFusion-native reporting and metrics. Having a Subscriber that generates the existing MetricsSet should be possible. From the user's side, there is no need to include tracing-related dependencies if DataFusion is configured with the built-in Subscribers.

> But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it might look like in practice / how much work and change would be required.

Agreed. What would be a good scope for a POC that is both quick to implement and broad enough to cover all the related cases? Maybe a subset of operators/streams that are used by a specific, but non-trivial, query.

The existing metrics code can be left as-is until we reach a point in the implementation where we're confident that tracing can replace that functionality. Also, I think a lot of it will continue to be used for generating DataFusion-native metrics.

erratic-pattern commented 4 months ago

After spending some time with the optimizer, I think it would be a good candidate to PoC the tracing crate.

erratic-pattern commented 4 months ago

create_physical_plan would be a good follow-up, to PoC instrumentation of async code.
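For example, a PoC could wrap planning in a span roughly like this (plan_query is a made-up helper, not an existing DataFusion API; the point is that #[instrument] keeps the span attached across .await points):

```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::physical_plan::ExecutionPlan;
use datafusion::prelude::SessionContext;
use tracing::instrument;

/// Made-up helper: one span covers logical planning and physical planning,
/// including time spent across `.await` points.
#[instrument(skip_all, fields(query = %sql))]
async fn plan_query(ctx: &SessionContext, sql: &str) -> Result<Arc<dyn ExecutionPlan>> {
    let df = ctx.sql(sql).await?;
    df.create_physical_plan().await
}
```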

aditanase commented 3 months ago

Hi all, thanks for adding this and for investigating the tracing crate. I'd like to suggest being a bit more specific about the goals of adding tracing before jumping in with both feet :). Maybe I can pitch in some use cases to help with this.

My team is prototyping a distributed engine on top of Ballista. Since Ballista doesn't yet have a great UI, we started looking at adding some end-to-end tracing (think external client -> Flight SQL query -> scheduler -> enqueue job -> executors -> DF engine). As we realised there is currently no tracing in either project, we quickly found this issue.

I think the tracing crate, together with some of the community subscribers (e.g. opentelemetry stack) can solve this problem, even though there are a number of challenges:

To that end, I'd like to understand whether reimplementing metrics on top of tracing is really what this issue is about, or whether it is just an attempt at consolidating some of the timing / metrics bookkeeping.

Based on my experience with other systems (mostly on the JVM, building and tuning Spark / Kafka deployments), tracing and metrics work really well together, but they are rarely conflated. This goes especially for core internal metrics that are used by the query engine (e.g. explain analyze, cost-based optimizations, building a nice UI for the scheduler), as opposed to tracing, which is typically done through sampling (tracing everything at a certain level becomes expensive quickly), involves a lot of user input (extra context, combined with app metrics), and is configurable through the log level.

My suggestion would be to decouple adding tracing (as a tool for people who are monitoring / optimizing engines built on top of DF) from the core metrics refactoring. For metrics specifically, there are other crates with more targeted concepts (counters, gauges, histograms) that have some integration with tracing, in order to propagate the current span context as metric labels.

Lastly, if there is not a lot of work already started here: I've already started playing around with some of the suggestions in this thread (adding instrument to execute, instrumenting streams and async blocks, etc.) and I'd be interested in contributing to this track, especially some of the lessons learned around tracing async code and streams.
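As a minimal sketch of what instrumenting a stream could look like (TracedStream is a made-up name; a real version would wrap the SendableRecordBatchStream returned from ExecutionPlan::execute):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::Stream;
use tracing::Span;

/// Minimal sketch: enter a tracing span on every poll of the wrapped stream,
/// so the time spent producing each item is attributed to that span.
struct TracedStream<S> {
    inner: S,
    span: Span,
}

impl<S: Stream + Unpin> Stream for TracedStream<S> {
    type Item = S::Item;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        // The guard makes the span "current" for the duration of this poll
        let _guard = this.span.enter();
        Pin::new(&mut this.inner).poll_next(cx)
    }
}
```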

alamb commented 3 months ago

Since the internal metrics in DataFusion -- aka https://docs.rs/datafusion/latest/datafusion/physical_plan/metrics/index.html -- already have start/stop timestamps on them, we have found it relatively straightforward to convert them to "tracing" spans -- the link to the code that does so is above:

https://github.com/apache/datafusion/issues/9415#issuecomment-1973719267

I am not clear on what additional benefit a more direct tracing integration in DataFusion would provide, but I may be missing something.