DataDog / dd-trace-py

Datadog Python APM Client
https://ddtrace.readthedocs.io/

Best Practices for Long Running tasks #10118

Open phrfpeixoto opened 1 month ago

phrfpeixoto commented 1 month ago

Summary of problem

Not a problem. I'm looking for best practices for tracing long-running tasks. I'm very sorry if this is not the right place to ask such questions, but I couldn't find anywhere else. Feel free to close the issue if that's the case.

The issue I'm trying to solve: I have a few very long-running Celery tasks (each essentially one long loop), and their traces usually grow so big that they get dropped. I'd like to explore ways to instrument my tasks so each one produces multiple smaller traces instead of a single huge one, ideally with every trace carrying a tag containing the task ID, so I can later correlate them in Datadog's dashboard.

Which version of dd-trace-py are you using?

ddtrace==2.6.3
celery==5.3.5

Which version of pip are you using?

pip 23.2.1

What is the result that you get?

Traces grow so large that they get dropped. When they do manage to get submitted, they are very large and impossible to navigate.

What is the result that you expected?

A way (or an example) to programmatically create and submit a new trace within the scope of a running Celery task.

emmettbutler commented 1 month ago

Thanks for asking, @phrfpeixoto. We'll think on it and provide some sort of custom instrumentation example code.

ymguerra commented 1 day ago

Hello, I'm facing the same problem as well. Is there any solution?