DataDog / dd-trace-py

Datadog Python APM Client
https://ddtrace.readthedocs.io/

Best Practices for Long Running tasks #10118

Open phrfpeixoto opened 4 months ago

phrfpeixoto commented 4 months ago

Summary of problem

Not a problem. I'm looking for best practices for tracing long-running tasks. I'm very sorry if this is not the right place to ask such questions, but I couldn't find anywhere else. Feel free to close the issue if that's the case.

The issue I'm trying to solve is that I have a few very long-running Celery tasks (each essentially one long loop), and the resulting traces usually grow so big that they are dropped. I'd like to explore ways to instrument my tasks so each one is broken up into multiple traces, ideally with every trace carrying a tag containing the task ID so I can correlate them later in Datadog's dashboard.
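Something like this sketch is what I'm imagining (the chunk size, span/tag names, and `process_item` are placeholders I made up; it assumes a bound task so the task ID is available via `self.request.id`):

```python
from celery import Celery
from ddtrace import tracer

app = Celery("tasks")

@app.task(bind=True)
def long_running_task(self, items):
    chunk_size = 500  # placeholder batch size
    for start in range(0, len(items), chunk_size):
        # child_of=None starts a brand-new trace instead of adding more
        # spans to the (huge) trace the Celery integration opened.
        with tracer.start_span(
            "long_task.chunk", child_of=None, activate=True
        ) as span:
            # Tag every chunk-trace with the task ID so they can be
            # correlated later in the Datadog UI.
            span.set_tag("celery.task_id", self.request.id)
            span.set_tag("chunk.start_index", start)
            for item in items[start:start + chunk_size]:
                process_item(item)  # placeholder for the real work
```

If I understand `start_span` correctly, passing `child_of=None` makes each chunk the root of a fresh trace rather than another child of the currently active span, but I'd appreciate confirmation that this is the intended approach.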

Which version of dd-trace-py are you using?

ddtrace==2.6.3
celery==5.3.5

Which version of pip are you using?

pip 23.2.1

What is the result that you get?

Traces grow so large that they get dropped. When they do manage to get submitted, they are very large and nearly impossible to navigate.

What is the result that you expected?

A way/example to programmatically create and submit a new trace within the scope of a running Celery task.

emmettbutler commented 4 months ago

Thanks for asking, @phrfpeixoto. We'll think on it and provide some sort of custom instrumentation example code.
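In the meantime, partial flushing might take some pressure off: when enabled, the tracer submits finished spans of a still-open trace in batches instead of buffering the whole trace in memory until the root span closes. A minimal sketch of the relevant settings (the threshold value here is illustrative, not a recommendation):

```python
# Partial flushing is controlled by documented dd-trace-py environment
# variables, normally set in the environment that launches the worker.
import os

os.environ["DD_TRACE_PARTIAL_FLUSH_ENABLED"] = "true"   # flush finished spans of open traces
os.environ["DD_TRACE_PARTIAL_FLUSH_MIN_SPANS"] = "300"  # flush once this many spans have finished

import ddtrace.auto  # noqa: E402  (must run after the variables are set)
```

Note that this helps with traces being dropped before submission, but a single enormous trace will still be painful to navigate in the UI; splitting the work into multiple traces, as described above, addresses that part.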

ymguerra commented 3 months ago

Hello, I am also facing the same problem. Is there any solution?

mintusah25 commented 2 months ago

I am also facing a similar problem, but for a Java-based application. In my case the trace is available in Datadog, but the web UI is unable to load it because around 100K spans are linked to it.

What I'm running