miohtama closed this issue 1 year ago
After doing this change, I am hitting another error
failed to send traces to Datadog Agent at http://localhost:8126: HTTP error status 413, reason Request Entity Too Large
I am now trying partial flushes and a lower sample rate to see if this mitigates the issue:
os.environ["DD_TRACER_PARTIAL_FLUSH_ENABLED"] = "true"
# We have a lot (millions) of SQL traces
# Trace only 10% of all
os.environ["DD_TRACE_SAMPLE_RATE"] = "0.1"
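Note that ddtrace reads much of its configuration when it is first imported, so (as a minimal sketch, assuming your ddtrace version honours these exact variable names) the variables need to be set before the first import of ddtrace:

import os

# Configure the environment before ddtrace is imported; settings read at
# import time will not notice later changes.
os.environ["DD_TRACER_PARTIAL_FLUSH_ENABLED"] = "true"
os.environ["DD_TRACE_SAMPLE_RATE"] = "0.1"

import ddtrace  # imported only after the environment is set up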
@miohtama The configuration options for enabling partial flushing are not documented as you describe. I will correct this as a follow-up to your issue.
At the moment, the partial flush and payload size configuration options you found are the mitigations we offer.
For partial flushing, the minimum version requirements are ddtrace v1.1.1 and Datadog Agent v7.25.0.
For payload sizes, Datadog Agent v7.21.0 increased the maximum trace payload size from 10 MB to 50 MB.
I am very interested in your use case of a long-running command line program. Capturing all the work in the run of such a command as a single trace can be valuable. I wonder if you could share more on how you see this data being useful.
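For reference, a minimal sketch of turning partial flushing on programmatically with a ddtrace 1.x tracer (the partial_flush_enabled / partial_flush_min_spans keyword names are an assumption here and may differ, or be environment-variable-only, on other versions):

from ddtrace import tracer

# Ask the tracer to flush chunks of a long-lived trace instead of buffering
# every span until the root span finishes; the span threshold is illustrative.
tracer.configure(
    partial_flush_enabled=True,
    partial_flush_min_spans=500,
)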
Thank you for a prompt and informative reply. I indeed noticed I am still getting more traces dropped than accepted (dropped: 1, accepted: 0) with my example config and ddtrace 0.50.4. I have upgraded ddtrace to 1.2.0 and will see if partial flushes get rid of my errors.
@miohtama Did enabling partial flushing resolve the problem for you?
Hey yes. I managed to solve it with a partial flush. Eventually, I ended up using the following env var hack:
os.environ["DD_TRACER_PARTIAL_FLUSH_ENABLED"] = "true"
os.environ["DD_TRACE_SAMPLE_RATE"] = "0.1"
Though I am facing another issue: nested ddtrace.opentracer.Tracer objects do not seem to work with the Datadog service. I have not yet had time to investigate why this is and whether it is related to partial flushes.
@miohtama if you're still having an issue with nested Tracer objects, please open a new issue describing the problem. Thanks again for the contribution!
Which version of dd-trace-py are you using?
Which version of pip are you using?
poetry
Which version of the libraries are you using?
How can we reproduce your problem?
I have no idea.
What is the result that you get?
On my production server that runs a long-running command-line application with manually inserted traces, sometimes I get a warning
What is the result that you expected?
The warning should hint
I am doing nested traces using the OpenTracing API, so I suspect having a long-running parent trace might cause this.
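To illustrate the shape of the workload (a hypothetical sketch using the OpenTracing API via ddtrace.opentracer; the service and operation names are made up, not taken from the actual application), one parent span stays open for the whole run while a very large number of child spans pile up underneath it:

from ddtrace.opentracer import Tracer, set_global_tracer

# One OpenTracing-compatible tracer for the whole command-line run.
tracer = Tracer(service_name="batch-job")
set_global_tracer(tracer)

# The root span stays open for the entire run, so without partial flushing
# every child span is buffered until the command finishes.
with tracer.start_active_span("run-batch"):
    for index in range(1_000_000):
        with tracer.start_active_span("sql.query") as scope:
            scope.span.set_tag("statement.index", index)
            # ... execute one SQL statement here ...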
Workaround
As a workaround I just poke ddtrace internals directly: