Closed ieure closed 9 months ago
Since statsd sends a payload for every event that happens, I would expect Micrometer Statsd to appear in many thread dumps. But these threads are marked as RUNNABLE, which suggests to me that we'd expect them to complete normally.
If the job does exceed a resource saturation threshold and is crashing, I would expect telemetry to be unavailable like the rest of the service.
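One caveat worth keeping in mind when reading those dumps: RUNNABLE on its own doesn't prove forward progress -- a thread busy-spinning in a loop also reports RUNNABLE. A minimal stdlib-only sketch (not Micrometer code, just an illustration of the thread state):

```java
public class RunnableSpin {
    static volatile boolean stop = false;

    public static void main(String[] args) throws Exception {
        // A thread that spins without ever blocking: it never waits,
        // so it always reports RUNNABLE, yet it makes no progress.
        Thread spinner = new Thread(() -> {
            while (!stop) { /* busy spin */ }
        }, "spinner");
        spinner.start();
        Thread.sleep(200); // give it time to enter the loop
        System.out.println(spinner.getState()); // RUNNABLE
        stop = true;
        spinner.join();
    }
}
```

So a thread repeatedly caught RUNNABLE in the same frame could be spinning rather than completing normally.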
What makes you believe that it is actually stuck in Timer.Sample#stop?
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open.
We've got a system which is instrumented with Micrometer and is experiencing some issues.
We're using Micrometer 1.1.1, publishing metrics to statsd on localhost, which then forwards them to Datadog. The program is a Dropwizard HTTP service, but it also runs a batch process in a background thread several times a day.
What we've observed is that the background process grinds to a halt, and the service consumes 100% of a core's CPU. Restarting the service and batch job allows it to complete. The interactive parts of the service continue to respond normally.
We have general metrics around database and HTTP time/rate, but I also added metrics to diagnose why the batch job is getting wedged -- a counter for the number of work items left to complete, the type of item, the rate of completion, and so forth.
When it gets wedged, we've seen multiple thread dumps where it appears to be stuck in micrometer code. Here's one from earlier today:
And this is one from last week:
The specific code being measured is different -- the first is a DB query which supports a work item (this metric has been in place for ~6 months, since we moved from Dropwizard Metrics + Graphite to Micrometer/statsd/Datadog), while the second is the completion of a work item, a metric I added ~3 weeks ago. In both cases, the thing being measured has completed, but execution is stuck inside the Sample.stop call at the end.

Another thing we've noticed is that there's a correlation between the job wedging and missing data from the new metrics around batch job performance. For example, there's a counter which is set to the total number of work items, then decremented when one completes. When the job wedges, none of this data appears in Datadog. Other metrics from the same host do show up, but the ones specific to this job are missing.
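For reference, the bookkeeping behind that counter is essentially this pattern (a stdlib sketch with an AtomicLong standing in for the Micrometer meter; the names are illustrative, not our actual code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class WorkItemBookkeeping {
    // Stands in for the Micrometer meter; in the real service this value
    // is published to statsd and should appear in Datadog.
    static final AtomicLong itemsRemaining = new AtomicLong();

    public static void main(String[] args) {
        itemsRemaining.set(3);            // set to the total number of work items
        itemsRemaining.decrementAndGet(); // decremented as each item completes
        itemsRemaining.decrementAndGet();
        System.out.println(itemsRemaining.get());
    }
}
```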
I'm looking for suggestions to track down what's going on here. My hypothesis is that something is preventing metrics from reaching statsd, so they're accumulating in an unbounded queue, which causes Sample.stop to hang.

It seems like this might be related to #462 and #354. There's a suggestion there, "[if] the multi thread is important you should use Queues.unboundedMultiproducer" -- however, the docs say nothing at all about this, and I'm not sure how I'd make that change (or whether it's likely to change my situation).
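One wrinkle in that hypothesis: a genuinely unbounded queue shouldn't make the producer hang -- offers succeed immediately and the backlog just grows in memory. A stdlib-only sketch (ConcurrentLinkedQueue standing in for whatever queue the statsd publisher actually uses; this is an assumption, not Micrometer's implementation):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class UnboundedQueueSketch {
    public static void main(String[] args) {
        // With an unbounded queue and a stalled consumer, the producer
        // side never blocks: every offer succeeds and the backlog grows.
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 100_000; i++) {
            queue.offer("metric-line-" + i); // always succeeds, never blocks
        }
        System.out.println(queue.size());
    }
}
```

So if Sample.stop really is hanging, a spin on a full or corrupted queue (as #462/#354 suggest) seems more plausible than a producer simply blocking on backlog.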
Do you have a suggestion for further diagnosing what's happening here?
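One stdlib-only way to firm up the diagnosis: repeatedly sample the suspect thread's stack with ThreadMXBean and check whether the top frame ever changes. A RUNNABLE thread pinned to the same frame across many samples is almost certainly spinning, not just passing through. A sketch (the spinning thread here is a stand-in for the wedged batch thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class StuckFrameDetector {
    static volatile boolean stop = false;

    public static void main(String[] args) throws Exception {
        // Stand-in for the wedged batch thread: spins in one method forever.
        Thread suspect = new Thread(StuckFrameDetector::spin, "suspect");
        suspect.start();
        Thread.sleep(100);

        // Sample the thread's stack several times and compare top frames.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        String firstTop = null;
        boolean pinned = true;
        for (int i = 0; i < 10; i++) {
            ThreadInfo info = mx.getThreadInfo(suspect.getId(), 5);
            StackTraceElement[] stack = info.getStackTrace();
            if (stack.length > 0) {
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                if (firstTop == null) {
                    firstTop = top;
                } else if (!top.equals(firstTop)) {
                    pinned = false;
                }
            }
            Thread.sleep(50);
        }
        System.out.println(pinned ? "pinned: " + firstTop : "frames changed");
        stop = true;
        suspect.join();
    }

    static void spin() {
        while (!stop) { /* busy spin, always the same frame */ }
    }
}
```

The same comparison can be done by hand against a handful of `jstack` dumps taken a few seconds apart: if the batch thread shows an identical stack in every dump while burning a full core, that strongly suggests a spin inside the publisher rather than a transiently-sampled normal call.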