Closed by trtg 6 years ago
Can you post an example that reproduces the issue?
I'll try to distill a toy example, but it looks like this is probably the fault of my hacked-together "middleware". With uwsgi running in prethreaded mode, things seem to work OK so far.
Closing as it's been idle for 4 months.
I've also just run into a situation where replacing uwsgi with gunicorn made my spans go through, but only the spans created by Flask had been affected; spans reported from separate background threads were working flawlessly all along.
If I run my app using werkzeug.serving.run_simple, all spans are propagated just fine and my traces/spans are error-free. If I run the app under uwsgi and initialize the tracer in uwsgi.post_fork_hook, I see the client claiming that it has reported the root span, but the collector never appears to receive it (I added logging where the collector receives spans). With the root span missing, all the spans one level below the root are marked as having an invalid parent, and the trace is broken.
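For reference, my post-fork initialization looks roughly like this (a minimal sketch using the uwsgidecorators.postfork decorator; the service name, sampler config, and the init_tracer function name are placeholders):

```python
from jaeger_client import Config
from uwsgidecorators import postfork

tracer = None  # populated after fork so each worker gets its own reporter


@postfork
def init_tracer():
    """Initialize the Jaeger tracer in each uwsgi worker after forking."""
    global tracer
    config = Config(
        config={
            'sampler': {'type': 'const', 'param': 1},  # placeholder: sample everything
            'logging': True,
        },
        service_name='my-flask-service',  # placeholder service name
    )
    tracer = config.initialize_tracer()
```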
Is there some consideration I'm missing when running Jaeger in a tornado.wsgi.WSGIApplication under uwsgi? I suspect it may be an issue with my hacked-together middleware, which maintains a stack of the ongoing trace in thread-local storage à la py_zipkin. I understand how putting the trace context in TLS could be a problem in a multithreaded environment with concurrent requests, but this issue appears even when I'm sending only a single request at a time as a proof of concept.
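Conceptually the middleware does something like the following (a rough sketch only; TracingMiddleware and _span_stack are hypothetical names, not the actual code):

```python
import threading

_local = threading.local()  # per-thread stack of active spans


def _span_stack():
    # Lazily create the stack for whichever thread is handling the request.
    if not hasattr(_local, 'spans'):
        _local.spans = []
    return _local.spans


class TracingMiddleware(object):
    """WSGI middleware that keeps the ongoing trace in thread-local storage."""

    def __init__(self, app, tracer):
        self.app = app
        self.tracer = tracer

    def __call__(self, environ, start_response):
        span = self.tracer.start_span(environ.get('PATH_INFO', 'request'))
        _span_stack().append(span)
        try:
            return self.app(environ, start_response)
        finally:
            _span_stack().pop()
            span.finish()
```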
Or am I missing some subtlety related to how the tracer flushes, as discussed in #50?
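If it is the flush behaviour, one thing I could try is closing the tracer explicitly when a worker shuts down, roughly like this (a sketch reusing the module-level tracer from the snippet above; I'm assuming tracer.close() flushes the reporter's queued spans):

```python
import atexit
import time


def _shutdown_tracer():
    # Assumption: tracer.close() asks the reporter to flush queued spans.
    # Sleep briefly so the background sender can drain before the process exits.
    if tracer is not None:
        tracer.close()
        time.sleep(2)


atexit.register(_shutdown_tracer)
```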