mithrandi opened this issue 7 years ago
Unfortunately the described feature doesn't actually fully satisfy this feature request; tracebacks are already transformed before being logged to Eliot, rather than being transformed during Eliot message serialization. See https://github.com/ScatterHQ/eliot/blob/190284e97ab12616ebc6bf8722e2a05a30bde847/eliot/_traceback.py#L86
This is next on my list, but I'm about out of time for today, so I will probably do a release with the current fixes. Hopefully I can get to this next week.
Perhaps `write_traceback` should be using the serialization layer, rather than doing serialization itself...
I put a sketch of a fix in https://github.com/ScatterHQ/eliot/tree/unserialized-message-in-destination-270 - what do you think?
Some concerns: how do destinations that get unserialized objects interact with the `BufferingDestination` that ensures log messages aren't lost at startup time? Does that result in a memory leak?
Alternative solution: a traceback-specific API, `eliot.add_traceback_handler` or something.
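To make that concrete, here is a minimal sketch of how such an API might be used for the Sentry case, assuming raven-python; `add_traceback_handler` does not exist in Eliot today, and the handler signature shown is an assumption:

```python
from raven import Client

sentry = Client()  # DSN read from the SENTRY_DSN environment variable

def report_to_sentry(exc_info):
    # Assumed handler signature: called with the (type, value, traceback)
    # tuple for every traceback written via eliot.write_traceback().
    sentry.captureException(exc_info=exc_info)

# Hypothetical registration call from the proposal above; not a real Eliot API:
# eliot.add_traceback_handler(report_to_sentry)
```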
How do you currently work around this limitation in Eliot? A wrapper around `write_traceback` that also calls into Sentry? Why is the workaround insufficient?
Well, it's only a "leak" in the same sense that it currently is. I guess the concern is that unbuffered log messages may be larger and keep resources like sockets alive unexpectedly.
Currently we don't have a workaround for this, we just live with it: that is, we don't get stack traces in Sentry at all. We have to go look them up from the Eliot logs if needed.
Do you want only tracebacks (`eliot.write_traceback`) in Sentry? Or failed actions too?
If the former, you could do this every time:

```python
try:
    ...
except:
    eliot.write_traceback()
    sentry.report_error()  # or whatever
```

Or wrap the last two calls in a utility function. Would this work or not work? I'm trying to explore the solution space and requirements.
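For instance, a minimal sketch of such a utility function, assuming raven-python's `captureException`; the function name and module-level client are illustrative, not from the issue:

```python
import sys

import eliot
from raven import Client

sentry = Client()  # DSN read from the SENTRY_DSN environment variable

def write_traceback_and_report():
    """Log the current exception to Eliot and also report it to Sentry.

    Must be called from inside an except block, so that sys.exc_info()
    still refers to the active exception.
    """
    eliot.write_traceback()
    sentry.captureException(exc_info=sys.exc_info())
```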
Actually I think I was a bit confused about what we were doing. It seems we do already have our own `write_traceback` (actually `writeFailure`) which logs the failure to Twisted's logging system, and then we have a Twisted log observer that sends failures on to Sentry. I think this issue came up because hypothetically we would want to get rid of Twisted logging, but we could still keep the `writeFailure` wrapper, I guess.
However, this would only work for code that uses our wrapper, not code elsewhere that calls `writeFailure` or `write_traceback`.
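For reference, the kind of Twisted log observer described above could look roughly like this; a sketch assuming the legacy `twisted.python.log` observer API and raven-python, with the observer name being illustrative:

```python
from raven import Client
from twisted.python import log

sentry = Client()

def sentry_failure_observer(event):
    # Legacy twisted.python.log observers receive a dict per event; error
    # events carry isError=1 and a 'failure' key holding a Failure object.
    if event.get("isError") and event.get("failure") is not None:
        failure = event["failure"]
        sentry.captureException(
            exc_info=(failure.type, failure.value, failure.getTracebackObject())
        )

log.addObserver(sentry_failure_observer)
```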
Our use case for this is sending logged tracebacks to Sentry; the Sentry client (raven-python) wants the exception/traceback object for serializing the stack information. A per-destination flag (or rather, two different types of destination?) would allow a Sentry log destination to opt in to this.
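To illustrate the idea, a purely hypothetical sketch of such an opt-in destination; neither the `wants_unserialized` flag nor an `exc_info` field on messages exists in Eliot today, they are illustrative only:

```python
from raven import Client

sentry = Client()

class SentryDestination:
    """Hypothetical destination that opts in to unserialized messages."""

    # Hypothetical flag telling Eliot to deliver raw message objects to
    # this destination instead of fully serialized fields.
    wants_unserialized = True

    def __call__(self, message):
        # Under this proposal, a traceback message could still carry the
        # live exception/traceback objects that raven-python needs.
        exc_info = message.get("exc_info")
        if exc_info is not None:
            sentry.captureException(exc_info=exc_info)

# Registered like any other destination:
# eliot.add_destination(SentryDestination())
```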