itamarst / eliot

Eliot: the logging system that tells you *why* it happened
https://eliot.readthedocs.io
Apache License 2.0

Allow destinations to receive the unserialized message #270

Open - mithrandi opened this issue 7 years ago

mithrandi commented 7 years ago

Our use case for this is sending logged tracebacks to Sentry; the Sentry client (raven-python) wants the exception / traceback object in order to serialize the stack information. A per-destination flag (or rather, two different types of destination?) would allow a Sentry log destination to opt in to this.
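
Roughly, the kind of destination we'd like to be able to write looks something like the sketch below. To be clear, this is made up: the exc_info field on the message and the opt-in mechanism don't exist in Eliot today, and the DSN is a placeholder.

import eliot
import raven

client = raven.Client("https://<key>@sentry.example.invalid/1")  # placeholder DSN

def sentry_destination(message):
    # Hypothetical: an unserialized message could still carry the live
    # exception info, so raven can extract the stack frames itself.
    exc_info = message.get("exc_info")  # made-up field, not real Eliot API
    if exc_info is not None:
        client.captureException(exc_info=exc_info)

eliot.add_destinations(sentry_destination)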

mithrandi commented 7 years ago

Unfortunately, the feature as described doesn't actually fully satisfy this request: tracebacks are already transformed before being logged to Eliot, rather than during Eliot's message serialization. See https://github.com/ScatterHQ/eliot/blob/190284e97ab12616ebc6bf8722e2a05a30bde847/eliot/_traceback.py#L86

itamarst commented 7 years ago

This is next on my list, but I'm about out of time for today, so I'll probably do a release with the current fixes. Hopefully I can get to this next week.

itamarst commented 7 years ago

Perhaps write_traceback should be using the serialization layer, rather than doing serialization itself...

itamarst commented 7 years ago

I've put a sketch of a fix in https://github.com/ScatterHQ/eliot/tree/unserialized-message-in-destination-270 - what do you think?

  1. Does the backwards-incompatible change to the unserialized traceback message matter? I suspect not.
  2. Does it solve your problem?
  3. Perhaps we want a nicer way to add a combination of serialized and unserialized destinations, so that they can both get buffered messages from startup... or maybe destinations could mark what they want? (Rough sketch below.)
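
For item 3, something along these lines is what I have in mind. Only a sketch: the wants_unserialized attribute is invented for illustration and is not part of the branch.

import json
import sys

import eliot

def serialized_destination(message):
    # Receives the usual serialized dict, suitable for JSON output.
    sys.stdout.write(json.dumps(message) + "\n")

def unserialized_destination(message):
    # Would receive the pre-serialization message, still holding live objects.
    pass

unserialized_destination.wants_unserialized = True  # invented marker attribute
eliot.add_destinations(serialized_destination, unserialized_destination)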

mithrandi commented 7 years ago

  1. I don't think so, unless it breaks eliottree (but it may be easier to just fix eliottree if it does).
  2. It looks like it does.
  3. Not sure about this, I don't think it matters for anything we're doing currently.

itamarst commented 6 years ago

Some concerns: how do destinations that get unserialized objects interact with the BufferingDestination that ensures log messages aren't lost at startup time? Does that result in a memory leak?

Alternative solution: a traceback-specific API, eliot.add_traceback_handler or something.

How do you currently work around this limitation in Eliot? A wrapper around write_traceback that also calls into Sentry? Why is that workaround insufficient?

mithrandi commented 6 years ago

Well, it's only a "leak" in the same sense that it currently is. I guess the concern is that unserialized log messages may be larger and keep resources like sockets alive unexpectedly.

Currently we don't have a workaround for this; we just live with it: that is, we don't get stack traces in Sentry at all, and have to go look them up in the Eliot logs if needed.

itamarst commented 6 years ago

Do you want only tracebacks (eliot.write_traceback) in Sentry? Or failed actions too?

If the former, you could do this every time:

try:
    ...
except:
    eliot.write_traceback()
    sentry.report_error() # or whatever

Or wrap the last two in a utility function. Would this work or not? I'm trying to explore the solution space and requirements.
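
i.e. something along these lines; just a sketch, where the helper name and the Sentry client setup are placeholders rather than an existing API:

import eliot
import raven

sentry = raven.Client("https://<key>@sentry.example.invalid/1")  # placeholder DSN

def report_traceback():
    # Log the current exception to Eliot, then to Sentry; raven reads
    # sys.exc_info() itself when called inside an except block.
    eliot.write_traceback()
    sentry.captureException()

def do_something():
    raise RuntimeError("example failure")  # stand-in for real work

try:
    do_something()
except:
    report_traceback()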

mithrandi commented 6 years ago

Actually, I think I was a bit confused about what we were doing. It seems we already have our own write_traceback (actually writeFailure) wrapper, which logs the failure to Twisted's logging system, and then a Twisted log observer sends failures on to Sentry. I think this issue came up because hypothetically we would want to get rid of Twisted logging, but I guess we could still keep the writeFailure wrapper.

However, this only works for code that uses our wrapper, not for code elsewhere that calls writeFailure or write_traceback directly.
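
For reference, the current arrangement is roughly the following; simplified, with the names and DSN as placeholders:

import eliot
import raven
from twisted.python import log

sentry = raven.Client("https://<key>@sentry.example.invalid/1")  # placeholder DSN

def write_failure(failure):
    # Our wrapper: log the Failure to Eliot and to Twisted's logging system.
    eliot.writeFailure(failure)
    log.err(failure)

def sentry_observer(event):
    # Old-style Twisted log events carry the Failure under the "failure" key.
    failure = event.get("failure")
    if event.get("isError") and failure is not None:
        sentry.captureException(
            exc_info=(failure.type, failure.value, failure.getTracebackObject())
        )

log.addObserver(sentry_observer)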