I understand the desire here. I wonder if a better way to do it would be to use a `SentryHandler` and the logging framework.
I've just raised a PR against nameko (https://github.com/onefinestay/nameko/pull/313) that loads `LOGGING` config on service startup.
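For context, applying such a `LOGGING` block amounts to a `logging.config.dictConfig` call once the YAML has been parsed. A minimal sketch (the filename and key handling here are assumptions, not the PR's exact code):

```python
import logging.config

import yaml

# Load the service config and apply its LOGGING key via dictConfig.
with open("config.yaml") as fh:
    config = yaml.safe_load(fh)

if "LOGGING" in config:
    logging.config.dictConfig(config["LOGGING"])
```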
The following `config.yaml` configures `nameko_sentry` and adds a logging `SentryHandler` at the same time:
```yaml
SENTRY:
  DSN: &sentry_dsn https://user:pass@app.getsentry.com/12345
  CLIENT_CONFIG: &sentry_client_config
    site: 'Test Site'

AMQP_URI: amqp://guest:guest@localhost/

LOGGING:
  version: 1
  disable_existing_loggers: false
  formatters:
    simple:
      format: "%(levelname)s:%(name)s - %(message)s"
  handlers:
    console:
      class: logging.StreamHandler
      formatter: simple
    sentry:
      <<: *sentry_client_config
      dsn: *sentry_dsn
      class: raven.handlers.logging.SentryHandler
    sentry_restricted:
      <<: *sentry_client_config
      dsn: *sentry_dsn
      level: WARNING  # restricted to WARNING and above
      class: raven.handlers.logging.SentryHandler
  loggers:
    sentry:
      level: INFO
      handlers: [sentry]
      propagate: False
  root:
    level: INFO
    handlers: [console, sentry_restricted]
```
With this particular config, warnings and above in any logger go to sentry, and there's a specific `sentry` logger that sends info level and above, e.g.
```python
import logging

logging.getLogger("app").info("logs to console only")
logging.getLogger("sentry").info("logs to sentry only")
logging.getLogger("nameko").warning("logs to console and sentry")
```
The only snag with the above is that the built-in `raven.handlers.logging.SentryHandler` is incompatible with eventlet. It works, but you get a traceback on teardown. This can be fixed by subclassing the built-in handler to force it to use the eventlet HTTP transport:
```python
from raven.handlers.logging import SentryHandler as RavenSentryHandler
from raven.transport import EventletHTTPTransport


class SentryHandler(RavenSentryHandler):
    """SentryHandler that always uses the eventlet-compatible HTTP transport."""

    def __init__(self, *args, **kwargs):
        kwargs['transport'] = EventletHTTPTransport
        super(SentryHandler, self).__init__(*args, **kwargs)
```
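The `sentry` and `sentry_restricted` handlers in the `LOGGING` block above would then point their `class` entries at this subclass, e.g. `class: myapp.logging.SentryHandler` (a hypothetical module path) instead of `raven.handlers.logging.SentryHandler`.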
If this is a useful solution to your problem, we could add the above subclass into `nameko_sentry`.
Using the logging module seems like a more natural approach.
There is one other case for having direct access to the raven client. Typically, during an exception or message, I'll merge in the user context like so:

```python
self.client.context.merge({'user': player_id})
```

This then populates the user detail in sentry. It could probably be handled by exposing a method in the dependency to merge context.
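Something like this, perhaps (a rough sketch, assuming a dependency provider that holds a raven `Client` the way `nameko_sentry` does; the names and config keys are illustrative rather than the library's actual API):

```python
from nameko.extensions import DependencyProvider
from raven import Client


class SentryReporter(DependencyProvider):
    """Sketch: inject a helper that merges user context into the raven client."""

    def setup(self):
        sentry_config = self.container.config["SENTRY"]
        self.client = Client(
            dsn=sentry_config["DSN"], **sentry_config.get("CLIENT_CONFIG", {})
        )

    def get_dependency(self, worker_ctx):
        def merge_context(data):
            # e.g. self.sentry({'user': player_id}) inside a service method
            self.client.context.merge(data)

        return merge_context
```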
Using `context.merge` before sending an exception doesn't seem quite right. The `client` object is shared, so if you have exceptions in multiple concurrent workers there's no guarantee that the merge will happen at the correct time.

It would be better to pass the user information when you call `captureException`. As it stands that requires a small modification to nameko-sentry, or a subclass. Using the logging handler, though, it looks like you can just provide it using the `extra` kwarg.
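For example (a sketch; `player_id` is illustrative, and exactly how the value surfaces in sentry depends on the raven version):

```python
import logging

logger = logging.getLogger("sentry")
player_id = "player-123"  # illustrative

# Keys passed via `extra` are attached to the log record; raven's
# SentryHandler forwards unrecognised record attributes as event data.
logger.warning("player did something strange", extra={"player_id": player_id})
```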
As an aside, are you keeping your `user_id` in the `data` attribute of the `worker_ctx`? There's a general-purpose solution whereby all information in the "context data" gets passed into the sentry client.
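Roughly along these lines, presumably (a sketch only; the actual behaviour lives in #7, and the key names below are assumptions):

```python
def merge_worker_context(client, worker_ctx):
    """Sketch: pass nameko worker context data into the raven client's context."""
    user_keys = ("user_id", "user", "username", "email")  # assumed key names
    context_data = dict(worker_ctx.data)

    # Anything user-like goes into the sentry "user" context; everything else
    # rides along as extra data.
    user = {k: v for k, v in context_data.items() if k in user_keys}
    client.context.merge({"user": user, "extra": context_data})
```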
This is addressed by #7.
As of nameko 2.5.1, all worker lifecycle methods happen within the same greenthread, meaning that raven's thread locals keep worker contexts isolated. As a result we can safely expose the client to the worker.
Also note that #7 adds the ability to automatically pull "user" type data out of nameko's `WorkerContext.context_data` and add it to the appropriate raven context.
I'll write up some docs and cut a release soon.
Would it be possible to add a `get_dependency` method to return the raven client? For times when you want to send a non-exception message:

```python
client.captureMessage('Somebody did something sorta strange.')
```
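Presumably that's a one-line `get_dependency` (a sketch continuing the provider shape from earlier in the thread, not the library's actual code):

```python
from nameko.extensions import DependencyProvider


class SentryReporter(DependencyProvider):
    # setup() as in the earlier sketch, creating self.client

    def get_dependency(self, worker_ctx):
        # Hand the raw raven client to the service; a worker can then call
        # self.sentry.captureMessage(...) or captureException(...) directly.
        return self.client
```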