huangsam opened this issue 5 years ago
We're seeing a similar issue: we're also unable to capture some exceptions in `except` blocks. Calls to `logger.warning`, `logger.error`, `client.captureException`, and `client.captureMessage` do not end up making it to Sentry if they are triggered from within an `except` block. In other circumstances these functions behave as expected, but the common thread among our missing events is that they're all triggered from `except` blocks.
Any information about this issue is much appreciated!
Does the app shut down afterwards?
@mitsuhiko my team is running ETL scripts in the background with cron-scheduled jobs. They tend to finish after a definite time, but they're not applications per se.
@mitsuhiko My team is also running ETL scripts, among other things, in the background with cron-scheduled Celery jobs. So far we've only observed the issue in these contexts.
Can you share the configuration?
@mitsuhiko Our configuration looks something like this:

    import logging
    import os

    logger = logging.getLogger(__name__)

    def configure_logging_sentry():
        """Create and attach a Sentry logging handler."""
        sentry_dsn = os.environ.get("SENTRY_DSN")
        if sentry_dsn is not None:
            import raven
            import raven.handlers.logging

            client = raven.Client(sentry_dsn, auto_log_stacks=True)
            handler = raven.handlers.logging.SentryHandler(client)
            handler.setLevel(logging.WARNING)
            logging.root.addHandler(handler)
            logger.info("configured Sentry logging handler")
And triggering the exception is something straightforward like:
    try:
        raise ValueError("Test exception")
    except Exception:
        logger.warning(
            "Warning text",
            exc_info=True,
        )
We've tried various combinations of including/omitting `exc_info=True` and `auto_log_stacks=True` with no observable differences. If the exception is not caught, it appears in Sentry; however, when it's caught and a `logger.warning()` is emitted, we don't see it in Sentry.
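The logging side of this can be checked without Sentry at all: attach a stub handler (stdlib-only, names here are illustrative) and confirm that a record fired from inside an `except` block reaches handlers with `exc_info` populated. Since the standard library delivers the record in this case, a missing event would point at the Sentry transport rather than the logging wiring:

```python
import logging

class CollectingHandler(logging.Handler):
    """Stub handler that records every LogRecord it receives."""
    def __init__(self):
        super().__init__(level=logging.WARNING)
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("repro")
logger.setLevel(logging.DEBUG)
handler = CollectingHandler()
logger.addHandler(handler)

try:
    raise ValueError("Test exception")
except Exception:
    logger.warning("Warning text", exc_info=True)

record = handler.records[0]
print(record.exc_info is not None)  # True: the traceback reaches the handler
```

If this prints `True` in your environment (it should), the drop is happening after the handler, inside the client.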
I think that is a different issue. That sounds like the logging system does not properly function for you.
@untitaker slightly related to this issue, but I think an explicit handler on a single logger is a use case that the new SDK does not handle properly right now. We probably need to find a way to elevate specific loggers to send on different levels.
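To illustrate the use case with the old handler-based approach: one important logger can get its own, lower threshold while everything else stays at WARNING. A stdlib-only sketch, with a stub class standing in for raven's `SentryHandler` (logger names are hypothetical):

```python
import logging

class StubSentryHandler(logging.Handler):
    """Stand-in for a Sentry handler; just collects records."""
    def __init__(self, level):
        super().__init__(level=level)
        self.records = []

    def emit(self, record):
        self.records.append(record)

# Elevate one specific logger so it reports at INFO, while other loggers
# fall back to the root logger's default WARNING threshold.
critical_etl = logging.getLogger("etl.billing")
critical_etl.setLevel(logging.INFO)
elevated = StubSentryHandler(level=logging.INFO)
critical_etl.addHandler(elevated)

critical_etl.info("row count mismatch")       # captured by the elevated handler
logging.getLogger("etl.other").info("noise")  # dropped: effective level is WARNING

print(len(elevated.records))  # 1
```

With a single global handler on the root logger, per-logger elevation like this is not expressible, which is the gap being discussed.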
If we replace the call to `logger.warning` in the above snippet with something like `client.captureException` or `client.captureMessage`, then it also does not make it to Sentry. However, if we call `logger.warning` or `client.captureMessage` outside of the `except` block (and do not trigger an exception), then we see the warning/message in Sentry.
Interestingly, this case results in missed messages as well:

    client.captureMessage("Test")
    try:
        raise ValueError()
    except Exception:
        pass
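One hedged guess for short-lived cron jobs: raven's default transport sends events from a background thread, so a process that exits quickly can drop events that were queued but never sent. The failure mode is easy to demonstrate with a toy queue-backed worker (all names here are illustrative, not raven's API); the fix is to flush before exit:

```python
import queue
import threading

class BackgroundSender:
    """Toy async transport: capture() enqueues, a worker thread 'sends'."""
    def __init__(self):
        self.queue = queue.Queue()
        self.sent = []
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def _run(self):
        while True:
            event = self.queue.get()
            self.sent.append(event)
            self.queue.task_done()

    def capture(self, event):
        self.queue.put(event)

    def flush(self):
        # Block until every queued event has been processed -- the step a
        # short-lived cron script must not skip before exiting.
        self.queue.join()

sender = BackgroundSender()
sender.capture("Test")
sender.flush()  # without this, the daemon thread can die with events unsent
print(sender.sent)  # ['Test']
```

If this is the cause, the events are being captured but the interpreter exits before the transport drains its queue; it would also explain why only some events go missing.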
@mitsuhiko we apply this decorator to every ETL function that we invoke:

    import traceback
    from functools import wraps
    from json import load
    from os import path

    def alert(func):
        """Decorator to alert admins if the function raises an exception."""
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception:
                # Sentry: capture the exception
                client = sentry_con()
                client.captureException()
                # Email the traceback to the admins
                err_path = path.basename(traceback.extract_stack()[2].filename)
                err_data = traceback.format_exc()
                # Assumes that a settings.json file exists in the same
                # directory as the module.
                with open(path.join(path.dirname(__file__), 'settings.json'), 'r') as settings:
                    admins = load(settings)['admins']
                mail = mail_con()
                mail.send_message(
                    subject="Error while running {}".format(err_path),
                    to=admins,
                    body=err_data,
                )
        return wrapper
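One observation about the decorator above: it swallows the exception, so the job exits "successfully" immediately after `captureException()` is called, which combines badly with any asynchronous send. A stdlib-only variant (with the Sentry/email calls stubbed out as a hypothetical `capture` callable) that also re-raises, so the scheduler still sees the failure:

```python
import functools
import traceback

def alert(capture):
    """Decorator factory: report an exception via `capture`, then re-raise.

    `capture` is a stand-in for the real Sentry/email reporting calls.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                capture(traceback.format_exc())
                raise  # let cron/Celery see the failure too
        return wrapper
    return decorator

reports = []

@alert(reports.append)
def etl_job():
    raise ValueError("bad row")

try:
    etl_job()
except ValueError:
    pass

print(len(reports))  # 1
```

Re-raising also means the uncaught-exception path (which the reporter says does reach Sentry) still fires.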
I have a similar issue to @huangsam's. I forked your code, but I still see some dropped events.
We get some of our `TimeoutError` and `ConnectionError` instances from our `client.captureException` call in an `except` block, but not always. I've been running `ping` against `google.com` and `sentry.io` from the server, and there seems to be no dropping of packets. This concerns the simple Python integration, not the Flask/Django integration.