Closed: mvaled closed this issue 8 years ago.
The following pictures show the current status of the failed merges. The first shows the whole queue of merge retries, and the second shows that this task has been retried more than 800 times...
Backlog of merges:
One merge:
That traceback is definitely not from merging (even the beginning of it shows that it's from an HTTP request). Can you give the traceback you're seeing in the actual failed task? It appears to be different.
Also, regarding the above (again, a different area): knowing which plugin is setting that up would be useful, as it's not correctly binding Event data.
Sorry, I think I got the wrong traceback (related to the sentry-gitlab plugin).
The traceback in flower is:
```
Traceback (most recent call last):
  File "/srv/sentry8/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/srv/sentry8/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "/srv/sentry8/src/sentry/src/sentry/tasks/base.py", line 47, in _wrapped
    result = func(*args, **kwargs)
  File "/srv/sentry8/src/sentry/src/sentry/tasks/base.py", line 61, in wrapped
    current.retry(exc=exc)
  File "/srv/sentry8/local/lib/python2.7/site-packages/celery/app/task.py", line 684, in retry
    raise ret
Retry: Retry in 300s: NodeUnpopulated('You should populate node data before accessing it.',)
```
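For context, the `NodeUnpopulated` error comes from Sentry's deferred node storage: event payloads live in a separate node store, and the wrapper object raises if its contents are read before they have been fetched. A minimal sketch of that failure mode (the `NodeData` class and `bind_data` method here are illustrative stand-ins, not Sentry's actual implementation):

```python
class NodeUnpopulated(Exception):
    """Raised when lazy node data is read before being fetched."""

class NodeData:
    """A lazy wrapper: holds only an id until data is explicitly bound."""
    def __init__(self, node_id):
        self.id = node_id
        self._data = None  # not fetched from the node store yet

    def bind_data(self, data):
        self._data = data

    @property
    def data(self):
        if self._data is None:
            raise NodeUnpopulated(
                "You should populate node data before accessing it.")
        return self._data

node = NodeData("9vcAqDXlQt+ykm+SK50qiQ==")
try:
    node.data  # accessed before bind_data, so this raises
except NodeUnpopulated as exc:
    print("caught:", exc)

node.bind_data({"message": "example event"})
print(node.data["message"])
```

Any code path that touches the data before the equivalent of `bind_data` runs would produce the error seen in the retry message above.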
But that doesn't give much information either. It seems the worker is not properly logging the errors into Sentry itself... Any ideas on how to get more info?
Yeah, not overly useful. I have a feeling Celery is obscuring it. I'll play with it locally and see if we can figure it out.
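One likely reason the traceback is so thin: Celery's retry mechanism re-raises a `Retry` exception that wraps the original error, so only the retry machinery's frames get reported. A hedged sketch of how logging the original exception before retrying recovers the real traceback (the `Retry` class and `retrying` decorator below are simplified stand-ins, not Celery's real API):

```python
import logging

logger = logging.getLogger("sentry.tasks")

class Retry(Exception):
    """Stand-in for a retry signal that wraps the real error."""
    def __init__(self, exc):
        self.exc = exc
        super().__init__("Retry in 300s: %r" % (exc,))

def retrying(func):
    """Log the original exception before re-raising as Retry,
    so the real traceback is not swallowed by the retry machinery."""
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            logger.exception("task failed, will retry")  # keeps the traceback
            raise Retry(exc)
    return wrapped

@retrying
def merge_task():
    raise ValueError("You should populate node data before accessing it.")

try:
    merge_task()
except Retry as err:
    original = err.exc  # the underlying error is still reachable here
```

In a real Sentry deployment the equivalent would be adding logging inside the task wrapper in `sentry/tasks/base.py` before `current.retry(exc=exc)` is called.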
Can you confirm the version of Sentry?
Master branch at 4af2e215d5d2307db3434ba57f6dc26c4a3dd1c5.
Still, I tried to do a "manual" merge:
```
$ sentry --config=etc/sentry.conf.py django shell
>>> from sentry.tasks.merge import merge_group
>>> merge_group(from_object_id=1390, to_object_id=1391)
```
It didn't fail, so it's only when I do it via the UI that I get the failing jobs.
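That difference is consistent with how Celery tasks behave: calling the task function directly in the django shell runs it synchronously in that process, while the UI enqueues it through the broker so a worker executes it later, possibly against different state. A rough sketch of the two call paths (the `Task` class here is a toy stand-in, not Celery's real implementation):

```python
import queue

class Task:
    """Toy stand-in for a Celery task: direct call vs. queued delivery."""
    broker = queue.Queue()  # stand-in for the message broker

    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        # Direct call: runs immediately in the current process,
        # like invoking merge_group(...) from the django shell.
        return self.func(*args, **kwargs)

    def delay(self, *args, **kwargs):
        # Queued call: only enqueued here; a worker picks it up later,
        # which is what the UI path does.
        Task.broker.put((self.func, args, kwargs))

@Task
def merge_group(from_object_id, to_object_id):
    return (from_object_id, to_object_id)

# Shell path: executes immediately.
direct = merge_group(from_object_id=1390, to_object_id=1391)

# UI path: enqueued first, executed later by a "worker".
merge_group.delay(from_object_id=1390, to_object_id=1391)
func, args, kwargs = Task.broker.get()
queued = func(*args, **kwargs)
```

So a task that succeeds when called directly can still fail in the worker if the worker sees the data at a different point in time.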
@mvaled merge_group actually gets called continuously, so it's possible it was only erroring when it got to a certain point.
I updated to the "latest" branch yesterday and this has gone away: now I can merge happily... ;)
I'm having the same issue with the sentry-jira plugin.
Somehow, in the `get_interfaces()` method in `models/event.py`, it cannot iterate over the `NodeData` object:

```
for key, data in self.data.iteritems():
```

where `data` is `<NodeData: id=9vcAqDXlQt+ykm+SK50qiQ==>`.
I'm on sentry version 8.2.4
Whenever I try to merge two or more groups of events, I notice the background task fails and is retried. The traceback from the Sentry backend project is: