zionsofer opened this issue 3 years ago
This issue was marked stale due to lack of activity. It will be closed in 30 days.
Any news on this one? It's problematic for Celery usage. Any workaround advice?
I don't think there is any workaround here other than fixing the bug.
@owais, @lonewolf3739 @zionsofer and I found a workaround for this.
Workaround
We found that adding the task definition ("second.ping") to the first service, i.e. adding the task to the worker's task registry, enables the _trace_before_publish function to find the task in the registry.
It would suffice to add, for example:
@app.task(name="second.ping")
def ping():
    pass
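(Assuming the first worker only consumes the first queue, as in the reproduction commands further down, this stub body should never actually run on it; the registration only needs to share the task name so the publish-side lookup succeeds.)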
Now, task = utils.retrieve_task_from_sender(kwargs) can get the task, and from there it mostly works: the trace contains both spans, as expected.
If the workaround is OK, we can update the documentation, as this is a major use case for Celery: usually a worker (the first service) publishes a task that is consumed by another worker (the second service).
@owais, another thing is the SpanKind that is created for the currently processing worker. In the _trace_prerun function I see that SpanKind=CONSUMER, and it seems that SpanKind=SERVER is what should be there.
With the workaround mentioned above, the resulting Service Map does not show a node for the second Celery worker, although it should, since that worker acts as a server in the system. Changing the SpanKind to SERVER fixes that. WDYT?
SpanKind should definitely NOT be SERVER. This looks like a limitation in the APM backend you are using.
I don't think users should have to worry about the workaround. Ideally, instrumenting with OTel should require zero code changes, especially to existing non-OTel-related code. IMO this is a bug in the Celery instrumentation and should be fixed so it works for all users.
@owais - I ran into this issue and am wondering what the right fix is.
I see that we currently use the task instance to store a dictionary from task_id to span. Since we store the dict on the task object with setattr, we cannot support str tasks (tasks referenced only by their name string, where there is no task object to attach the attribute to).
I wonder if it makes sense to instead store the open spans in a celery-global dictionary (stored as an instrumentation property) keyed by (task_name, task_id, is_publish) in order to resolve this issue.
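For illustration, a minimal sketch of that idea (names and signatures here are hypothetical, not the existing instrumentation utils): keep the open spans in a dictionary owned by the instrumentation instead of setting attributes on the task object, so string-only task references can be handled too.

# Hypothetical sketch of a celery-global span registry; the real fix may differ.
_open_spans = {}  # (task_name, task_id, is_publish) -> (span, activation)

def attach_span(task_name, task_id, span, activation, is_publish=False):
    # Register an open span without touching the task object.
    _open_spans[(task_name, task_id, is_publish)] = (span, activation)

def retrieve_span(task_name, task_id, is_publish=False):
    return _open_spans.get((task_name, task_id, is_publish), (None, None))

def detach_span(task_name, task_id, is_publish=False):
    _open_spans.pop((task_name, task_id, is_publish), None)

One thing such a global dict would need is careful cleanup on task failure/retry/revocation so entries don't leak.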
@blumamir You might want to check this proposed WIP fix by a contributor https://github.com/goatsthatcode/opentelemetry-python-contrib/pull/1. I think there are multiple Github issues created for this same problem.
Thanks! I commented on the PR
Describe your environment
python==3.7.4/3.7.6
Platform==Local - macOS (Darwin-20.5.0-x86_64-i386-64bit), Remote - Linux (many distros)
Otel release==0.22b0 (verified on OTEL main as well - same code and same behavior)
Steps to reproduce
When running two Celery apps, where one calls the other, and instrumenting both with OpenTelemetry, the second Celery worker creates a span in a separate trace from the first one.
Reproduction example:
otel_celery_first.py
otel_celery_second.py
test_celery.py
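The contents of these files are not included above; the following is a hedged, minimal reconstruction of what they might look like, assuming the queue and task names from the commands below (broker URL, module layout and the first task's name are made up for illustration):

# otel_celery_first.py -- hypothetical reconstruction, not the original file
from celery import Celery
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor

@worker_process_init.connect(weak=False)
def init_tracing(*args, **kwargs):
    CeleryInstrumentor().instrument()

app = Celery("first", broker="redis://localhost:6379/0")

@app.task(name="first.hello")
def hello():
    # Publish a task that only the second worker implements and consumes.
    app.send_task("second.ping", queue="second")

# otel_celery_second.py -- same setup, but the app is named "second" and
# registers the task actually being traced across services:
#
#     @app.task(name="second.ping")
#     def ping():
#         return "pong"
#
# test_celery.py -- sends the first task, which in turn publishes second.ping:
#
#     from otel_celery_first import app
#     app.send_task("first.hello", queue="first")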
Running the workers:
celery -A otel_celery_first worker -n first@%h -Q first
celery -A otel_celery_second worker -n second@%h -Q second
Sending the task:
python test_celery.py
What is the expected behavior?
Two spans, with the same trace ID.
First celery worker output:
Second celery worker output:
What is the actual behavior?
Two spans, with a different trace ID for each:
First celery worker output:
Second celery worker output:
I would expect a span for apply_async, which is also missing from the first worker.
Additional context
I believe this has to do with the _trace_before_publish signal handler: the handler tries to retrieve the task from the Celery registry by the sender name, which is the name of the task being sent. But the first worker does not explicitly declare the tasks it sends as part of its registry, so the task is not found, None is returned, and the function exits prematurely. Perhaps there's a better way to handle this?
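For illustration, here is roughly where the lookup fails; this is a hypothetical sketch based on the description above, not the actual instrumentation source. Celery's before_task_publish signal passes the task name (a string) as the sender, and the handler resolves it against the publishing app's task registry, which only contains tasks that app has registered:

from celery import current_app

def lookup_published_task(sender_name):
    # current_app.tasks is the publisher's task registry; "second.ping" is
    # present only if the publishing app registered it (see the workaround
    # earlier in the thread), otherwise this returns None and the publish
    # span is never created.
    return current_app.tasks.get(sender_name)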