open-telemetry / opentelemetry-python

OpenTelemetry Python API and SDK
https://opentelemetry.io
Apache License 2.0

Segmentation fault with auto instrumentation in asgi applications #2030

Open potatochip opened 3 years ago

potatochip commented 3 years ago

Describe your environment: I am experiencing a segmentation fault when using auto instrumentation with any ASGI application. I have tested with both Starlette and FastAPI.

Steps to reproduce: The segfault can be reproduced with the following repo; steps are in README.md. https://github.com/potatochip/opentelemetry-segfault-example

What is the expected behavior? No segfault

What is the actual behavior? Segfault

Additional context: Load testing sometimes needs to run a couple of times before a segfault occurs, but it usually happens on the first try.

owais commented 3 years ago

Thanks for the report and reproducible example. Could you please also attach a traceback of the failure?

potatochip commented 3 years ago

The container exits with code 139. These are the logs:

[2021-08-10 20:59:12 +0000] [8] [INFO] Starting gunicorn 20.1.0
[2021-08-10 20:59:12 +0000] [8] [INFO] Listening at: http://0.0.0.0:8000 (8)
[2021-08-10 20:59:12 +0000] [8] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-10 20:59:12 +0000] [14] [INFO] Booting worker with pid: 14
Overriding of current TracerProvider is not allowed
[2021-08-10 20:59:12 +0000] [14] [INFO] Started server process [14]
[2021-08-10 20:59:12 +0000] [14] [INFO] Waiting for application startup.
[2021-08-10 20:59:12 +0000] [14] [INFO] Application startup complete.
Queue is full, likely spans will be dropped.
Queue is full, likely spans will be dropped.
scripts/auto: line 2:     8 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app
potatochip commented 3 years ago

The immediate difference I see is that the message "Queue is full, likely spans will be dropped." appears twice with automatic instrumentation, but only once with manual instrumentation.

owais commented 3 years ago

Does this only happen when auto instrumenting and not with a manual setup? From the logs, it does look like the tracing pipeline is set up twice. Not sure if that is the issue, though. Can you please try setting OTEL_BSP_MAX_QUEUE_SIZE to a very large value (it's 2048 by default) and see if that makes the segfault go away or take longer to happen? Alternatively, you could set it to a very low number such as 10 and see if it happens almost immediately. That could help us narrow down the issue a bit.
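
For reference, OTEL_BSP_MAX_QUEUE_SIZE controls the BatchSpanProcessor's internal queue; the same limit can be set in code when the pipeline is built manually. A minimal sketch, with ConsoleSpanExporter standing in as a placeholder for whatever exporter is actually configured:

# Equivalent to exporting OTEL_BSP_MAX_QUEUE_SIZE before starting the app.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter(), max_queue_size=999_999)  # default is 2048
)
trace.set_tracer_provider(provider)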

potatochip commented 3 years ago

This only happens with auto instrumentation, not manual. I set OTEL_BSP_MAX_QUEUE_SIZE=999999 and the segfault still occurs, with no change in the amount of time it takes. The only difference is that the queue-full message does not appear before the segfault occurs.

owais commented 3 years ago

Thanks. That means the queue is not the issue. My initial naive guess is that somehow we are instrumenting things multiple times and that is causing weird memory access issues. I'll try to reproduce with your example and debug later this week.

srikanthccv commented 3 years ago

I tried to run this on my machine and could recreate the issue. Here is the stack trace with more info:

srikanthc@FL-LPT-388s-MacBook-Pro opentelemetry-segfault-example % docker run -it -p 8000:8000 --entrypoint scripts/auto otseg
[2021-08-12 01:21:14 +0000] [7] [INFO] Starting gunicorn 20.1.0
[2021-08-12 01:21:14 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2021-08-12 01:21:14 +0000] [7] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-12 01:21:14 +0000] [13] [INFO] Booting worker with pid: 13
WARNING:opentelemetry.trace:Overriding of current TracerProvider is not allowed
[2021-08-12 01:21:14 +0000] [13] [INFO] Started server process [13]
[2021-08-12 01:21:14 +0000] [13] [INFO] Waiting for application startup.
[2021-08-12 01:21:14 +0000] [13] [INFO] Application startup complete.
Fatal Python error: Segmentation fault

Thread 0x00007fc5877fe700 (most recent call first):
  File "/usr/local/lib/python3.8/threading.py", line 306 in wait
  File "/usr/local/lib/python3.8/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 230 in worker
  File "/usr/local/lib/python3.8/threading.py", line 870 in run
  File "/usr/local/lib/python3.8/threading.py", line 932 in _bootstrap_inner
  File "/usr/local/lib/python3.8/threading.py", line 890 in _bootstrap

Thread 0x00007fc58f627740 (most recent call first):
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 357 in sleep
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 209 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 72 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 231 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
  File "/usr/local/bin/gunicorn", line 8 in <module>
scripts/auto: line 2:     7 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app
owais commented 3 years ago

opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

gunicorn_conf.py already sets up the tracing pipeline, and I see opentelemetry-distro is installed as well, meaning the instrument command would definitely set up two pipelines. We should update the docs to not recommend using the instrument command with gunicorn and instead enable the instrumentations in gunicorn_conf.py, as sketched below. I think that should solve this specific issue.
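
A rough sketch of that recommendation, assuming the tracing pipeline is moved into gunicorn's post_fork hook; ConsoleSpanExporter again stands in for the exporter the example repo actually uses:

# gunicorn_conf.py -- set up tracing once per worker instead of wrapping
# gunicorn with opentelemetry-instrument.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

def post_fork(server, worker):
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    # Individual instrumentors (e.g. FastAPIInstrumentor().instrument())
    # could be enabled here as well instead of relying on auto instrumentation.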

That said, we still want to support setting up multiple pipelines when a user really wants it, so we should figure out whether setting up multiple pipelines is actually what causes the issue and fix it.

srikanthccv commented 3 years ago

Interestingly, I am unable to reproduce this issue if I don't use Docker.

rhymes commented 1 year ago

I'm experiencing a similar error in production:

Dec 14 05:57:28 PM  Fatal Python error: Segmentation fault
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0332f700 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 316 in wait
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 254 in worker
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 917 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/threading.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 980 in _bootstrap_inner
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 937 in _bootstrap
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0e3e6740 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 357 in sleep
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 209 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 72 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 231 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/bin/gunicorn", line 8 in <module>

I have seen this quite often during the day, though at irregular intervals. We do have quite a complex instrumentation setup that's entirely manual (a rough sketch of such a setup follows the package list below).

Versions:

And these are the OpenTelemetry packages we use directly:

opentelemetry-api                        1.7.1
opentelemetry-exporter-otlp-proto-grpc   1.7.1
opentelemetry-instrumentation-botocore   0.26b1
opentelemetry-instrumentation-celery     0.26b1
opentelemetry-instrumentation-flask      0.26b1
opentelemetry-instrumentation-redis      0.26b1
opentelemetry-instrumentation-requests   0.26b1
opentelemetry-instrumentation-sqlalchemy 0.26b1
opentelemetry-sdk                        1.7.1
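
For context, a manual setup with packages like these typically wires everything explicitly. A hypothetical minimal Flask variant, where the endpoint and app are placeholders rather than details from the actual application:

from flask import Flask
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Build the pipeline once, then instrument each library explicitly.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)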

I checked the changelog for the newer versions of the library, but I haven't seen anything related.

Any ideas?

srikanthccv commented 1 year ago

There wasn't an exact fix, but many related fixes have landed since then. Can you check whether this still happens on 1.15.0 and 0.36b0?

rhymes commented 1 year ago

@srikanthccv thanks for the tip! I will attempt to upgrade and report back :-) It'll take a bit, as it's not a smooth upgrade.

rhymes commented 1 year ago

Hi @srikanthccv,

unfortunately it didn't work. It might be due to OpenTelemetry's interaction with the Sentry SDK within the Flask app; I've also tried inverting the order of initialization of the two, but that didn't help. I'll try using a CPU and memory profiler to see if I can get to the bottom of this.

Thanks in the meantime!