CYOT opened this issue 9 months ago
Getting the same issue with these library versions:
numpy 1.24.4
safetensors 0.4.0
scipy 1.10.1
sentence-transformers 2.2.2
sentencepiece 0.1.99
tokenizers 0.14.1
torch 2.1.0
torchvision 0.16.0
transformers 4.34.0
The seg fault happens on the same model.encode(...)
step when running in a Docker container, but it works fine in my local Python 3.8 environment on my Mac M1 machine.
Relevant Dockerfile base portion:
FROM python:3.8-slim-buster AS ApiImage
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN python3 -m pip install --upgrade pip setuptools wheel
I am having the same issue, but wanted to point out that I'm getting the error when trying to encode with both 'clip-ViT-B-32' and 'all-MiniLM-L6-v2'.
Docker version 24.0.6, build ed223bc
Also mentioned here https://github.com/UKPLab/sentence-transformers/issues/2228
Has anyone here been able to resolve the issue?
Setting the environment variable as follows fixes it for me: OMP_NUM_THREADS=1
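For anyone wanting to apply that workaround from Python rather than the shell: a minimal sketch, assuming the variable must be set before torch / sentence-transformers are imported (OpenMP reads it at library load time). The SentenceTransformer lines are taken from the reports above and left commented out since they need the model files available:

```python
import os

# Limit OpenMP to a single thread. This must happen before the first
# import of torch / sentence_transformers, because the thread count is
# read when the native libraries are loaded.
os.environ["OMP_NUM_THREADS"] = "1"

# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("all-MiniLM-L6-v2")
# embeddings = model.encode(["hello world"])
```

In a Dockerfile the equivalent is an `ENV OMP_NUM_THREADS=1` line, so the setting applies to every process in the container.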
Hello,
I am encountering a segmentation fault issue while using the Sentence Transformer library on my Nvidia Jetson Xavier NX device. The issue occurs when I attempt to encode text with the "paraphrase-mpnet-base-v2" model.
Here are some details about my setup:
Hardware: Nvidia Jetson Xavier NX (15 GB GPU, 8 GB RAM, Arch)
Software: NumPy version 1.26.0, Sentence Transformers version 2.2.2
The error message I am receiving is as follows:
> /home/nvidia/Documents/alpaca-python/faisstest.py(10)
-> vectors = encoder.encode(text)
Segmentation fault (core dumped)
Could you please provide guidance on how to resolve this issue? Any insights, tips, or information on potential solutions would be greatly appreciated.
Thank you for your assistance.
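One way to get more detail than a bare "Segmentation fault (core dumped)" is the standard-library faulthandler module, which dumps the Python-level traceback when the process receives a fatal signal. A sketch, with the encode call mirroring the report above (commented out, since it requires the model to be downloaded):

```python
import faulthandler

# Print the Python traceback to stderr on SIGSEGV, SIGABRT, etc.,
# so the exact Python line that triggered the crash is visible.
faulthandler.enable()

# from sentence_transformers import SentenceTransformer
# encoder = SentenceTransformer("paraphrase-mpnet-base-v2")
# vectors = encoder.encode(text)
```

The same effect is available without editing the script by running it as `python -X faulthandler faisstest.py`.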