SYSTRAN / faster-whisper

Faster Whisper transcription with CTranslate2
MIT License

Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized. #967

Open SiriusArtLtd opened 3 months ago

SiriusArtLtd commented 3 months ago

I had faster-whisper working on a MacBook M3, but when I tried to run the same code on a Windows laptop, I ran into problems.

OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
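For reference, the environment-variable workaround mentioned in the hint can be set from Python roughly like this (a sketch only; it has to run before importing numpy, torch, or faster_whisper, and, as the hint warns, it is unsafe and may produce incorrect results):

import os

# Unsafe, unsupported workaround from the OMP hint above: allow duplicate
# OpenMP runtimes to coexist in the process. Must be set before importing
# any package that loads an OpenMP runtime (numpy, torch, faster_whisper).
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

from faster_whisper import WhisperModel  # imported only after the variable is set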

After setting os.environ['KMP_DUPLICATE_LIB_OK'] = 'True', a new error appeared:

[2024-08-18 20:34:06.149] [ctranslate2] [thread 7412] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
Traceback (most recent call last):
  File "C:\Users\USERNAME\PycharmProjects\voice3\main.py", line 69, in <module>
    for segment in segments:
  File "C:\Users\USERNAME\PycharmProjects\voice3\.venv\Lib\site-packages\faster_whisper\transcribe.py", line 510, in generate_segments
    encoder_output = self.encode(segment)
                     ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USERNAME\PycharmProjects\voice3\.venv\Lib\site-packages\faster_whisper\transcribe.py", line 769, in encode
    return self.model.encode(features, to_cpu=to_cpu)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device

Current CUDA setup:
version: 2.4.0+cu121
available: True
zeros: tensor([0.], device='cuda:0')
count: 1
name: Quadro M2000M
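The Quadro M2000M is a Maxwell-generation GPU, and cudaErrorNoKernelImageForDevice usually means the installed CTranslate2 build does not ship CUDA kernels compiled for that architecture. As a quick way to check that everything else works, the model can be forced onto the CPU backend (a sketch; the model size is a placeholder):

from faster_whisper import WhisperModel

# Run on the CPU backend to rule out the unsupported GPU;
# int8 keeps memory use and runtime reasonable on CPU.
model = WhisperModel("small", device="cpu", compute_type="int8")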

Installed packages (pip list):

aiohappyeyeballs 2.3.7
aiohttp 3.10.4
aiosignal 1.3.1
alembic 1.13.2
antlr4-python3-runtime 4.9.3
asteroid-filterbanks 0.4.0
attrs 24.2.0
audioread 3.0.1
av 11.0.0
certifi 2024.7.4
cffi 1.17.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
coloredlogs 15.0.1
colorlog 6.8.2
contourpy 1.2.1
ctranslate2 4.3.1
cuda-python 12.6.0
cycler 0.12.1
decorator 5.1.1
docopt 0.6.2
einops 0.8.0
faster-whisper 1.0.0
filelock 3.13.1
flatbuffers 24.3.25
fonttools 4.53.1
frozenlist 1.4.1
fsspec 2024.2.0
greenlet 3.0.3
huggingface-hub 0.24.5
humanfriendly 10.0
HyperPyYAML 1.2.2
idna 3.7
inquirerpy 0.3.4
Jinja2 3.1.3
joblib 1.4.2
julius 0.2.7
kiwisolver 1.4.5
lazy_loader 0.4
librosa 0.10.2.post1
lightning 2.4.0
lightning-utilities 0.11.6
llvmlite 0.43.0
Mako 1.3.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
mdurl 0.1.2
mpmath 1.3.0
msgpack 1.0.8
multidict 6.0.5
networkx 3.2.1
nltk 3.8.1
numba 0.60.0
numpy 1.26.0
omegaconf 2.3.0
onnxruntime 1.19.0
optuna 3.6.1
packaging 24.1
pandas 2.2.2
pfzy 0.3.4
pillow 10.2.0
pip 23.2.1
platformdirs 4.2.2
pooch 1.8.2
primePy 1.3
prompt_toolkit 3.0.47
protobuf 5.27.3
pyannote.audio 3.1.1
pyannote.core 5.0.0
pyannote.database 5.1.0
pyannote.metrics 3.2.1
pyannote.pipeline 3.0.1
pycparser 2.22
Pygments 2.18.0
pyparsing 3.1.2
pyreadline3 3.4.1
python-dateutil 2.9.0.post0
pytorch-lightning 2.4.0
pytorch-metric-learning 2.6.0
pytz 2024.1
pywin32 306
PyYAML 6.0.2
regex 2024.7.24
requests 2.32.3
rich 13.7.1
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.8
safetensors 0.4.4
scikit-learn 1.5.1
scipy 1.14.0
semver 3.0.2
sentencepiece 0.2.0
setuptools 72.2.0
shellingham 1.5.4
six 1.16.0
sortedcontainers 2.4.0
soundfile 0.12.1
soxr 0.4.0
speechbrain 1.0.0
SQLAlchemy 2.0.32
sympy 1.12
tabulate 0.9.0
tensorboardX 2.6.2.2
threadpoolctl 3.5.0
tokenizers 0.15.2
torch 2.4.0+cu121
torch-audiomentations 0.11.1
torch-pitch-shift 1.2.4
torchaudio 2.4.0+cu121
torchmetrics 1.4.1
torchvision 0.19.0+cu121
tqdm 4.66.5
transformers 4.39.3
typer 0.12.4
typing_extensions 4.9.0
tzdata 2024.1
urllib3 2.2.2
wcwidth 0.2.13
yarl 1.9.4

SiriusArtLtd commented 3 months ago

Could someone post a currently working setup for faster-whisper on a PC?

InaKrapp commented 1 month ago

Could someone post a currently working setup for faster-whisper on a PC?

Hello, here is the setup that works on my PC (Windows 10):

I use a virtual environment with the following packages:

av==12.3.0
certifi==2024.8.30
charset-normalizer==3.3.2
colorama==0.4.6
coloredlogs==15.0.1
ctranslate2==4.4.0
faster-whisper==1.0.3
filelock==3.16.1
flatbuffers==24.3.25
fsspec==2024.9.0
huggingface-hub==0.25.1
humanfriendly==10.0
idna==3.10
Jinja2==3.1.4
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.3
numpy==2.1.2
onnxruntime==1.19.2
packaging==24.1
protobuf==5.28.2
PyAudio==0.2.14
pydub==0.25.1
PyQt6==6.7.1
PyQt6-Qt6==6.7.3
PyQt6_sip==13.8.0
pyreadline3==3.5.4
PyYAML==6.0.2
requests==2.32.3
sympy==1.13.3
tokenizers==0.20.0
torch==2.4.1
tqdm==4.66.5
typing_extensions==4.12.2
urllib3==2.2.3

From my experience, the "OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized" error and similar ones are usually caused by packages that try to load the same or very similar .dll files (Windows appears to be worse at managing this than macOS). Unfortunately, the numpy package can cause this, so it is a very frequent issue. Some people have reported that deleting and reinstalling numpy, or updating it, solved the issue for them.
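If you want to see which packages pull in which OpenMP runtime, the threadpoolctl package (it is already in the package list from the first post) can report the runtime DLLs loaded in the current process; a small diagnostic sketch:

import pprint

from threadpoolctl import threadpool_info

# Import the packages you suspect first (one at a time, if the process
# crashes), so that their OpenMP runtimes are actually loaded.
import numpy  # noqa: F401

# Each entry includes the DLL path, which shows which package bundles
# libiomp5md.dll or libomp140.x86_64.dll.
pprint.pprint(threadpool_info())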

I mostly manage to avoid it by creating virtual environments in Python and installing as few packages as possible. The packages above are the ones I use to run a small GUI for faster-whisper, so if you only want to run faster-whisper from the command line, even fewer packages should work.
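As an illustration, a minimal command-line transcription script along those lines could look like this (a sketch only; the model size, device, and compute type are placeholder choices):

import argparse

from faster_whisper import WhisperModel


def main():
    parser = argparse.ArgumentParser(description="Minimal faster-whisper transcription")
    parser.add_argument("audio", help="path to the audio file to transcribe")
    parser.add_argument("--model", default="small", help="Whisper model size or path")
    parser.add_argument("--device", default="cpu", choices=["cpu", "cuda"])
    args = parser.parse_args()

    # int8 is a safe default on CPU; on CUDA, float16 or int8_float16 are common.
    compute_type = "int8" if args.device == "cpu" else "float16"
    model = WhisperModel(args.model, device=args.device, compute_type=compute_type)

    segments, info = model.transcribe(args.audio)
    print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
    for segment in segments:
        print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")


if __name__ == "__main__":
    main()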

I hope this helps.