Closed: IgorEzerskiy closed this issue 2 months ago.
It's caused by faster-whisper, and the easy fix is to pin torch==2.3.1; it breaks with 2.4.
The hard fix is to parse out the location of the pip-installed CUDA libraries and add it dynamically to the library search path (e.g. LD_LIBRARY_PATH).
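For the "hard" route, here is a minimal sketch of what that parsing could look like. It assumes the CUDA libraries come from the pip wheels listed further down in this issue (nvidia-cublas-cu12, nvidia-cudnn-cu12); the file name find_cuda_libs.py is made up for illustration:

```python
# Hypothetical helper (find_cuda_libs.py): locate the cuBLAS/cuDNN libraries
# shipped inside the pip wheels and print a path list for the dynamic loader.
import os

import nvidia.cublas.lib
import nvidia.cudnn.lib

lib_dirs = [
    os.path.dirname(nvidia.cublas.lib.__file__),
    os.path.dirname(nvidia.cudnn.lib.__file__),
]

# Prepend this to LD_LIBRARY_PATH before starting the process that imports
# whisperx, e.g.:
#   export LD_LIBRARY_PATH="$(python find_cuda_libs.py):$LD_LIBRARY_PATH"
print(":".join(lib_dirs))
```

Note that the export has to happen before the Python process starts (for example in the Dockerfile or an entrypoint script), since the loader reads LD_LIBRARY_PATH at process startup.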
Thank you very much, you just saved me. Reverting the torch and torchaudio libraries to version 2.3.1 helped:
torch==2.3.1
Thanks, this helped me sort out the issue.
@saveli you are literally my hero. I've been struggling with this issue for more than 4 days; I tested 3 operating systems, 3 different CUDA versions, and a lot more. Thank you so much.
I'm encountering an issue on an EC2 g5.2xlarge instance and getting the error mentioned in the title.
Description: I'm using Whisper-X and Mistral in the same Docker container. In this setup, Mistral is working, but Whisper-X is not.
Environment:
NVIDIA-SMI 555.42.02
CUDA Version: 12.5
GPU: NVIDIA A10G
Docker version 27.1.2, build d01f264
Python packages:
ctranslate2 4.3.1
faster-whisper 1.0.0
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.6.20
nvidia-nvtx-cu12 12.1.105
torch 2.4.0
whisperx 3.1.1
I've found similar errors related to torch, but in my case torch is working inside the container. Please help; I've spent a lot of time on this without success.
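For anyone comparing environments, a quick way to check what "torch is working" means inside the container is to print what each runtime actually sees (a minimal sketch, assuming only that torch and ctranslate2 are importable):

```python
# Minimal sanity check: report what torch and ctranslate2 each see in the container.
import torch
import ctranslate2

print("torch:", torch.__version__, "built against CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("cuDNN version (torch):", torch.backends.cudnn.version())
print("CUDA devices (ctranslate2):", ctranslate2.get_cuda_device_count())
```

If torch reports the GPU fine but ctranslate2 sees no devices (or crashes on the cuDNN load), that points at the library-path/pinning issue discussed above rather than at the container's CUDA setup.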