Closed by SwAt1563 2 weeks ago
If you're using Docker, you can try changing https://github.com/SYSTRAN/faster-whisper/blob/814472fdbf7faf5d77d65cdb81b1528c0dead02a/docker/Dockerfile#L5
to:
RUN pip3 install faster-whisper torch --index-url https://download.pytorch.org/whl/cu124
Currently there's a strict version matrix between torch and faster-whisper: https://github.com/SYSTRAN/faster-whisper/issues/1086#issue-2612158224
Thank you for your guidance, but it still doesn't work.
Issue Summary: I'm attempting to run faster-whisper with GPU support on my device, which has CUDA 12.6 installed. The following package versions are in use:
faster-whisper==1.0.3
ctranslate2==4.4.0
wyoming==1.5.3
The model runs successfully on the CPU; however, it does not use the GPU as expected. I am looking for a way to enable GPU acceleration for faster-whisper with CUDA 12.6.
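For reference, this is roughly how GPU execution is requested from faster-whisper. The `device` and `compute_type` arguments are the library's documented `WhisperModel` parameters; the `"small"` model name, the `float16`/`int8` choices, and the `build_model_kwargs` helper are illustrative assumptions:

```python
def build_model_kwargs(use_gpu: bool) -> dict:
    """Pick WhisperModel arguments for GPU or CPU execution (illustrative choices)."""
    if use_gpu:
        return {"device": "cuda", "compute_type": "float16"}
    return {"device": "cpu", "compute_type": "int8"}

try:
    from faster_whisper import WhisperModel

    # Fails at construction time if CUDA/cuDNN libraries can't be loaded,
    # which is the symptom described in this issue.
    model = WhisperModel("small", **build_model_kwargs(use_gpu=True))
except Exception:
    model = None  # library not installed, or no usable GPU in this environment
```

If this construction raises a cuDNN/CUDA loading error rather than falling back silently, it usually points at the torch/ctranslate2 version mismatch discussed above rather than at the model code itself.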