Open facundobatista opened 1 month ago
Hi,
You are using whisperx, but that tool is not part of this repository.
But the error is inside the library:
File "/home/facundo/devel/envwhisperx/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 130, in __init__
self.model = ctranslate2.models.Whisper(
Maybe there's a way to reproduce it using only the library?
Oh sorry. It's actually unrelated to the libraries. Your GPU is too old:
INT8 precision requires a CUDA GPU with a compute capability of 6.1, 7.0, or higher, whereas the GT740's compute capability is 3.0 (https://developer.nvidia.com/cuda-gpus). You should simply use float32 or auto as the compute type.
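The capability gate described above can be sketched as a small helper. The function name and fallback logic are illustrative, not part of the faster-whisper or ctranslate2 API; the 6.1 / 7.0+ threshold is taken from the comment above:

```python
def pick_compute_type(major: int, minor: int) -> str:
    """Choose a compute type from a device's CUDA compute capability.

    Illustrative helper, not part of any library API: INT8 kernels
    need compute capability 6.1 or 7.0+; older GPUs fall back to
    float32 (passing "auto" to the library would have a similar effect).
    """
    if (major, minor) == (6, 1) or (major, minor) >= (7, 0):
        return "int8"
    return "float32"

# A GT740 reports compute capability 3.0, so int8 is unavailable:
print(pick_compute_type(3, 0))  # float32
print(pick_compute_type(8, 6))  # int8
```

On a real system the `(major, minor)` pair could come from, e.g., `torch.cuda.get_device_capability()`, but the selection logic itself is the point here.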
Ah, my bad, sorry, I thought that int8 was the "simplest" one.
BTW, float32 doesn't work either, but I don't really know what to report, because I just get a segmentation fault :(
+1 on this. I get a segmentation fault when running the project's usage example on a GTX 960 with float32.
Hello!
I'm getting that error from this library when running whisperx on an mp3. The complete traceback is:
I'm running it like this:
whisperx encuentro.mp3 --compute_type int8 --model large-v2 --verbose True --language es
I'm in a virtualenv, created with these dependencies:
ctranslate2 is pinned to less than 4 (actually using 3.24.0) because of CUDA driver 11.4 from Nvidia driver 470 for the GeForce GT740. GPU details:
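The pin described above can be expressed as a pip constraint (the version numbers are taken from the comment; which exact release resolves depends on when it is run):

```shell
# Keep ctranslate2 below 4.x for compatibility with CUDA 11.4 / driver 470
pip install "ctranslate2<4"
```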
Any idea what is going on? How can I fix it, or work around it? Thanks!!