MahmoudAshraf97 / whisper-diarization

Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
BSD 2-Clause "Simplified" License

Requested float16 compute type, tesla p40 #216

Closed · danieladi98 closed 2 months ago

danieladi98 commented 2 months ago

```
Traceback (most recent call last):
  File "diarize.py", line 115, in <module>
    whisper_results, language, audio_waveform = transcribe_batched(
  File "/whisper/whisper-diarization/transcription_helpers.py", line 64, in transcribe_batched
    whisper_model = whisperx.load_model(
  File "/whisper/whisper-diarization/venv/lib/python3.8/site-packages/whisperx/asr.py", line 288, in load_model
    model = model or WhisperModel(whisper_arch,
  File "/whisper/whisper-diarization/venv/lib/python3.8/site-packages/faster_whisper/transcribe.py", line 133, in __init__
    self.model = ctranslate2.models.Whisper(
ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
```

I know this error is related to the device, but I already made sure that my Tesla P40 supports float16 (https://www.techpowerup.com/gpu-specs/tesla-p40.c2878). Can you guys help me?
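For context: the spec sheet does list FP16, but Pascal GP102 boards such as the Tesla P40 run FP16 at only a small fraction of their FP32 rate, which is why CTranslate2 refuses it as not "efficient". A minimal sketch, assuming a CUDA build of ctranslate2 is installed, for checking which compute types the library will actually accept on your GPU (get_supported_compute_types is a real CTranslate2 API; the example set in the comments is illustrative, not a captured log):

```python
# Query CTranslate2 for the compute types it considers usable on this GPU.
import ctranslate2

# On GPUs without fast FP16 (e.g. a Tesla P40) this typically returns
# something like {"float32", "int8", "int8_float32"} without "float16".
supported = ctranslate2.get_supported_compute_types("cuda")
print("Supported compute types:", supported)

if "float16" not in supported:
    print("float16 is not efficient on this device; fall back to float32 or int8.")
```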

danieladi98 commented 2 months ago

Just change it to float32; it works perfectly fine.
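For anyone hitting the same error, a minimal sketch of that workaround, assuming whisperx is installed and a CUDA device is visible; the model name "large-v2" is just an example. If the script derives its compute type from a device-to-dtype mapping rather than a hard-coded argument, changing that mapping's "cuda" entry to "float32" is the equivalent one-line edit.

```python
# Load the WhisperX model with an explicit compute type the GPU supports.
import whisperx

model = whisperx.load_model(
    "large-v2",              # Whisper checkpoint to load (example choice)
    "cuda",
    compute_type="float32",  # instead of "float16", which the P40 rejects
)

# "int8" is another option if float32 does not fit in GPU memory:
# model = whisperx.load_model("large-v2", "cuda", compute_type="int8")
```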