Traceback (most recent call last):
  File "diarize.py", line 115, in <module>
    whisper_results, language, audio_waveform = transcribe_batched(
  File "/whisper/whisper-diarization/transcription_helpers.py", line 64, in transcribe_batched
    whisper_model = whisperx.load_model(
  File "/whisper/whisper-diarization/venv/lib/python3.8/site-packages/whisperx/asr.py", line 288, in load_model
    model = model or WhisperModel(whisper_arch,
  File "/whisper/whisper-diarization/venv/lib/python3.8/site-packages/faster_whisper/transcribe.py", line 133, in __init__
    self.model = ctranslate2.models.Whisper(
ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
I know this error is related to the device, but I already checked that my Tesla P40 supports float16 (https://www.techpowerup.com/gpu-specs/tesla-p40.c2878). Can anyone help?
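Note the wording of the error: it complains about *efficient* float16, not float16 support as such. The P40 is a Pascal card, and Pascal GPUs (other than the P100) run FP16 at a small fraction of FP32 speed, which is presumably why CTranslate2 refuses it even though the spec sheet lists FP16. The usual workaround is to fall back to "int8" or "float32". Below is a minimal sketch of that fallback logic; the commented-out `ctranslate2.get_supported_compute_types` call and the `supported_on_p40` set are assumptions for illustration, not output from a real P40:

```python
# Sketch of a compute-type fallback for CTranslate2-based loaders
# (whisperx / faster-whisper). Assumption: the backend rejects float16
# on GPUs without efficient FP16, so we pick the next-best type.

def pick_compute_type(requested: str, supported: set) -> str:
    """Return the requested compute type if available, else fall back."""
    if requested in supported:
        return requested
    # Preference order: int8 is typically the fast option on Pascal
    # (which has fast INT8 via DP4A), float32 always works.
    for candidate in ("int8_float16", "int8", "float32"):
        if candidate in supported:
            return candidate
    return "float32"  # safe default; CTranslate2 always supports float32

# On a real system you would query the backend, e.g. (assumed call):
#   supported = set(ctranslate2.get_supported_compute_types("cuda"))
supported_on_p40 = {"int8", "int8_float32", "float32"}  # hypothetical set
print(pick_compute_type("float16", supported_on_p40))  # -> int8
```

If whisperx exposes it in your version, passing the fallback type directly, e.g. `whisperx.load_model(..., compute_type="int8")`, should avoid the `ValueError`; check whether `diarize.py` or `transcription_helpers.py` lets you override the `compute_type` it hard-codes.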