Open dkutz78 opened 5 months ago
You can use this project instead:
Thanks Shmuel, it looks promising. I wish I could make your version work :)
I need to make an updated version; I'm trying to find time for this...
After waiting 10 minutes I get this message 🤷‍♂️
> Due to a bug fix in https://github.com/huggingface/transformers/pull/28687 transcription using a multilingual Whisper will default to language detection followed by transcription instead of translation to English. This might be a breaking change for your use case. If you want to instead always translate your audio to English, make sure to pass `language='en'`.

```
C:\Ai\hebrew_whisper\venv\Lib\site-packages\transformers\models\whisper\modeling_whisper.py:691: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
```
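For what it's worth, that first warning is just transformers telling you the default behavior changed; it can be avoided by pinning the task and language explicitly instead of relying on auto-detection. A minimal sketch of how that might look with a standard Hugging Face ASR pipeline (the model name and argument values here are assumptions for illustration, not taken from this repo):

```python
# Sketch only: pin Whisper's language/task instead of relying on auto-detection.
# "translate" produces English output from Hebrew audio; "transcribe" would
# keep the output in Hebrew. Values below are illustrative assumptions.
generate_kwargs = {"language": "he", "task": "translate"}

# The heavy calls are commented out because they download a model;
# model name is a placeholder, not necessarily what this project uses.
# from transformers import pipeline
# pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
# result = pipe("audio.wav", generate_kwargs=generate_kwargs)
```

The second warning ("Torch was not compiled with flash attention") is harmless on Windows builds of PyTorch; it just falls back to a slower attention kernel and shouldn't stop transcription by itself.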