vytskalt opened 1 month ago
There seems to be a lot more that broke from those changes, with the sudden switch from NumPy to PyTorch.
Yes, very weird they would do that in a minor release.
I tried running stable-ts[fw] in a Jupyter notebook; it crashed at model.transcribe. It works on the prompt though, I don't know what causes it.
stable-ts[fw] installs the latest Faster-Whisper version on PyPI (1.0.3), so the aforementioned changes (which occurred after 1.0.3) do not affect it.

For Faster-Whisper models, the transcribe() method is the original Faster-Whisper transcription method. To use Stable-ts, use model.transcribe_stable() instead.

But if transcribe() is crashing, then it's likely a Faster-Whisper issue. A similar issue already seems to be on their repo: https://github.com/SYSTRAN/faster-whisper/issues/820.
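As a rough sketch of the distinction between the two methods (assuming stable-ts is installed with the [fw] extra, a 'base' model, and a hypothetical audio file audio.mp3):

```python
# Sketch only; assumes `pip install -U stable-ts[fw]` and that audio.mp3 exists.
import stable_whisper

# Loads a Faster-Whisper model wrapped by stable-ts.
model = stable_whisper.load_faster_whisper('base')

# model.transcribe() would be the original Faster-Whisper method;
# model.transcribe_stable() applies Stable-ts timestamp stabilization.
result = model.transcribe_stable('audio.mp3')
result.to_srt_vtt('audio.srt')
```

The model name, file paths, and output format here are placeholders, not from the thread.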
Thanks for this! I solved the crashing by downgrading faster-whisper to 1.0.0, changing my CUDA to 12.1, and changing my PyTorch build to cu121.
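For reference, a downgrade along those lines might look like the following; the version numbers come from the comment above, and the cu121 index URL is the standard PyTorch wheel index for CUDA 12.1 builds (adjust to your environment):

```shell
# Pin faster-whisper to the last version before the breaking changes
pip install faster-whisper==1.0.0

# Install a PyTorch build matching CUDA 12.1 (cu121)
pip install torch --index-url https://download.pytorch.org/whl/cu121
```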
Looks like recent changes to faster-whisper broke compatibility with stable-ts, giving errors like this: