Loofy24 opened 1 year ago
I'm afraid it's not. Unfortunately, a CUDA-capable GPU is necessary to run speech-to-text with OpenAI Whisper locally. If you are handy with coding, you could run the same engine online instead. That function (`Assistant/get_audio.py` / `whisper_wav_to_text`) is pretty standalone, so changing it should not be too hard.
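A minimal sketch of what swapping that function might look like, assuming `whisper_wav_to_text` takes a WAV path and returns the transcript text (I haven't checked the project's exact signature): replace the local model call with a request to OpenAI's hosted Whisper endpoint via the `openai` package.

```python
def whisper_wav_to_text(wav_path: str) -> str:
    """Hypothetical drop-in replacement: transcribe via OpenAI's hosted
    whisper-1 model instead of running Whisper locally on CUDA."""
    # Imported lazily; requires `pip install openai` and an API key
    # set in the OPENAI_API_KEY environment variable.
    import openai

    with open(wav_path, "rb") as audio_file:
        result = openai.Audio.transcribe("whisper-1", audio_file)
    return result["text"]
```

This trades local GPU requirements for an API key and a network connection, so it would also work on AMD machines or in Colab.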
@gia-guar Can we use Google Colab or a similar service that you would recommend if we don't have CUDA support?
Is an AMD GPU compatible? AMD doesn't use CUDA.