Open czerr opened 6 days ago
Oh, that is cool, I did not know LocalAI had Whisper model support. We can indeed add it as a supported provider so you can get faster speeds.
Hi Thimoty,
Yes! You can!
Look here: https://localai.io/features/audio-to-text/
As you did for LLMs, the option of choosing your own LocalAI Whisper server would be great! My LocalAI Whisper server runs on Docker Desktop and is optimized for CUDA. Thanks for your feedback! Christophe.
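For reference, a LocalAI server exposes an OpenAI-compatible transcription endpoint (see the audio-to-text docs linked above), so a sketch of a request might look like this. The host, port, audio filename, and model name `whisper-1` are assumptions here; match them to your own LocalAI configuration.

```shell
# Sketch: send a local audio file to a LocalAI server's
# OpenAI-compatible transcription endpoint.
# localhost:8080, audio.wav, and whisper-1 are placeholders —
# adjust them to your own deployment.
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.wav" \
  -F model="whisper-1"
```

The response is a JSON object containing the transcribed text, which is what a provider integration would consume instead of running Xenova Whisper on the CPU.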
What would you like to see?
Hello everyone, First of all, thank you for this superb project. Would it be possible to use LocalAI for Whisper? Currently the model is Xenova Whisper, which uses the CPU. I have a LocalAI Whisper Large server optimized for NVIDIA GPUs that I would like to use to speed up audio transcription. Thanks for your feedback. Christophe