What would you like to see?
When viewing the available models for the native built-in Transcription provider, we only support downloading and running Xenova/whisper-small. We could also enable Xenova/whisper-large for users with more resources to leverage.
Since whisper-large is a very resource-intensive model, we should show a warning before the user downloads it.
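A minimal sketch of what this could look like: a model list with a flag for resource-intensive entries, and a helper that returns the warning text to surface in the UI. The names `NATIVE_WHISPER_MODELS` and `warningFor` are hypothetical, not the app's actual API.

```javascript
// Hypothetical model registry for the native transcription provider.
// Adding Xenova/whisper-large alongside the existing whisper-small,
// flagged so the UI can warn before download.
const NATIVE_WHISPER_MODELS = [
  { id: "Xenova/whisper-small", resourceIntensive: false },
  { id: "Xenova/whisper-large", resourceIntensive: true }, // proposed addition
];

// Returns a warning string for resource-intensive models, otherwise null.
function warningFor(modelId) {
  const model = NATIVE_WHISPER_MODELS.find((m) => m.id === modelId);
  if (!model || !model.resourceIntensive) return null;
  return `${model.id} is very resource intensive. Make sure your machine has enough RAM and CPU before downloading.`;
}

console.log(warningFor("Xenova/whisper-small")); // null
console.log(warningFor("Xenova/whisper-large")); // warning string
```

The UI would call `warningFor` when the user selects a model and render the returned string (if any) as a confirmation prompt before starting the download.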