khalob closed this issue 8 months ago.
Just realized this same question was asked previously. Whoops. See https://github.com/rhasspy/wyoming-faster-whisper/issues/10 for context. There is a forked repo that adds GPU and distil-model support, but I would really love to have it implemented natively.
I second this. It would be great to have more models, especially since there are already a lot of them on HuggingFace. I see two major ways this could be accomplished, and if help is required, I would be glad to offer it.
This has been done in the 2.0.0 release: https://github.com/rhasspy/wyoming-faster-whisper/releases/tag/v2.0.0
I uploaded the int8 variants to HuggingFace, and the `--model` argument can now be anything that `WhisperModel` supports, such as `tiny.en`, `tiny`, `base.en`, `base`, `small.en`, `small`, `medium.en`, `medium`, `large-v1`, `large-v2`, `large-v3`, `large`, `distil-large-v2`, `distil-medium.en`, `distil-small.en`, or a HuggingFace model ID.
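If you want to sanity-check one of these names outside of the Wyoming server, the same identifiers resolve through the faster-whisper library that powers it. A minimal sketch, assuming a local audio file named `sample.wav` (a placeholder):

```python
from faster_whisper import WhisperModel

# Any of the names above works here, as does a HuggingFace model ID;
# "distil-small.en" with int8 keeps CPU memory usage modest.
model = WhisperModel("distil-small.en", device="cpu", compute_type="int8")

# transcribe() returns a lazy generator of segments plus metadata.
segments, info = model.transcribe("sample.wav", beam_size=5)
print(f"Detected language: {info.language} ({info.language_probability:.0%})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Swapping in `device="cuda"` is how faster-whisper runs on a GPU, which is what the forked repo mentioned above enables for the server.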
@synesthesiam you're the best. Thank you!
@synesthesiam Very nice, thank you!
@synesthesiam I appreciate all the work you and the HA team are doing. I was curious, though, whether you had any thoughts on the following:
1. It may be a stupid question and I might not be fully understanding, but would this project see improvements if the https://github.com/huggingface/distil-whisper models were used?
2. Currently, as you mentioned elsewhere, this project pulls models from a static list that corresponds to GitHub-hosted downloads you supply. Would you be opposed to a PR allowing local (already downloaded) models? A rough sketch of what I mean is below.
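For item 2, faster-whisper's `WhisperModel` already accepts a filesystem path in place of a model name, so supporting pre-downloaded models may mostly be a matter of passing the path through. A minimal sketch, assuming a hypothetical local directory containing a CTranslate2-converted model (the path below is a made-up example):

```python
from pathlib import Path

from faster_whisper import WhisperModel

# Hypothetical local directory holding a CTranslate2-converted Whisper model
# (e.g. produced by ct2-transformers-converter); adjust for your setup.
local_model_dir = Path("/srv/models/whisper-small-int8")

if local_model_dir.is_dir():
    # WhisperModel accepts a filesystem path as well as a model name.
    model = WhisperModel(str(local_model_dir), device="cpu", compute_type="int8")
else:
    # Fall back to downloading a named model.
    model = WhisperModel("small", device="cpu", compute_type="int8")
```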
Thanks again :)