satvikpendem opened this issue 9 months ago
The inference for the currently released models is quite well optimized in PyTorch. If this is too slow, there are also smaller models available (tiny and base) which are quite a bit faster.
Unfortunately I am not an expert on how difficult it is to export these autoregressive models to CoreML or TensorFlow Lite. They are quite different from traditional image recognition models, since we have to run the model hundreds of times to generate the whole sequence.
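To illustrate why this is harder to export than a single-pass image model: generation is a loop that calls the model once per output token. A minimal sketch (using a hypothetical `model` callable standing in for the real network):

```python
# Autoregressive generation sketch: one full forward pass per generated token.
# `model` is a hypothetical stand-in that maps the token sequence so far to
# the next token; a real export would need this whole loop (or its state)
# represented on-device, not just a single forward pass.

def generate(model, prompt_tokens, eos_token, max_new_tokens=200):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)  # each step reruns the model
        if next_token == eos_token:
            break
        tokens.append(next_token)
    return tokens
```

This is why CoreML or TensorFlow Lite conversion is more involved here: the exported artifact must either embed this loop or be driven step-by-step by host code, and key/value caching is usually needed to keep the hundreds of forward passes fast.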
That said, any tutorial on running OpenAI Whisper or Large Language Models on mobile devices would be a great resource, since WhisperSpeech works very much like Whisper or a small LLM.
Is there a way that this can be integrated as a realtime text-to-speech engine for devices like mobile phones? Also, is WhisperSpeech at that level of realtime inference yet, or is there more to be done?