Closed AK391 closed 3 years ago
Is there a way to speed up inference on CPU?
@ivanvovk
@AK391 The only thing you can try is to set a lower number of timesteps during generation; in that case, expect quality degradation. Try setting `temperature` to 5 (or even higher) and `timesteps` to 1-5, and check the quality. 1-3 iterations should be enough to be faster than real-time on CPU.
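This is not Grad-TTS code, just a toy sketch of why lowering `timesteps` trades quality for speed: the diffusion decoder integrates a differential equation numerically, so fewer solver steps means less compute but a coarser, less accurate result. The decoder below is a stand-in (plain Euler on `dx/dt = -x`), purely for illustration.

```python
import math

def euler_decode(n_steps):
    """Toy stand-in for an iterative (diffusion-style) decoder:
    integrate dx/dt = -x from x(0) = 1 over t in [0, 1] using
    n_steps Euler steps. Fewer steps -> faster, but less accurate."""
    x, h = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        x += h * (-x)  # one solver iteration; cost scales with n_steps
    return x

exact = math.exp(-1.0)                       # true solution at t = 1
err_coarse = abs(euler_decode(3) - exact)    # few steps: fast, rough
err_fine = abs(euler_decode(100) - exact)    # many steps: slow, accurate
```

The coarse run does roughly 3% of the work of the fine run, at the cost of a visibly larger error, which mirrors the quality degradation you see with `timesteps` set to 1-3.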
@ivanvovk thanks, changed `temperature` to 5 and `timesteps` to 3; it is faster now on CPU with some quality degradation, but the result still seems reasonable.
@ivanvovk I'm trying to add this Gradio demo to Hugging Face Spaces and getting this error: https://github.com/huawei-noah/Speech-Backbones/issues/4. Possibly the install step for monotonic align is not completing due to permission issues on Hugging Face. Do you know a way around this? Thanks
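If the failure really is a read-only filesystem on the hosting platform (an assumption, not confirmed here), one workaround is to redirect build artifacts to a writable location. The helper below is a minimal, hypothetical sketch; `pick_build_dir` and the commented build command are illustrative, not part of the repo.

```python
import os
import tempfile

def pick_build_dir(preferred):
    """Return `preferred` if it is writable, else a fresh temp directory.

    Useful when the repo checkout itself is read-only, as can happen
    on hosted demo platforms (an assumption about the error above)."""
    return preferred if os.access(preferred, os.W_OK) else tempfile.mkdtemp()

build_dir = pick_build_dir(os.getcwd())
# The extension could then be built into that directory with, e.g.:
#   python setup.py build_ext --build-lib <build_dir>
# and <build_dir> prepended to sys.path before importing monotonic_align.
```

Whether this resolves the Spaces error depends on what permission is actually denied, so treat it as a starting point rather than a confirmed fix.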
This is a separate demo system; better to keep it in a separate repository.
Gradio web demo