Open csukuangfj opened 1 year ago
If you want to run it on GPU, please see the following colab
It's strange that GPU has a lower RTF value:
Real time factor (RTF): 27.556 / 28.165 = 0.978
Why is it strange?
Lower RTF -> Faster
Sorry, you are right. I misunderstood that.
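To make the exchange above concrete: RTF (real time factor) is processing time divided by audio duration, so values below 1.0 mean faster than real time. A minimal sketch using the numbers from this thread:

```python
# RTF (real time factor) = processing time / audio duration.
# Values below 1.0 mean the system runs faster than real time.

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Compute RTF; lower is faster."""
    return processing_seconds / audio_seconds

# Numbers from the thread: 27.556 s of processing for 28.165 s of audio.
rtf = real_time_factor(27.556, 28.165)
print(f"RTF: {rtf:.3f}")  # below 1.0, so slightly faster than real time
```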
FYI: We now support exporting distil-whisper via ONNX and running it with sherpa-onnx.
You can find a colab notebook below for illustration.
sherpa-onnx is implemented in C++ and provides APIs for various languages, e.g., Python/C#/Go/C/Kotlin/Swift, etc. It supports Windows/Linux/macOS, Android/iOS, Raspberry Pi, etc.
The current medium model is still very large, and its RTF on CPU is greater than 1.
Hope that tiny/base/small models will be available soon.