Since Hugging Face is currently down and I cannot download Whisper to test the setup, the next step is to add support for loading your own, locally stored model. This not only removes the Hugging Face dependency but also speeds up execution, since the weights no longer have to be fetched over the network.
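A minimal sketch of how that could look. The helper below, `resolve_model_source`, is hypothetical (not from the original setup): it prefers a local model directory when one exists and only falls back to a Hub ID otherwise, so no call to Hugging Face is needed when the weights are already on disk. The Hub ID `openai/whisper-base` is an assumed default.

```python
import os

# Assumed default Hub ID; swap in whichever Whisper variant you use.
DEFAULT_HUB_ID = "openai/whisper-base"

def resolve_model_source(local_dir: str, hub_id: str = DEFAULT_HUB_ID) -> str:
    """Prefer a local model directory so no network call is needed.

    Returns local_dir if it looks like a saved model (contains config.json),
    otherwise falls back to the Hub ID.
    """
    if os.path.isdir(local_dir) and os.path.isfile(
        os.path.join(local_dir, "config.json")
    ):
        return local_dir
    return hub_id

# The resolved source can then be passed to from_pretrained, which accepts
# a local directory path as well as a Hub ID, e.g.:
#   WhisperProcessor.from_pretrained(resolve_model_source("./models/whisper"))
```

With this in place, a model saved once via `save_pretrained` keeps working even when the Hub is unreachable.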