Closed: rdoshi96 closed this issue 3 years ago
My w2l CLI has a model exporter, and I've implemented a basic importer for PyTorch that works with the librispeech recipe. If you're planning to share your code / methodology / results, I'm very happy to send you what you need, as I plan to experiment with quantization as well.
That said, you should probably also start from my reduced-parameter model trained on a 3k-hour dataset (400MB, epoch 125) at https://talonvoice.com/research/ instead of the 1.6GB librispeech recipe.
I also have other improved models that I haven’t posted yet.
@lunixbochs would you be able to share the links to your W2L CLI importer for pytorch? Sounds super useful.
Closing due to inactivity. You'll need to create your own script to convert a w2l model into PyTorch; we are not supporting this.
I want to work on quantizing models produced by wav2letter using the PyTorch Distiller framework. If I have a file containing a trained model (the pretrained librispeech model, for example), what is the best way to port it to PyTorch?
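Neither the w2l exporter nor the PyTorch importer mentioned above is shared in this thread, so the exported format here is an assumption. As a rough sketch of what such an importer could look like: assuming the exporter dumps each parameter as a named float32 array, you could rebuild the architecture as a hand-written `nn.Module` (the tiny conv stack below is a hypothetical stand-in, not the actual wav2letter topology), load the arrays into its state dict, and then hand the result to a quantization tool:

```python
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    """Hypothetical stand-in for a wav2letter-style acoustic model."""
    def __init__(self, n_mels=80, n_tokens=29):
        super().__init__()
        self.conv = nn.Conv1d(n_mels, 256, kernel_size=7, padding=3)
        self.relu = nn.ReLU()
        self.proj = nn.Linear(256, n_tokens)

    def forward(self, x):
        # x: (batch, n_mels, time) -> (batch, time, n_tokens)
        h = self.relu(self.conv(x))
        return self.proj(h.transpose(1, 2))

def load_exported_weights(model, exported):
    """Copy exported arrays into the module's state dict.

    `exported` maps parameter names (e.g. "conv.weight") to float32
    arrays -- the format a w2l exporter *might* emit; adjust the name
    mapping to whatever your exporter actually produces.
    """
    state = {k: torch.as_tensor(v) for k, v in exported.items()}
    model.load_state_dict(state)
    return model
```

Once the weights are in a regular PyTorch module, post-training dynamic quantization is a one-liner, e.g. `torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)`, and the model can also be fed into Distiller's own quantization pipeline.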