Closed: kingsword5566 closed this issue 3 years ago
Hi,
Thank you for your interest in our work. We will start by open-sourcing the Librispeech (100h) setup. Others will follow later. Following is the rough plan:
Awesome! Actually, I also trained a Transformer pre-trained model called TERA (by the same authors as MockingJay). Any plan to adapt it to another e2e structure (e.g., RNN-T/Transformer)? Can't wait!
For the moment, we will release flat-start LF-MMI training code. In the future, we plan to explore Transducers as well (most likely Transformer-Transducers).
Hi, any progress? Hope everything is going well! BTW, I found what may be a typo in your 2020 paper:
3.2.1. Librispeech (100 hours)
...Table 1. For decoding, we use the model that achieves lowest WER on the dev-clean set...
According to your pkwrap paper, I think the results in Table 1 are WERs on the test-clean set?
Hi,
Thank you for pointing it out. We meant that we use the dev-clean (development) set to select the best model, which is then used to decode the test data. The results in Table 1 are on the test-clean and test-other portions.
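For clarity, the selection protocol described above can be sketched as follows; this is a minimal illustration, and the checkpoint names and WER values are made up, not taken from the paper:

```python
def select_checkpoint(dev_wers):
    """Return the checkpoint with the lowest WER (%) on the dev-clean set.

    dev_wers: dict mapping checkpoint name -> dev-clean WER.
    """
    return min(dev_wers, key=dev_wers.get)


# Illustrative numbers only: dev-clean WERs for a few saved checkpoints.
dev_wers = {"epoch10": 8.4, "epoch20": 7.1, "epoch30": 7.3}

# The selected checkpoint is the one later decoded on test-clean/test-other,
# and those test-set WERs are what a results table would report.
best = select_checkpoint(dev_wers)
print(best)
```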
We have now cleaned up the code and requested approval for open-sourcing. There are some formalities, which should hopefully be done within another week.
Thank you for waiting.
Best, Apoorv
Hi,
The scripts to reproduce the Librispeech (100h) experiment are now available. We have set up the experiment as a new repository, with its own dependencies, to support future experiments as well. Following is the link:
https://github.com/idiap/apam/
I will close this issue. Feel free to open a new issue on the other repository.
Thanks, Apoorv
Hi, I'm very interested in your new work "Lattice-Free MMI Adaptation of Self-Supervised Pretrained Acoustic Models". When will you open-source the code? Or is there a development branch we can access?