A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
MIT License · 372 stars · 66 forks
Does this repo support "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition"? #34
Thank you for publishing this interesting repo! I am very interested in your work "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition", can I reproduce your results using this repo?
Thanks for your interest in our work. I have no plans to support LASO in this repo. We will soon release baseline code for a non-autoregressive Transformer (https://github.com/ZhengkunTian/Speech-NART).