ZhengkunTian / OpenTransformer

A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
MIT License

Does this repo support "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition"? #34

Closed houwenxin closed 3 years ago

houwenxin commented 3 years ago

Thank you for publishing this interesting repo! I am very interested in your work "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition". Can I reproduce your results using this repo?

ZhengkunTian commented 3 years ago

Thanks for your interest in our work. I have no plans to support LASO in this repo. We will soon release baseline code for a non-autoregressive transformer (https://github.com/ZhengkunTian/Speech-NART).