ZhengkunTian / OpenTransformer

A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
MIT License

long sentence performance #50

Open zhangxixi0904 opened 2 years ago

zhangxixi0904 commented 2 years ago

Hi, thanks for the great work. I encountered two problems and hope you can help.

  1. When testing with long audio, say 30 seconds or longer, many sentences and words are dropped from the prediction. I have adjusted the maximum length max_len and the length penalty, which helps a little but still not enough (a length-penalty sketch follows this list). Is it necessary to add some long samples to the training data? If I want to decode one-minute audio, do I have to train on one-minute audio?

  2. I noticed that you commented out the relative positional encoding in the decoder and set it to False by default (a sketch of what I mean is below). Did you find that it does not work in the decoder? Does the choice of positional encoding method have any relation to question 1?
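
For question 1, here is a minimal, hypothetical sketch of the kind of length normalization I mean (GNMT-style). It is not taken from this repository; the names `length_penalty`, `rescore`, and the `alpha` parameter are illustrative assumptions. The point is that dividing the summed log-probability by a length-dependent penalty keeps beam search from preferring short, truncated hypotheses.

```python
# Minimal, hypothetical sketch of GNMT-style length normalization for beam
# search; names (length_penalty, rescore, alpha) are illustrative and are
# not taken from this repository.

def length_penalty(length: int, alpha: float = 0.6) -> float:
    """GNMT length penalty: ((5 + length) / 6) ** alpha."""
    return ((5.0 + length) / 6.0) ** alpha


def rescore(hypotheses, alpha: float = 0.6):
    """hypotheses: list of (token_ids, sum_log_prob); returns best-first order."""
    return sorted(
        hypotheses,
        key=lambda h: h[1] / length_penalty(len(h[0]), alpha),
        reverse=True,
    )


if __name__ == "__main__":
    hyps = [
        ([1, 2, 3], -3.0),              # short hypothesis, higher raw score
        ([1, 2, 3, 4, 5, 6, 7], -3.5),  # longer hypothesis, lower raw score
    ]
    # Without normalization the short hypothesis wins; with alpha = 1.0 the
    # longer hypothesis is preferred, which counteracts truncated outputs.
    best_tokens, best_score = rescore(hyps, alpha=1.0)[0]
    print(len(best_tokens), best_score)
```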
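For question 2, to make concrete what I mean by relative positional encoding: a simplified sketch of an additive relative-position bias on the self-attention logits (in the spirit of Transformer-XL / T5-style schemes). This is only an illustration under my own assumptions about shapes and names, not this repository's implementation.

```python
import torch

def attention_with_relative_bias(q, k, v, rel_bias):
    """
    q, k, v:  (batch, heads, seq_len, head_dim)
    rel_bias: (heads, seq_len, seq_len) bias indexed by relative offset i - j
    """
    d = q.size(-1)
    # Content-based attention logits, scaled as in the standard Transformer.
    logits = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5
    # Relative positional information enters as an additive bias on the logits.
    weights = torch.softmax(logits + rel_bias, dim=-1)
    return torch.matmul(weights, v)

# Example shapes only.
q = k = v = torch.randn(1, 4, 10, 64)
bias = torch.zeros(4, 10, 10)
out = attention_with_relative_bias(q, k, v, bias)  # (1, 4, 10, 64)
```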