harvardnlp / seq2seq-attn

Sequence-to-sequence model with LSTM encoder/decoders and attention
http://nlp.seas.harvard.edu/code
MIT License

I got an idea.... #98

Open SeekPoint opened 7 years ago

SeekPoint commented 7 years ago

For a large dataset of about 10M QA pairs, would accuracy improve if we divided the dataset by sentence length, trained a separate model on each length bucket (maybe with different parameters such as RNN size or number of layers per model), and decoded each input with the matching model?

Any comments?
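For reference, a minimal sketch of the bucketing step being proposed (written in Python rather than this repo's Lua/Torch, and not part of seq2seq-attn): QA pairs are split into length bins so that each bin can drive its own training run. The `bucket_by_length` helper and the boundary values are hypothetical illustrations, chosen only to show the idea.

```python
from collections import defaultdict

def bucket_by_length(qa_pairs, boundaries=(10, 20, 40)):
    """Split (question, answer) pairs into buckets by question length.

    boundaries are upper bounds (in tokens) for each bucket; pairs longer
    than the last boundary fall into a final overflow bucket. These values
    are illustrative, not taken from seq2seq-attn.
    """
    buckets = defaultdict(list)
    for question, answer in qa_pairs:
        length = len(question.split())
        # Place the pair in the first bucket whose bound it fits under.
        for i, bound in enumerate(boundaries):
            if length <= bound:
                buckets[i].append((question, answer))
                break
        else:
            # Longer than every boundary: overflow bucket.
            buckets[len(boundaries)].append((question, answer))
    return buckets

# Each bucket could then be written out and fed to a separately configured
# training run, e.g. a smaller RNN for the short-sentence bucket.
pairs = [("how are you", "fine thanks"),
         ("what is the capital of france and why is it famous", "paris")]
for idx, items in sorted(bucket_by_length(pairs).items()):
    print(idx, len(items))
```

At decode time, the same boundaries would route each input sentence to the model trained on its bucket.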