dandelin / Dynamic-memory-networks-plus-Pytorch

Implementation of Dynamic memory networks plus in Pytorch
https://arxiv.org/abs/1603.01417

position encoding #2

Open yufengm opened 6 years ago

yufengm commented 6 years ago

Are we assuming the same length for each sentence in the position encoding?

https://github.com/dandelin/Dynamic-memory-networks-plus-Pytorch/blob/ad49955f907c03aade2f6c8ed13370ce7288d5a7/babi_main.py#L18

As shown above, each sentence encoding is divided by the same number, `elen - 1`.
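For reference, here is a minimal, vectorized sketch of the positional-encoding scheme used in the DMN+ paper, assuming all sentences share one fixed (padded) length `slen`; variable names follow the linked `babi_main.py`, but this is an illustration rather than the repo's exact code:

```python
import torch

def position_encoding(embedded_sentence):
    # embedded_sentence: (batch, n_sentences, slen, elen)
    # Assumes every sentence has been padded to the same length slen.
    batch, n_sent, slen, elen = embedded_sentence.size()

    # l[j][k] = (1 - j/(slen-1)) - (k/(elen-1)) * (1 - 2*j/(slen-1))
    j = torch.arange(slen, dtype=torch.float).unsqueeze(1) / (slen - 1)  # (slen, 1)
    k = torch.arange(elen, dtype=torch.float).unsqueeze(0) / (elen - 1)  # (1, elen)
    l = (1 - j) - k * (1 - 2 * j)                                        # (slen, elen)

    # Weight each token embedding by its positional factor and sum over tokens.
    weighted = embedded_sentence * l          # broadcasts over batch and sentence dims
    return weighted.sum(dim=2)                # (batch, n_sentences, elen)
```

Because `slen` and `elen` are taken from the tensor's shape, the same weight matrix `l` is applied to every sentence in the batch, which is why the question about a shared sentence length arises.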

AveryLiu commented 6 years ago

I suppose you could pad the variable-length sentences to a fixed length in the preprocessing step before feeding them into the model.
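A minimal sketch of what that preprocessing might look like; the helper name and `pad_id` are illustrative, not part of the repo:

```python
import torch

def pad_sentences(sentences, pad_id=0):
    # sentences: list of token-id lists, possibly of different lengths.
    # Pads every sentence to the length of the longest one so the
    # positional encoding sees a single, shared slen.
    max_len = max(len(s) for s in sentences)
    padded = [s + [pad_id] * (max_len - len(s)) for s in sentences]
    return torch.tensor(padded, dtype=torch.long)  # (n_sentences, max_len)
```

Note that with plain padding the pad positions still receive nonzero positional weights, so in practice one would typically mask or zero out the pad embeddings before summing.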