domluna / memn2n

End-To-End Memory Network using Tensorflow
MIT License

Position Encoding #20

Closed anonymous012345678 closed 7 years ago

anonymous012345678 commented 7 years ago

Hi domluna, how did you get the equation in position_encoding? It seems different from the one in the paper, unless I made a silly algebra mistake. Even so, is there an advantage to splitting the equation out the way you wrote it? Some sort of optimization?

domluna commented 7 years ago

It came from https://github.com/facebook/MemNN/blob/master/MemN2N-babi-matlab/build_model.m#L9. It's been a while, but I think it worked better than what was written in the paper. I'm not sure why they did it that way; they didn't explain it, afaik.
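For anyone comparing the two, here is a rough sketch of the form used in the linked Matlab file, as I read it (the function and variable names here are my own, not a verbatim copy of either codebase):

```python
import numpy as np

def position_encoding(sentence_size, embedding_size):
    """Position-encoding matrix in the style of build_model.m (sketch, not verbatim).

    For contrast, the paper (section 4.1) writes it as
        l_kj = (1 - j/J) - (k/d) * (1 - 2j/J),
    with J = sentence_size and d = embedding_size.
    """
    encoding = np.ones((embedding_size, sentence_size), dtype=np.float32)
    for k in range(1, embedding_size + 1):        # embedding dimension index
        for j in range(1, sentence_size + 1):     # word position index
            encoding[k - 1, j - 1] = (k - (embedding_size + 1) / 2) * \
                                     (j - (sentence_size + 1) / 2)
    encoding = 1 + 4 * encoding / embedding_size / sentence_size
    return np.transpose(encoding)                 # shape (sentence_size, embedding_size)
```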

domluna commented 7 years ago

If anything it was a slight improvement, I don't think it made much of a difference.

anonymous012345678 commented 7 years ago

Thank you.

kodakfu commented 6 years ago

When I implement both the equation from the paper and the one in the code, the results seem to share some common features.
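That matches what I'd expect: both expressions are of the form a + b·j + c·k + e·j·k, i.e. bilinear in the word position j and the embedding dimension k, so the two matrices end up looking similar. A small self-contained check (the sizes and names below are arbitrary choices of mine, just for illustration):

```python
import numpy as np

J, d = 10, 20                       # sentence length and embedding size (arbitrary)
j = np.arange(1, J + 1)[:, None]    # word positions 1..J
k = np.arange(1, d + 1)[None, :]    # embedding dimensions 1..d

# Formula as written in the paper (section 4.1):
paper = (1 - j / J) - (k / d) * (1 - 2 * j / J)

# Form used in the Matlab code / this repo, as I read it:
matlab = 1 + 4 * (k - (d + 1) / 2) * (j - (J + 1) / 2) / (d * J)

# Compare the two (J, d) matrices element-wise.
print(np.corrcoef(paper.ravel(), matlab.ravel())[0, 1])
print(np.abs(paper - matlab).max())
```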