Here is my setup:

Here's the printout from vectors:

Now these look similar, but aren't exactly the same. Encoding the sentences separately instead:

s1_enc = model.encode([s1])
s2_enc = model.encode([s2])

Now these are exactly the same. I'm wondering why there's a disparity between batching up the sentences and encoding them separately.
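For reference, a minimal sketch of the comparison being described, assuming `model` is an already-loaded InferSent encoder and that `s1` and `s2` hold the same sentence text (the actual setup code and vector printouts are elided above, so the example sentence and the magnitudes in the comments are illustrative):

```python
# Hypothetical repro of the comparison above; `model` is assumed to be an
# already-loaded InferSent model, and the sentence is made up for illustration.
import numpy as np

s1 = "A man is playing a guitar."
s2 = "A man is playing a guitar."  # same text as s1

# Batched: both sentences pass through the encoder in a single call.
batch = model.encode([s1, s2])

# Separate: one sentence per call.
s1_enc = model.encode([s1])
s2_enc = model.encode([s2])

print(np.array_equal(s1_enc, s2_enc))      # True: separate calls match exactly
print(np.array_equal(batch[0], batch[1]))  # may be False on the cuDNN path
print(np.abs(batch[0] - s1_enc[0]).max())  # typically tiny, e.g. ~1e-7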
Hi, I am not sure why that happens; I guess it comes from a numerical approximation somewhere. It may be linked to the cuDNN implementation of the LSTM, which makes the precision not perfect. I would suggest trying to reproduce this behaviour with a simple LSTM and a toy input, and asking for an explanation on the PyTorch forums.

(Closing this issue because it is probably not directly linked to InferSent.)
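Along the lines of that suggestion, a minimal toy repro might look like the sketch below. It is not from the thread: the layer sizes, sequence length, and seed are arbitrary, and it falls back to CPU when no GPU is present, although the cuDNN path being discussed only applies on CUDA.

```python
# Toy check: run one sequence through an LSTM alone and inside a larger
# batch, then compare the two outputs for that sequence.
import torch
import torch.nn as nn

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).to(device)

x = torch.randn(1, 5, 8, device=device)       # the sequence under test
filler = torch.randn(3, 5, 8, device=device)  # extra sequences to form a batch

with torch.no_grad():
    out_single, _ = lstm(x)                      # batch of 1
    out_batch, _ = lstm(torch.cat([x, filler]))  # same sequence in a batch of 4

# On CPU this is usually exactly 0; on the cuDNN path it can be a tiny
# nonzero value (~1e-7), which is the kind of drift discussed above.
print((out_single[0] - out_batch[0]).abs().max().item())
```

If the difference is nonzero only on GPU, that points at the cuDNN kernels rather than at InferSent itself.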