Hello, I am trying to reproduce your results for the ST-LN model (https://arxiv.org/pdf/1707.06320.pdf, first row of Table 2). Here is what I did:

1) I went to the layer-norm repo, downloaded the lngru_may13_1700000.npz file, and added the layer-norm function to Kiros's skip-thoughts repo, as explained in https://github.com/ryankiros/layer-norm (a numpy sketch of the function I added is below, after this list).
2) Then I followed Step 4 of https://github.com/ryankiros/skip-thoughts/tree/master/training, and I can now encode sentences with his model (see the second sketch below). I used a 20,000-word vocabulary that I had from my own skip-thought implementation, since I couldn't find the 20,000-word vocabulary Kiros used for his lngru_may13_1700000.npz model.
3) In SentEval, instead of `import skipthoughts`, I imported `tools`, following your SentEval/examples/skipthought.py file (see the third sketch below). When I run the experiments, I get noticeably different results from yours: sometimes worse, sometimes slightly better, depending on the benchmark.
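For reference, here is roughly the layer-norm function I added, written as a plain numpy sketch of the math rather than the actual Theano code from the layer-norm repo; the `gain`/`bias` parameter names are mine.

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    """Layer normalization over the feature (last) axis.

    Normalizes each row of x to zero mean and unit variance, then applies
    a learned per-feature gain and bias (vectors of the same dimensionality
    as the features).
    """
    mean = x.mean(axis=-1, keepdims=True)
    std = np.sqrt(x.var(axis=-1, keepdims=True) + eps)
    return gain * (x - mean) / std + bias
```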
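Second, this is how I encode sentences, assuming the tools.py interface from skip-thoughts/training as I understand it from the training README (paths to the model .npz, the dictionary, and the word2vec binary are set at the top of tools.py; the helper names are as I remember them):

```python
import tools  # skip-thoughts/training/tools.py

# word2vec embeddings, used for the vocabulary-expansion step (Step 4)
embed_map = tools.load_googlenews_vectors()
model = tools.load_model(embed_map)

sentences = ['this is a test sentence', 'here is another one']
vectors = tools.encode(model, sentences)  # array of shape (n_sentences, dim)
print(vectors.shape)
```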
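And third, this is roughly how I plug the encoder into SentEval, replacing `import skipthoughts` with `import tools`. The `senteval.engine.SE` entry point and the batcher/prepare signatures follow the pattern in your examples; `PATH_TO_DATA` and the task list are just placeholders on my side:

```python
import senteval
import tools

def prepare(params, samples):
    # nothing to precompute for a fixed skip-thought encoder
    return

def batcher(params, batch):
    # SentEval hands over tokenized sentences; join them back into strings
    batch = [' '.join(sent) if sent != [] else '.' for sent in batch]
    return tools.encode(params['encoder'], batch, verbose=False)

params_senteval = {'task_path': 'PATH_TO_DATA', 'usepytorch': True, 'kfold': 10}
params_senteval['encoder'] = model  # model loaded as in the previous sketch

se = senteval.engine.SE(params_senteval, batcher, prepare)
results = se.eval(['MR', 'CR', 'SUBJ', 'MPQA', 'TREC', 'MRPC', 'SICKEntailment'])
```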
Could you explain how you obtained your scores? Which vocabulary did you use? Is there any special trick I am not aware of that I didn't mention above?
Thanks a lot, it would be a big help :)