Thank you for your implementation, it is very helpful for me.
I ran this code and got a similar result when the number of heads equals 1. However, I cannot reproduce the original paper's result (73.6/82.7) when I use 8 heads, batch size 32, 150k training steps, and a char dimension of 200 (the same settings as the original paper). I only get around 71.27/80.58.
The same thing occurred when I ran the PyTorch repo (https://github.com/andy840314/QANet-pytorch-).
Any suggestions?