localminimum / QANet

A Tensorflow implementation of QANet for machine reading comprehension
MIT License

TODOs #13

Open ghost opened 6 years ago

ghost commented 6 years ago

This is an umbrella issue where we can collectively tackle some problems and improve the general quality of open-source reading comprehension.

Goal

The network is already there. We just need to add more features on top of the current model.

Model

Data

Contributions to any of these items are welcome; please comment on this issue and let us know if you want to work on one of these problems.

ghost commented 6 years ago

As of f0c79cc93dc1dfdad2bc8abb712a53d078814a56, I have moved the dropouts from "before" layer norm to "after" layer norm. It doesn't make sense to drop input channels to layer norm, since layer norm normalises across the channel dimension; dropping its inputs would cause a mismatch between the distributions seen at training time and inference time. We shall see how this improves the model.
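For reference, a minimal sketch of the revised ordering (normalise first, then drop), assuming TF 1.x and tf.contrib.layers.layer_norm; the function name and arguments below are illustrative rather than the repo's exact code:

```python
import tensorflow as tf

def norm_then_dropout(x, keep_prob, scope="layer_norm"):
    # Layer norm computes its statistics over the channel (last) dimension,
    # so dropout is applied to its output rather than its input. This keeps
    # the normalisation statistics consistent between training and inference.
    normed = tf.contrib.layers.layer_norm(x, scope=scope)
    return tf.nn.dropout(normed, keep_prob=keep_prob)
```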

alphamupsiomega commented 6 years ago

To overcome your GPU memory constraints, what about just decreasing batch size?

On a 1080 Ti (11 GB), I'm able to run 128 hidden units, 8 attention heads, 300 glove_dim, and 300 char_dim with a batch size of 12. At batch size 16 and above, CUDA runs out of memory. Accuracy seems comparable so far.

ghost commented 6 years ago

You have a valid point, and I would like to know how your experiment goes. I would also suggest trying group norm instead of layer norm, as the group norm paper reports better performance at smaller batch sizes.
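For anyone who wants to try it, here is a minimal group-norm sketch for [batch, length, channels] activations; it assumes static length and channel dimensions, and the function name and number of groups are illustrative, not part of the repo:

```python
import tensorflow as tf

def group_norm(x, groups=8, eps=1e-5, scope="group_norm"):
    # Split the channels into groups and normalise within each group. Unlike
    # batch norm, the statistics don't depend on the batch dimension, which is
    # why the group-norm paper reports better behaviour at small batch sizes.
    with tf.variable_scope(scope):
        length, channels = x.get_shape().as_list()[1:]  # assumes static shapes
        x = tf.reshape(x, [-1, length, groups, channels // groups])
        mean, var = tf.nn.moments(x, [1, 3], keep_dims=True)
        x = (x - mean) / tf.sqrt(var + eps)
        x = tf.reshape(x, [-1, length, channels])
        gamma = tf.get_variable("gamma", [channels], initializer=tf.ones_initializer())
        beta = tf.get_variable("beta", [channels], initializer=tf.zeros_initializer())
        return x * gamma + beta
```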

alphamupsiomega commented 6 years ago

Good suggestion, Min. Since the paper compares against batch norm, have you found that layer norm generally outperforms batch norm lately? One could also try batch norm for comparison. Interestingly, the 'break-even' point between batch norm and group norm is around batch size 12 under that paper's conditions. Layer norm is supposedly more robust to small mini-batches than batch norm.

Also, the settings from the comment above run fine on a 1070 GPU.

Do you have a sense of whether model parallelization across multiple GPUs is worth it for this type of model?

localminimum commented 6 years ago

Hi @mikalyoung, I haven't tried parallelisation across multiple GPUs, so I wouldn't know the best way to go about it. I've heard that data parallelism is easier to get working than model parallelism. It seems from #15 that a bigger hidden size and a larger number of attention heads improve performance, so I would try fitting the bigger model with smaller batches onto multiple GPUs.
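Not from this repo, but a rough sketch of the data-parallel pattern under TF 1.x: split each batch across GPUs, share variables across towers, and average the tower losses before a single optimiser step. model_fn and batch_splits are placeholders for whatever builds the model and slices the batch:

```python
import tensorflow as tf

def data_parallel_loss(model_fn, batch_splits, num_gpus=2):
    # model_fn(split) builds the graph for one slice of the batch and returns
    # a scalar loss; variables are shared across towers via reuse.
    tower_losses = []
    for i in range(num_gpus):
        with tf.device("/gpu:%d" % i):
            with tf.variable_scope("model", reuse=(i > 0)):
                tower_losses.append(model_fn(batch_splits[i]))
    # Averaging the per-tower losses means gradients flow back to the shared
    # variables from all towers, giving an effective batch of num_gpus slices.
    return tf.reduce_mean(tf.stack(tower_losses))
```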

JACKHAHA363 commented 5 years ago

What is the current status of reproducing the paper's results?