yicheng-w / CommonSenseMultiHopQA

Code for EMNLP 2018 paper "Commonsense for Generative Multi-Hop Question Answering Tasks"
MIT License

Why use learned embeddings instead of GloVe or word2vec? #7

Open DbrRoxane opened 5 years ago

DbrRoxane commented 5 years ago

Hi,

I was wondering why you decided to use your own learned embeddings instead of pretrained GloVe embeddings. I can see that GloveVocab can be used in place of GenModelVocab, but I would appreciate a theoretical explanation for the choice, please.

Have a nice day,
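
For readers landing on this issue: the difference being asked about can be sketched roughly as below. This is a minimal illustration, not the repo's actual code; the `GloveVocab` / `GenModelVocab` classes mentioned above presumably differ in detail, and the tiny vocabulary, dimension, and stand-in GloVe vectors here are made up for the example.

```python
import numpy as np

# Toy vocabulary and embedding dimension (illustration only).
vocab = {"<unk>": 0, "question": 1, "answer": 2}
dim = 4

# Option A: learned embeddings (roughly what GenModelVocab implies):
# randomly initialized, then updated by backprop along with the model.
rng = np.random.default_rng(0)
learned_table = rng.normal(scale=0.1, size=(len(vocab), dim))

# Option B: pretrained embeddings (roughly what GloveVocab implies):
# rows copied from a GloVe vector file and often kept frozen.
# Stand-in vectors here; real GloVe vectors would be loaded from disk.
pretrained = {"question": np.ones(dim), "answer": -np.ones(dim)}
glove_table = np.zeros((len(vocab), dim))
for word, idx in vocab.items():
    if word in pretrained:
        glove_table[idx] = pretrained[word]

def embed(tokens, table):
    """Look up each token's row, mapping unknown words to <unk>."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    return table[ids]

vecs = embed(["question", "answer"], glove_table)
print(vecs.shape)  # (2, 4)
```

The usual trade-off: learned embeddings are tuned to the task's training data but start from scratch, while GloVe rows carry distributional knowledge from a large corpus, which can help when task data is small.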