Hi,
When reading your code, I found that there is no part in KGQA/LSTM where you load a transformer and use pre-trained sentence embeddings, as you do in KGQA/RoBERTa. Is that true, and why?
Best,
Shuang

In the LSTM model, we don't use any text pretraining. This is because MetaQA has a large number of questions, and the model can learn word embeddings from those questions.
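
To make the difference concrete, here is a minimal sketch of what the LSTM side looks like, assuming PyTorch; the class name, hyperparameters, and layer choices below are illustrative, not the repo's actual code. The point is that the embedding table is randomly initialized and trained end-to-end, so there is no transformer to load in this code path.

```python
import torch
import torch.nn as nn

class LSTMQuestionEncoder(nn.Module):
    """Illustrative question encoder: embeddings learned from scratch."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=200):
        super().__init__()
        # Randomly initialized embedding table, trained end-to-end on the
        # MetaQA questions -- no pre-trained text embeddings involved.
        self.word_embeddings = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, question_token_ids):
        # question_token_ids: (batch, seq_len) integer ids from a vocabulary
        # built over the MetaQA questions themselves.
        embedded = self.word_embeddings(question_token_ids)
        outputs, (h_n, c_n) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states
        # as the question representation: (batch, 2 * hidden_dim).
        return torch.cat([h_n[0], h_n[1]], dim=-1)

# Example usage with made-up sizes:
# encoder = LSTMQuestionEncoder(vocab_size=10000)
# q = encoder(torch.randint(0, 10000, (4, 12)))  # -> shape (4, 400)
```

Since the embedding table is an ordinary model parameter, it is learned directly from the MetaQA questions during training, whereas KGQA/RoBERTa instead loads a pre-trained transformer to encode the question.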