The architecture is as follows.
So far I have trained only on the Apache Ant dataset, using Stanford's pretrained GloVe embeddings, and the model performed fairly well. I still have to train on the other datasets.
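As a minimal sketch of the GloVe setup (the file name, vocabulary, and helper names below are hypothetical, not from this repo): pretrained vectors are parsed from the GloVe text format and copied into an embedding matrix that can initialize the LSTM's embedding layer, with out-of-vocabulary words left as zero rows.

```python
import numpy as np

def load_glove(lines):
    """Parse GloVe-format lines ("<word> <d floats>") into {word: vector}."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def build_embedding_matrix(vocab, vectors, dim):
    """Row i holds the GloVe vector for vocab word i; OOV words stay zero."""
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for i, word in enumerate(vocab):
        if word in vectors:
            matrix[i] = vectors[word]
    return matrix

# Tiny in-memory example standing in for a real file like glove.6B.50d.txt
glove_lines = ["build 0.1 0.2 0.3", "error 0.4 0.5 0.6"]
vectors = load_glove(glove_lines)
matrix = build_embedding_matrix(["build", "error", "unseenword"], vectors, dim=3)
```

The resulting matrix would then be passed as the (frozen or trainable) weights of the embedding layer before the LSTM.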
Secondly, I am also trying a CNN (from the paper) using word2vec embeddings; that work is in progress, and I will post an update by tomorrow.
I will push the LSTM code for the Apache Ant dataset.