For 'get_conll_embeddings.py', it says I have to input the locations of train, test_a, test_b, and use_model.
train, test_a, and test_b are the txt files that I can copy from the data folder, correct?
And use_model is the pkl file that I created with word2vec or another embedding model, correct?
If that's the case, how am I actually training the model on the corpus that I ingested at the start?
Or does that mean I have to make my own train, test_a, and test_b datasets?
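To make my assumptions concrete, here is a minimal sketch of what I think the four inputs are before I pass them to the script. The file paths are my own, and the idea that use_model.pkl is just my pickled embedding model (trained on my own corpus) is an assumption, not something I found in the docs:

```python
import pickle
from pathlib import Path

# Paths I'm planning to pass to get_conll_embeddings.py
# (names are mine; the txt files are straight copies from the data folder)
train_path = Path("data/train.txt")
test_a_path = Path("data/test_a.txt")
test_b_path = Path("data/test_b.txt")
use_model_path = Path("models/use_model.pkl")

# Sanity-check that all four inputs exist before running the script
for p in (train_path, test_a_path, test_b_path, use_model_path):
    print(p, "exists:", p.exists())

# My assumption: use_model.pkl is simply the pickled embedding model
# (word2vec or similar) that I trained earlier on my ingested corpus.
with open(use_model_path, "rb") as f:
    embedding_model = pickle.load(f)
print("loaded embedding model of type:", type(embedding_model).__name__)
```

Is that roughly the right mental model, or does the script expect something different for use_model?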