esskay0000 opened this issue 5 years ago
You may read section 5.2 of the paper and the script evaluation.py to understand how the F-scores are computed on the test set.
Thanks for the response. But the question was how to get the predicted label (predict_label) for each observation text in the test file. To be clear, this predict_label is compared with the true_label to arrive at the F-score and the confusion matrix. I am able to replicate the F-score generation and output, but I cannot see which texts are misclassified in order to interpret the results. Please suggest how to get the predicted labels for each text in test.txt.
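A minimal sketch of what such a labelling step could look like, assuming the model exposes a `predict`-style call and that test.txt stores one `label<TAB>text` pair per line (both are assumptions; the actual API and file layout in this repo may differ, so adapt the names to whatever evaluation.py uses):

```python
import csv

def dump_predictions(model, test_path="test.txt", out_path="predictions.tsv"):
    """Write true label, predicted label, and text for every test example.

    `model.predict` is a placeholder for however this repo obtains
    predictions; swap in the call used inside evaluation.py.
    """
    with open(test_path, encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        writer = csv.writer(fout, delimiter="\t")
        writer.writerow(["true_label", "predict_label", "misclassified", "text"])
        for line in fin:
            # Assumed format: label<TAB>text — adjust the split to the real file.
            true_label, text = line.rstrip("\n").split("\t", 1)
            predict_label = model.predict([text])[0]  # placeholder API
            writer.writerow([true_label, predict_label,
                             int(predict_label != true_label), text])
```

Filtering the resulting TSV on the `misclassified` column then shows exactly which texts were mislabelled.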
The code runs fine and the results in Table 6 of the paper are reproduced, though with slight variance. But is there a script to label (possibly using predict_label) the individual texts in the test dataset? If not, how does the evaluation script arrive at the F-scores?
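For illustration only: once per-example predictions exist, the F-scores and confusion matrix follow directly from the paired label lists. evaluation.py may compute them with its own code rather than scikit-learn; this sketch just shows the general relationship.

```python
from sklearn.metrics import f1_score, confusion_matrix

def report(true_labels, predicted_labels):
    """Summarise per-example predictions as macro/micro F1 and a confusion matrix."""
    print("macro F1:", f1_score(true_labels, predicted_labels, average="macro"))
    print("micro F1:", f1_score(true_labels, predicted_labels, average="micro"))
    print(confusion_matrix(true_labels, predicted_labels))
```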