Closed: savasy closed this issue 2 years ago
During inference, we do not generate the prediction directly. Instead, we treat the model as a scoring function. Given a sentence, we first enumerate all possible text spans in the input sentence as named entity candidates, and then classify each span as an entity or a non-entity based on the BART scores of the filled-in templates.
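To make that procedure concrete, here is a minimal sketch of the span enumeration and template-scoring steps. The function names (`enumerate_spans`, `classify_span`) and the exact template wording are illustrative, not the repository's API; `score_template` stands in for the BART scoring call.

```python
def enumerate_spans(tokens, max_len=4):
    """All contiguous spans of up to max_len tokens, as candidate entities."""
    spans = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            spans.append(" ".join(tokens[i:j]))
    return spans

def classify_span(span, score_template):
    """Fill each template with the span and keep the highest-scoring label.
    'O' marks a non-entity; the template texts here are only an example."""
    templates = {
        "PER": f"{span} is a person entity.",
        "LOC": f"{span} is a location entity.",
        "ORG": f"{span} is an organization entity.",
        "O":   f"{span} is not a named entity.",
    }
    return max(templates, key=lambda label: score_template(templates[label]))
```

With a real model, `score_template` would return the (length-normalized) BART log-likelihood of the template sentence; here any callable returning a float works.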
Thank you @Nealcly. My problem is that I cannot compute the detailed performance metrics (F1, precision, etc.) for label sets other than PER, LOC, ORG. I managed to train a model with 9 labels and got good accuracy, by the way. When I use your Inference.py code to run the model trained with 9 labels, I get some errors. Maybe I should share that in a separate issue thread.
Hi, the Seq2SeqModel.predict function predicts one entity at a time, e.g.
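A minimal sketch of what that per-entity loop might look like, assuming one model call per candidate span. `predict_entity` is a hypothetical stand-in for the `Seq2SeqModel.predict` call, and `"O"` marks a non-entity; both are assumptions for illustration.

```python
def extract_entities(candidate_spans, predict_entity):
    """Call the model once per candidate span, keeping only predicted entities."""
    entities = []
    for span in candidate_spans:
        label = predict_entity(span)  # one entity prediction per call
        if label != "O":              # drop non-entity candidates
            entities.append((span, label))
    return entities
```

In practice `predict_entity` would wrap the trained model's predict call on the template sentence for that span.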