yangheng95 / PyABSA

Sentiment Analysis, Text Classification, Text Augmentation, Text Adversarial Defense, etc.
https://pyabsa.readthedocs.io
MIT License

Regarding evaluation #222

Closed Astudnew closed 2 years ago

Astudnew commented 2 years ago

Hello, thank you for sharing this work. Regarding the evaluation of the LCF-ATEPC model: why do you use only F1 and accuracy, and not class-wise performance, i.e. class-wise precision, recall, and F1 score? Can we use these metrics with LCF-ATEPC? Thanks

yangheng95 commented 2 years ago

You can try setting config.show_metric=True
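Since show_metric is a config option, it is set on the config object before training. A minimal sketch, assuming the PyABSA v1-style functional API (exact module paths, model names, and dataset names may differ between versions):

```python
# Minimal sketch -- assumes the PyABSA v1-style functional API;
# names may differ in other versions of the library.
from pyabsa.functional import ATEPCModelList, ATEPCConfigManager, ABSADatasetList, Trainer

config = ATEPCConfigManager.get_atepc_config_english()
config.model = ATEPCModelList.LCF_ATEPC
config.show_metric = True  # report class-wise precision/recall/F1 during evaluation

Trainer(
    config=config,
    dataset=ABSADatasetList.Laptop14,  # or the path to your own dataset
    checkpoint_save_mode=1,            # save fine-tuned checkpoints
    auto_device=True,                  # use a GPU automatically if available
)
```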

Astudnew commented 2 years ago

Thanks, but are these metrics (class-wise precision, recall, and F1 score) important for evaluating LCF-ATEPC performance (for ATE and APC)? Or are F1 and accuracy sufficient, as in your paper (A multi-task learning model for Chinese-oriented aspect polarity classification and aspect term extraction)?

yangheng95 commented 2 years ago

Actually, it depends on what you consider important.
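For completeness, class-wise precision, recall, and F1 can also be computed outside PyABSA from gold and predicted labels, for example with scikit-learn. A minimal sketch with made-up labels (in practice these would come from the model's evaluation output):

```python
from sklearn.metrics import classification_report

# Hypothetical gold and predicted polarity labels for the APC sub-task;
# replace with the actual labels produced during evaluation.
gold = ["positive", "negative", "neutral", "positive", "negative"]
pred = ["positive", "negative", "positive", "positive", "neutral"]

# Prints per-class precision, recall, and F1, plus macro/weighted averages.
print(classification_report(gold, pred, digits=4))
```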