kevinlu1248 / pyate

PYthon Automated Term Extraction
https://kevinlu1248.github.io/pyate/
MIT License

Precision is not as good as atr4s? #28

Closed · IshanUp closed this 3 years ago

IshanUp commented 3 years ago

I ran this on the ACL RD-TEC 2.0 corpus and got a precision of around 50%, which is not as good as atr4s, which reports around 70% precision on the same corpus. I used combo_basic.
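For context, a minimal sketch of the kind of evaluation described above: run pyate's combo_basic over the corpus text and measure precision of the top-ranked candidates against a gold term list. The file paths and the cutoff k are assumptions, not part of the original report.

```python
# Hypothetical sketch of the evaluation described above.
# Assumptions: a plain-text dump of the corpus and a one-term-per-line gold list.
from pyate import combo_basic

with open("acl_rd_tec_text.txt") as f:      # hypothetical corpus dump
    text = f.read()
with open("gold_terms.txt") as f:           # hypothetical gold term list
    gold = {line.strip().lower() for line in f if line.strip()}

# combo_basic returns a pandas Series of candidate terms with scores.
ranked = combo_basic(text).sort_values(ascending=False)

k = 500                                     # arbitrary cutoff for precision@k
top_k = [term.lower() for term in ranked.index[:k]]
precision_at_k = sum(term in gold for term in top_k) / k
print(f"precision@{k}: {precision_at_k:.3f}")
```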

kevinlu1248 commented 3 years ago

Thanks for sending this issue. Can you send the code you used to test this?

IshanUp commented 3 years ago

Actually, on looking further into this, I realized that this is the formula that atr4s uses. It uses Average Precision and defines it as shown in the attached image. The problem is that, although we do get a ranking in our pyate predictions, there is no ranking in the gold data. I do not know how they actually went about calculating average precision here. The formula uses recall at level i, but to compute that we would need a ranked gold label list as well (which ACL RD-TEC 2.0 does not provide).
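For reference, a common definition of average precision over a ranked list, consistent with the "recall at level i" mentioned above (this is the standard form, not necessarily the exact formula from the atr4s image):

```latex
% Average precision over a ranked list of n candidates:
% P(i) is precision over the top-i candidates,
% r(i) is recall over the top-i candidates, with r(0) = 0.
\[
  \mathrm{AvP} \;=\; \sum_{i=1}^{n} P(i)\,\bigl(r(i) - r(i-1)\bigr)
\]
```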

kevinlu1248 commented 3 years ago

I see. Thanks for doing the research and sharing. It looks like there is nothing for us to do regarding this issue, so I will close it for now. Please let me know if there are any updates.