Hi, I have a question about the metrics in verisci/evaluate/abstract_retrieval.py. Does "hit one" mean that the prediction contains at least one true document id?
Hi, we didn't end up using "hit one" in our final evaluations, so I forget the details. But from what I recall, "hit one" means that the top-ranked retrieval is a correct prediction, i.e. the first document returned is one of the gold documents, rather than a relevant document appearing anywhere in the ranked list.
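To make the distinction concrete, here's a minimal sketch of the two readings. This is my paraphrase, not the actual code in verisci/evaluate/abstract_retrieval.py; the function names `hit_one` and `hit_any` are hypothetical.

```python
def hit_one(ranked_doc_ids, gold_doc_ids):
    """'Hit one' as I recall it: the top-ranked document is relevant."""
    return bool(ranked_doc_ids) and ranked_doc_ids[0] in gold_doc_ids

def hit_any(ranked_doc_ids, gold_doc_ids):
    """The reading from your question: at least one true id anywhere
    in the prediction."""
    return any(doc_id in gold_doc_ids for doc_id in ranked_doc_ids)

# Example: gold document is 42, but the prediction ranks doc 7 first.
print(hit_one([7, 42, 13], {42}))  # False: top-ranked doc is not relevant
print(hit_any([7, 42, 13], {42}))  # True: a relevant doc appears somewhere
```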
Thank you very much.