MikeDoes / thesis


Evaluation Functions #4

Open MikeDoes opened 2 years ago

MikeDoes commented 2 years ago

Each metric should be a simple function that evaluates based on two inputs (predictions and gold annotations). Clarify how other repositories have implemented these, and review the datasets and evaluations. The functions should be named:

evaluate_precision
evaluate_recall
evaluate_f1
evaluate_auc
evaluate_all_metrics()
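A minimal sketch of how these five functions could fit together, assuming binary gold labels and predictions aligned item-for-item (the linked repositories score OIE tuples with more elaborate matching; this only illustrates the proposed interface, and the internals are an assumption, not the thesis implementation):

```python
def evaluate_precision(gold, pred):
    """Fraction of predicted positives that are correct."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    return tp / (tp + fp) if tp + fp else 0.0

def evaluate_recall(gold, pred):
    """Fraction of gold positives that were predicted."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    return tp / (tp + fn) if tp + fn else 0.0

def evaluate_f1(gold, pred):
    """Harmonic mean of precision and recall."""
    p, r = evaluate_precision(gold, pred), evaluate_recall(gold, pred)
    return 2 * p * r / (p + r) if p + r else 0.0

def evaluate_auc(gold, scores):
    """ROC AUC via pairwise comparison: the probability that a
    random positive outscores a random negative (ties count half)."""
    pos = [s for s, g in zip(scores, gold) if g == 1]
    neg = [s for s, g in zip(scores, gold) if g == 0]
    if not pos or not neg:
        return 0.0
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def evaluate_all_metrics(gold, pred, scores=None):
    """Bundle all metrics into one dict; AUC needs continuous scores."""
    metrics = {
        "precision": evaluate_precision(gold, pred),
        "recall": evaluate_recall(gold, pred),
        "f1": evaluate_f1(gold, pred),
    }
    if scores is not None:
        metrics["auc"] = evaluate_auc(gold, scores)
    return metrics
```

The pairwise AUC is O(n²); a production version would sort once and use rank sums, or just call `sklearn.metrics.roc_auc_score`.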

Other implementations:

WiRe57: https://github.com/rali-udem/WiRe57/blob/master/code/wire_scorer.py
Supervised OIE2016: https://github.com/MikeDoes/supervised-oie/blob/master/supervised-oie-benchmark/benchmark.py
Re-OIE2016: https://github.com/zhanjunlang/Span_OIE/blob/master/evaluate/evaluate.py
LSOIE: https://github.com/Jacobsolawetz/large-scale-oie/blob/9c135f423072b947ec6a030f7b01e4b2f3d69b7d/oie-evaluation/benchmark.py