coderlsb opened 5 days ago
Hi! Thanks for your attention to our work.
Sorry, we have identified an existing bug where AUC may not be computed properly (#36). We will fix it today.
Once this issue is solved, you can update IMDL-BenCo with pip. Then copy a train.py and change this line to the evaluator you expect.
Solved with PR #38.
You can follow the instructions to switch to the evaluators you want, using the names listed here:
https://github.com/scu-zjz/IMDLBenCo/blob/main/IMDLBenCo/evaluation/__init__.py
If you want to define new evaluators, please subclass AbstractEvaluator from the following file:
https://github.com/scu-zjz/IMDLBenCo/blob/main/IMDLBenCo/evaluation/abstract_class.py
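To illustrate the subclassing pattern, here is a minimal self-contained sketch. The `AbstractEvaluator` below is a stand-in written for this example, and the hook names (`batch_update`, `epoch_update`) and the `PixelAccuracy` evaluator are assumptions, not the real IMDL-BenCo API; check abstract_class.py for the actual method signatures before copying this.

```python
from abc import ABC, abstractmethod

import numpy as np


# Stand-in for IMDLBenCo's AbstractEvaluator, only to show the pattern.
# The real abstract class (and its method names) lives in abstract_class.py.
class AbstractEvaluator(ABC):
    @abstractmethod
    def batch_update(self, predict, mask):
        """Accumulate statistics from one batch of predictions and masks."""

    @abstractmethod
    def epoch_update(self):
        """Return the metric aggregated over the whole epoch."""


class PixelAccuracy(AbstractEvaluator):
    """Hypothetical custom evaluator: mean pixel accuracy over an epoch."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.correct = 0
        self.total = 0

    def batch_update(self, predict, mask):
        # Binarize the score map, then count matching pixels.
        pred = (predict >= self.threshold).astype(np.uint8)
        self.correct += int((pred == mask).sum())
        self.total += int(mask.size)

    def epoch_update(self):
        return self.correct / max(self.total, 1)
```

The point is that per-batch accumulation and end-of-epoch aggregation are split into separate hooks, so the evaluator can stream over a large test set without holding all predictions in memory.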
This guidance will be added to the documentation soon. Sincere thanks for pointing out the issue.
Thank you for your reply! I introduced PixelAUC in tester.py through `from IMDLBenCo.evaluation.AUC import PixelAUC` and added the following code to the test_one_epoch function.
I tested the trained model. Are the following results reliable?
Hi, thanks for your feedback.
> I introduced PixelAUC in tester.py through `from IMDLBenCo.evaluation.AUC import PixelAUC`

If you have updated IMDL-BenCo with pip to the latest version, there is a shorter way to import the `PixelAUC` class: `from IMDLBenCo.evaluation import PixelAUC`
> and added the following code to the test_one_epoch function

I believe the best place to insert the evaluator is here, not inside the test_one_epoch function. Please check:
https://github.com/scu-zjz/IMDLBenCo/blob/7d5edae4f01757ba75ec52ccbdf4d588b1063c35/IMDLBenCo/training_scripts/test.py#L141-L144
For reliability, we have tested all evaluator classes with test functions like this one: https://github.com/scu-zjz/IMDLBenCo/blob/7d5edae4f01757ba75ec52ccbdf4d588b1063c35/IMDLBenCo/evaluation/AUC.py#L162
Honestly, if you are writing research papers, I recommend doing a simple double-check against the standard metrics in sklearn on your test cases for the best reliability. Since this project is in early development, we do our best to keep the details reliable, but we may have overlooked corner cases. Sorry for the inconvenience, and your rigor contributes to the entire academic community. Also, feel free to share the corresponding logs and metrics to help us locate the issue. We will look into it as soon as possible.
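A double-check against sklearn can be as simple as flattening the prediction and mask arrays and calling `roc_auc_score`. The arrays below are synthetic placeholders; in practice you would substitute your model's score map and the ground-truth mask, and compare the result against what IMDL-BenCo's PixelAUC reports on the same inputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for a ground-truth binary mask and a score map.
mask = rng.integers(0, 2, size=(256, 256))
predict = np.clip(mask * 0.4 + rng.random((256, 256)) * 0.6, 0.0, 1.0)

# Pixel-level AUC: sklearn treats every pixel as one sample, so flatten
# both arrays to 1-D before scoring.
reference_auc = roc_auc_score(mask.ravel(), predict.ravel())
print(f"sklearn pixel AUC: {reference_auc:.4f}")
```

If the library's PixelAUC differs noticeably from `roc_auc_score` on the same flattened inputs, that discrepancy (with logs) is exactly what is worth reporting in the issue tracker.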
Thanks again for trying out IMDL-BenCo. If you have further issues or questions, please feel free to discuss them here.
How to add the evaluation metrics you need when evaluating or training the model