HOMGH opened this issue 1 year ago
Hi, thanks for sharing your source code. In your experiments you used CurricularFace as the face recognition (FR) model to evaluate your method. I checked the CurricularFace evaluation code (https://github.com/HuangYG123/CurricularFace/blob/master/evaluate.py), but it seems to compute only accuracy. Could you please advise how you measured the other metrics, such as AUC and FNMR? Thanks.

We did not include the evaluation scripts we used in this repository. I suggest looking at the evaluation script in the MagFace repository (https://github.com/IrvingMeng/MagFace/blob/main/eval/eval_quality/eval_quality.py), which is also what we adapted for our paper. That code can easily be modified to work with any method or model.
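For reference, here is a minimal sketch of how AUC and FNMR could be computed from verification scores, independent of any particular FR model. This is not the code from the MagFace or CurricularFace repositories; it assumes you have already collected two NumPy arrays of cosine similarities, one for genuine (mated) pairs and one for impostor (non-mated) pairs, and it reports FNMR at a fixed false match rate (FMR) operating point:

```python
import numpy as np

def compute_auc(genuine, impostor):
    """ROC AUC as the probability that a genuine score exceeds an
    impostor score (Mann-Whitney U statistic), via pairwise comparison."""
    g = np.asarray(genuine, dtype=float)[:, None]
    i = np.asarray(impostor, dtype=float)[None, :]
    return float(np.mean(g > i) + 0.5 * np.mean(g == i))

def fnmr_at_fmr(genuine, impostor, target_fmr=1e-3):
    """Pick the decision threshold where the impostor (false match) rate
    reaches target_fmr, then return the fraction of genuine scores that
    fall below it (FNMR) together with that threshold."""
    imp = np.sort(np.asarray(impostor, dtype=float))[::-1]
    k = max(int(np.floor(target_fmr * len(imp))), 1)
    threshold = imp[k - 1]
    fnmr = float(np.mean(np.asarray(genuine, dtype=float) < threshold))
    return fnmr, threshold

if __name__ == "__main__":
    # Synthetic, well-separated score distributions for illustration only.
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.7, 0.1, 5000)
    impostor = rng.normal(0.2, 0.1, 5000)
    print("AUC:", compute_auc(genuine, impostor))
    print("FNMR @ FMR=1e-3:", fnmr_at_fmr(genuine, impostor))
```

The pairwise AUC formulation is exact but O(n·m); for very large pair sets you would sort scores and use a rank-based computation instead.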