cavalleria / cavaface

face recognition training project (pytorch)
MIT License

Pre-trained model performance #58

Closed. FeiMiBa closed this issue 3 years ago.

FeiMiBa commented 3 years ago

Hi, I tested the released GhostNet_x1.3 on LFW, CFP-FP, and AgeDB-30. It's odd that the model gets much better results on CFP-FP and AgeDB-30 than on LFW. Have you tried these test sets? What could explain the gap?

Thanks.

LFW: Acc 0.850 @ Threshold 0.461
CFP-FP: Acc 0.943 @ Threshold 0.179
AgeDB-30: Acc 0.973 @ Threshold 0.231

cavalleria commented 3 years ago

Here are the results I tested at Epoch 24/24. Evaluation: LFW Acc: 0.9975, CFP_FP Acc: 0.9620, AgeDB Acc: 0.9743, VGG2_FP Acc: 0.9410

FeiMiBa commented 3 years ago


Do you mind sharing your test sets? I need to double-check mine.

cavalleria commented 3 years ago

You can download them from https://github.com/ZhaoJ9014/face.evoLVe.PyTorch#data-zoo. You should also check your evaluation code.
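
For reference, the verification protocol on these sets boils down to sweeping a distance threshold over the labeled pairs and reporting the accuracy. Below is a minimal illustrative sketch assuming L2-normalized embeddings; the function name, threshold range, and random stand-in embeddings are mine, not the repo's evaluation code, and the standard LFW protocol additionally picks the threshold via 10-fold cross-validation:

```python
import numpy as np

def best_threshold_accuracy(dist, issame, thresholds=np.arange(0.0, 4.0, 0.01)):
    # dist:   (N,) distance between the two embeddings of each pair
    # issame: (N,) True if the pair shows the same identity
    # returns accuracy at the single best threshold (illustrative helper only)
    best_acc, best_thr = 0.0, 0.0
    for thr in thresholds:
        pred = dist < thr                      # "same identity" if distance below threshold
        acc = float(np.mean(pred == issame))
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_acc, best_thr

# stand-in embeddings so the snippet runs on its own; in practice they come
# from the network, one row per image of each pair
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(600, 512))
emb2 = rng.normal(size=(600, 512))
issame = rng.random(600) < 0.5

# L2-normalize, then measure Euclidean distance between the normalized embeddings
emb1 /= np.linalg.norm(emb1, axis=1, keepdims=True)
emb2 /= np.linalg.norm(emb2, axis=1, keepdims=True)
dist = np.linalg.norm(emb1 - emb2, axis=1)

acc, thr = best_threshold_accuracy(dist, issame)
print(f"Acc {acc:.3f} @ Threshold {thr:.3f}")
```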

FeiMiBa commented 3 years ago

It seems you use Euclidean distance rather than cosine distance for evaluation? I printed the best thresholds in your code and they are all greater than 1.

cavalleria commented 3 years ago


Normalized cosine similarity is equivalent to Euclidean distance: for L2-normalized embeddings, ||a - b||² = 2 - 2·cos(a, b), so thresholding on the distance is the same as thresholding on cosine similarity, and the best distance thresholds can legitimately be greater than 1.
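
A quick check of that identity, purely illustrative and not taken from the repo's code:

```python
import torch
import torch.nn.functional as F

# two random embeddings, L2-normalized to unit length
a = F.normalize(torch.randn(512), dim=0)
b = F.normalize(torch.randn(512), dim=0)

cos = torch.dot(a, b)              # cosine similarity, in [-1, 1]
sq_dist = torch.sum((a - b) ** 2)  # squared Euclidean distance, in [0, 4]

# for unit vectors: ||a - b||^2 = 2 - 2 * cos(a, b)
print(torch.allclose(sq_dist, 2 - 2 * cos, atol=1e-6))  # True

# so a cosine-similarity threshold t corresponds to a squared-distance
# threshold of 2 - 2*t, e.g. cos > 0.3  <=>  ||a - b||^2 < 1.4
```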

FeiMiBa commented 3 years ago

The gaps were:
- LFW: my dataset was cropped to 128x128, and the extra resizing step introduced large variance
- CFP-FP: noise in my dataset
- AgeDB-30: variance caused by rounding
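
On the LFW point, one way to keep preprocessing consistent is to resize exactly once, with the same interpolation everywhere, down to the input size the model was trained on. A sketch follows; the 112x112 input size, interpolation choice, and (x - 127.5) / 128 normalization are assumptions on my part, not taken from this repo:

```python
import cv2
import numpy as np

def preprocess(img_bgr):
    # resize a pre-cropped face (e.g. a 128x128 LFW crop) to the assumed
    # 112x112 network input and apply an assumed normalization
    face = cv2.resize(img_bgr, (112, 112), interpolation=cv2.INTER_LINEAR)
    face = face[:, :, ::-1]                        # BGR -> RGB
    face = (face.astype(np.float32) - 127.5) / 128.0
    return np.transpose(face, (2, 0, 1))           # HWC -> CHW for PyTorch

crop = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # stand-in crop
print(preprocess(crop).shape)  # (3, 112, 112)
```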