kprokofi / light-weight-face-anti-spoofing

towards solving the spoofing problem
MIT License

The results of multi-task training are worse. #8

Open tfygg opened 3 years ago

tfygg commented 3 years ago

The accuracy of multi-task training on CelebA-Spoof is as follows:

accuracy on test data = 0.946, AUC = 0.998, EER = 2.45, APCER = 0.55, BPCER = 7.39, ACER = 3.97

However, the accuracy of single-task training is better:

accuracy on test data = 0.954, AUC = 0.998, EER = 2.41, APCER = 0.83, BPCER = 6.22, ACER = 3.53

This is not consistent with the conclusion of the CelebA-Spoof paper. Why?
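For context on the metrics above: ACER is just the mean of APCER and BPCER, e.g. (0.55 + 7.39) / 2 = 3.97 in the multi-task run. A minimal sketch of how these are computed, assuming label 1 = spoof and that `scores` are spoof probabilities (the function name, threshold, and label convention are illustrative, not this repo's evaluation code):

```python
import numpy as np

def spoof_metrics(labels, scores, threshold=0.5):
    """APCER/BPCER/ACER for binary anti-spoofing, reported in percent.

    Assumed convention (not necessarily this repo's): label 1 = spoof
    (attack), label 0 = live, `scores` = predicted spoof probabilities.
    """
    labels = np.asarray(labels)
    preds = np.asarray(scores) >= threshold  # True = predicted spoof
    # APCER: fraction of attacks wrongly accepted as live
    apcer = np.mean(~preds[labels == 1]) * 100
    # BPCER: fraction of live faces wrongly rejected as attacks
    bpcer = np.mean(preds[labels == 0]) * 100
    # ACER is simply the average of the two error rates
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer
```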

kprokofi commented 3 years ago

Hello! First of all, the CelebA-Spoof paper uses a different model and a somewhat different training setup. Also, when I trained on the single task, the cross-domain results were almost twice as bad; multi-task training gives the model better generalization. My metrics do differ from theirs when comparing the single-task and multi-task approaches on CelebA-Spoof alone, and multi-task is still slightly better there (though not by as much as in the paper).
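For readers wondering what "multi-task" means here: CelebA-Spoof ships auxiliary annotations (e.g. spoof type) alongside the live/spoof label, so a second head can be trained on them to regularize the shared features. A minimal PyTorch sketch of that general idea, not this repository's actual model (`feat_dim`, `num_attrs`, and the 0.1 auxiliary weight are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Two heads over shared backbone features: the main live/spoof
    classifier plus an auxiliary attribute classifier. Sizes are
    illustrative, not taken from this repository."""
    def __init__(self, feat_dim=128, num_attrs=11):
        super().__init__()
        self.spoof_head = nn.Linear(feat_dim, 2)         # live vs. spoof
        self.attr_head = nn.Linear(feat_dim, num_attrs)  # e.g. spoof type

    def forward(self, feats):
        return self.spoof_head(feats), self.attr_head(feats)

def multitask_loss(spoof_logits, attr_logits, spoof_y, attr_y, aux_weight=0.1):
    # The auxiliary loss acts as a regularizer on the shared features,
    # which is the usual explanation for better cross-domain results.
    main = F.cross_entropy(spoof_logits, spoof_y)
    aux = F.cross_entropy(attr_logits, attr_y)
    return main + aux_weight * aux
```

On a single in-domain test set the auxiliary task can slightly hurt the main metric (as in your numbers), while still paying off when testing across domains.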