Closed YaphetS7 closed 1 year ago
@ZhangYuanhan-AI Could you please answer as soon as possible? We are writing an article and want a correct comparison.
Sorry for the delay. We checked our code: we evaluate the CASIA-MFSD dataset on every 8th frame of each video.
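A minimal sketch of that sampling protocol. Assuming frame indexing starts at 0; the exact offset is not stated in the thread:

```python
def every_nth_frame(frames, step=8):
    """Select every `step`-th frame (indices 0, step, 2*step, ...)
    from a decoded video, matching the stated "every 8 frames" protocol.

    Starting at index 0 is an assumption; the paper does not specify
    the offset."""
    return frames[::step]
```

For example, a 20-frame clip would yield the frames at indices 0, 8, and 16.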
@ZhangYuanhan-AI What about the threshold? Can you share the value you used?
The threshold for what?
The threshold you used for AENet while evaluating metrics on CASIA-MFSD.
This threshold is determined by the EER, as we stated in the paper.
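For reference, here is a hedged sketch of how an EER-based threshold can be derived from a set of scores; the score convention (higher = more likely live) and the function name are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def eer_threshold(scores, labels):
    """Return the threshold where FAR and FRR are closest (the EER point).

    scores: per-sample liveness scores; higher = more likely live (assumed).
    labels: 1 = live, 0 = spoof.
    """
    best_t, best_gap = None, float("inf")
    for t in np.sort(np.unique(scores)):
        pred_live = scores >= t
        far = np.mean(pred_live[labels == 0])   # spoofs accepted as live
        frr = np.mean(~pred_live[labels == 1])  # live samples rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t
```

In practice the EER threshold is usually computed on a development/validation split and then applied unchanged to the test set.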
Have you calculated the EER on the CASIA-MFSD test split? Did you also use every 8th frame from each video there? And to clarify the first question: did you use every 8th frame, not just 8 frames per video?
1) We took the CASIA-MFSD dataset (test release with 30 subjects).
2) We detected a face in each image (using MTCNN) and annotated every frame of all the videos.
3) We ran inference of your network on CASIA-MFSD using the pipeline from this repository.
4) We calculated the metrics following this issue.
We obtained HTER = 37.87%, while in your article the HTER for the AENet model lies in the range [13.1%, 18.2%]. For one particular image we compared the two pipelines (ours and yours): the input tensor to the network in our pipeline coincides with the one in your pipeline (from this repo), and the model outputs for these tensors are also identical.
(a) What could be the reason for this discrepancy? (b) Did you use a threshold? If so, what is its value, and to which class do you apply it (spoof or live)?
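For clarity about what we computed: HTER is the standard Half Total Error Rate, (FAR + FRR) / 2 at a fixed threshold. A minimal sketch, again assuming the higher-score-means-live convention:

```python
import numpy as np

def hter(scores, labels, threshold):
    """Half Total Error Rate at a fixed threshold.

    scores: liveness scores; score >= threshold => predicted live (assumed).
    labels: 1 = live, 0 = spoof.
    """
    pred_live = scores >= threshold
    far = np.mean(pred_live[labels == 0])   # False Acceptance Rate
    frr = np.mean(~pred_live[labels == 1])  # False Rejection Rate
    return (far + frr) / 2.0
```

With a flipped score convention (or a threshold applied to the wrong class) the same scores can produce a drastically worse HTER, which is one possible source of the gap we observed.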