modanesh closed this issue 10 months ago
Hi Modanesh,
Unfortunately, we have not tested it. Before we developed QMagFace, we only made sure that we used the same preprocessing that was used for training MagFace, and that MagFace performs well with it. Generally, if you use a different preprocessing method, you need to retrain/fine-tune the face recognition system to maintain performance, since the input differs from what the model saw during training.
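For anyone else trying to reproduce the preprocessing: MagFace-style pipelines typically align faces by mapping the five MTCNN landmarks onto a fixed template with a least-squares similarity transform (Umeyama). Below is a minimal NumPy sketch of that step; the template coordinates are an assumption (the common 112x112 ArcFace-style values), so verify them against the actual MagFace preprocessing before relying on them.

```python
import numpy as np

# Canonical 5-point landmark template (eyes, nose, mouth corners) for a
# 112x112 crop. NOTE: these exact coordinates are an assumption; check
# the values used by the MagFace preprocessing you are matching.
TEMPLATE = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float64)

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points.
    Returns a 2x3 affine matrix usable with e.g. cv2.warpAffine.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections: force a proper rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

With detected landmarks `lmk` (shape (5, 2)), `M = similarity_transform(lmk, TEMPLATE)` gives the matrix you would pass to `cv2.warpAffine(img, M, (112, 112))` to produce the aligned crop.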
Best Philipp
@modanesh Hi Mohamad, were you able to evaluate the model on datasets like CPLFW, AgeDB, etc.? I have a question regarding that.
@HOMGH What do you mean by evaluating the model? You mean without fine-tuning or with?
Without fine-tuning, on LFW I got 97.03 and 97.08 with QMagFace and cosine similarity, respectively.
@pterhoer Thanks. I'm aware of the need to fine-tune the model once the preprocessing is changed. However, it would be great if you could provide the steps to fine-tune the model, a script for doing so, or instructions for training the model from scratch. Thanks.
I've been working on QMagFace and have a question. In the original MTCNN work there are three stages: PNet, RNet, and ONet. But in this codebase there is a fourth stage, LNet. I'm wondering how that affects QMagFace's performance. Have you tested it, and if so, are the results in the paper?