BradyFU / DVG

[NeurIPS 2019] Dual Variational Generation for Low Shot Heterogeneous Face Recognition
MIT License

Some issues I met during reproduction #15

Closed TkyMIYA closed 3 years ago

TkyMIYA commented 3 years ago

Thank you for sharing your amazing work. I'm trying to reproduce the results shown in your paper, but I met some problems.

(1) Pre-trained model: I downloaded the pre-trained LightCNN-29 model provided in your project and evaluated it on the Oulu-CASIA NIR-VIS dataset with the test protocol described in your paper. I got 97.3% VR@FAR=1% and 80.7% VR@FAR=0.1%, which are considerably higher than your reported results. Do you have any idea regarding this difference?
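For reference, this is how I compute VR@FAR from genuine/impostor similarity scores. This is a minimal sketch with my own function and variable names, not the repo's evaluation code, so please correct me if my protocol differs from yours:

```python
import numpy as np

def vr_at_far(genuine_scores, impostor_scores, far):
    """Verification rate at a given false accept rate.

    The threshold is chosen so that the fraction of impostor scores
    strictly above it does not exceed `far`.
    """
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    k = int(np.floor(far * len(impostor)))  # last impostor we may accept
    threshold = impostor[k] if k < len(impostor) else -np.inf
    genuine = np.asarray(genuine_scores)
    return float(np.mean(genuine > threshold))

# Toy example with synthetic cosine similarities.
rng = np.random.default_rng(0)
gen = rng.normal(0.6, 0.1, 10000)  # genuine (same-identity) pairs
imp = rng.normal(0.1, 0.1, 10000)  # impostor (cross-identity) pairs
print(vr_at_far(gen, imp, 0.01))   # VR@FAR=1%
print(vr_at_far(gen, imp, 0.001))  # VR@FAR=0.1%
```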

(2) Adversarial loss: the adversarial loss is missing from your code. I would like to know how much impact it has (i.e., how much does the accuracy drop when the adversarial loss is not used?).

(3) Tufts dataset: I found your new paper, "DVG-Face", in which you use the Tufts dataset for RGB-thermal face recognition. I would also like to evaluate your model on this dataset, but I cannot detect faces in the thermal images, so the aligned cropped images cannot be created. Could you tell me how to create aligned cropped images for thermal faces?

I'm looking forward to your kind help. Thank you.

BradyFU commented 3 years ago

Sincere thanks for your interest in our work. (1) You can try to reduce the trade-off parameters of loss_real_mmd and loss_fake_mmd; (2) in our experiments, the adversarial loss has little impact on the recognition performance; (3) the facial landmarks of the thermal faces need to be annotated manually.
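For context on point (1): the two MMD terms are typically combined into the total loss with trade-off weights, so the suggestion amounts to lowering those weights. A minimal numpy sketch with an RBF-kernel MMD; the weight values and feature shapes here are hypothetical, not the repo's actual configuration:

```python
import numpy as np

def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches
    (shape (N, D)) under an RBF kernel."""
    def kernel(a, b):
        d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical trade-off weights; the suggestion above is to lower these.
lambda_real, lambda_fake = 0.1, 0.1

rng = np.random.default_rng(0)
feat_nir = rng.normal(size=(64, 8))        # stand-in for real NIR features
feat_vis = rng.normal(size=(64, 8))        # stand-in for real VIS features
loss_real_mmd = rbf_mmd(feat_nir, feat_vis)
loss_fake_mmd = rbf_mmd(feat_nir + 0.1, feat_vis)  # stand-in for generated features
total = lambda_real * loss_real_mmd + lambda_fake * loss_fake_mmd
```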

TkyMIYA commented 3 years ago

Thank you for your quick and kind reply!

(1) I'm sorry for the confusing explanation. I have never trained the model; I just evaluated on the Oulu-CASIA NIR-VIS dataset with the LightCNN-29 model provided on your Google Drive. Fig. 2 of your conference paper shows 93.1% VR@FAR=1% and 68.3% VR@FAR=0.1%, but somehow I got 97.3% and 80.7%, respectively. Is the model you provided on Google Drive a refined one?

(2) Thank you for the helpful information. I will try to train without the adversarial loss.

(3) Thank you for letting me know. If possible, I would like you to share the coordinate lists of the facial landmarks. I would be grateful for any help you can provide.
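In case it helps anyone else reading this: once the thermal landmarks are annotated, an aligned crop can be produced by fitting a similarity transform from the annotated points to a fixed template and warping the image with it. A minimal numpy sketch of the fitting step; the 5-point template below is hypothetical, not the repo's actual alignment template:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src landmarks (N, 2) onto dst landmarks (N, 2).
    Returns a 2x3 matrix usable with e.g. cv2.warpAffine."""
    n = src.shape[0]
    # Parameterize the transform as [[a, -b, tx], [b, a, ty]] and solve
    # the resulting linear system in (a, b, tx, ty).
    A = np.zeros((2 * n, 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
    b = dst.reshape(-1)
    a, bb, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([[a, -bb, tx], [bb, a, ty]])

# Hypothetical 5-point template (eyes, nose, mouth corners) for a 128x128 crop.
template = np.array([[44, 52], [84, 52], [64, 74], [48, 94], [80, 94]], float)
# Pretend manual annotations: the same shape, scaled and shifted.
annotated = template * 2.0 + np.array([10.0, 5.0])
M = similarity_transform(annotated, template)
```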

BradyFU commented 3 years ago

Thank you for pointing this out; we have checked the experiments carefully. The performance discrepancies of LightCNN-29 are due to some data with inaccurate preprocessing, including face detection and landmark localization. We will retest LightCNN-29 on data with more accurate preprocessing and update the latest results in the arXiv version.

It is unfortunate that the landmark file has been lost.

Sincere thanks again for your interest in our work.

TkyMIYA commented 3 years ago

Thank you very much for checking the experiments. I'm looking forward to seeing your revision.

BTW, I would also like to reproduce the results of your new method, DVG-Face. Would you share the new code on the project page?

BradyFU commented 3 years ago

The related code will be released after the paper is accepted. Thanks for your attention.

liuqunzhong commented 1 year ago

Is the LightCNN-29 provided on your Google Drive trained only on the MS-Celeb-1M database, or is it also fine-tuned on the HFR training sets? Is it the same model as the one listed in Table 2?

"Table 2: Comparisons with other state-of-the-art deep HFR methods on the CASIA NIR-VIS 2.0, the Oulu-CASIA NIR-VIS, the BUAA-VisNir and the IIIT-D Viewed Sketch databases."