zsyOAOA / DifFace

DifFace: Blind Face Restoration with Diffused Error Contraction (TPAMI, 2024)

Could you release your metric calculation script please? #4

Open HWalkingMan opened 1 year ago

HWalkingMan commented 1 year ago

Your work is awesome! I have tested your pre-trained model on CelebA-Test and got amazing visual results.

However, I noticed that both your paper and the VQFR paper report metrics for VQFR on CelebA-Test, and the numbers differ.

Thus, I ran inference with your model on the CelebA-Test dataset provided by VQFR (link here) and used the calculation script provided by VQFR (link here), but obtained unexpected results.

Therefore, I am very curious about the quantitative metrics reported in the paper. How do you calculate them? Could you release your metric calculation script, please?

zsyOAOA commented 1 year ago

For PSNR and LPIPS: https://github.com/chaofengc/IQA-PyTorch

For LPIPS (VGG): https://github.com/richzhang/PerceptualSimilarity

For IDS: see the script of VQFR.

For FID: https://github.com/mseitzer/pytorch-fid

As far as I know, VQFR and GFPGAN calculate FID between the restored faces and the whole FFHQ dataset. In our paper, however, it is calculated directly between the restored faces and the corresponding ground truth.
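For a quick sanity check against whichever toolbox you use, PSNR can also be computed directly. This is a minimal sketch (not the authors' script, and the image sizes and 8-bit range are assumptions); for the paper's numbers, use the linked IQA-PyTorch, PerceptualSimilarity, and pytorch-fid repositories.

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-sized images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Hypothetical usage: restored and ground-truth faces as HxWx3 uint8 arrays
# (in practice, load the aligned CelebA-Test pairs from disk).
restored = np.full((8, 8, 3), 100, dtype=np.uint8)
ground_truth = np.full((8, 8, 3), 101, dtype=np.uint8)
print(round(psnr(restored, ground_truth), 2))  # 48.13
```

Note that PSNR is sensitive to which ground truth you compare against, which is the same reason the FID numbers differ between protocols: comparing restored faces against their paired ground truth and comparing them against the full FFHQ distribution measure different things.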

HWalkingMan commented 1 year ago

Thanks for your scripts~