Closed lazy-nurd closed 2 years ago
I mostly agree with @lazy-nurd. I tried to test the code on one of the faces given as an example, and it doesn't seem to produce the same results. Here is the link https://i.ibb.co/zszVpy7/Capture.png for the first picture. Maybe you used a different model, I don't know.
Yeah, I also compared it with GFPGAN and GPEN, and the results are much better with GPEN, followed by GFPGAN. I also don't know how they compared the results; maybe the available pretrained model is an intermediate checkpoint and is still in training.
Hello, could you provide the input examples you compared, along with the GFPGAN and GPEN results? Could you also share the command you used for inference? I will check whether there are bugs in the demo file.
The provided pretrained model is the final one from the paper. It seems you directly cropped the teaser figure (the region with the yellow box), which is why the result is abnormal. To help you quickly reproduce our results, we have added the original LQ face from the teaser figure to the inputs/cropped_face directory.
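A quick way to see why a crop taken from the teaser figure behaves differently from the original LQ face: the figure shows the face at reduced size, so cropping it re-samples (and loses) pixels. This is an illustrative numpy sketch, not code from the repo; the sizes, the synthetic image, and the nearest-neighbor resampling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for an original 512x512 low-quality (LQ) face crop.
lq = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)

# Simulate cropping the face back out of a teaser figure where it was shown
# at half size: 2x downscale (drop every other pixel), then a naive 2x
# upscale by pixel repetition to get back to the model's input resolution.
small = lq[::2, ::2]
recrop = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# Same shape, but the pixels no longer match the LQ face the model expects,
# so restoration quality degrades.
mean_abs_diff = np.abs(lq.astype(int) - recrop.astype(int)).mean()
print(mean_abs_diff > 0)
```

In practice the mismatch is even larger, since a screenshot crop also adds re-encoding artifacts on top of the resampling.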
Thanks for your question. The provided model is our final model from the paper. Here are links to our inference results of GPEN (Google Drive|Tencent Weiyun) and GFPGAN (v1) (Google Drive|Tencent Weiyun) on the four datasets. Maybe you can make a comprehensive comparison between the VQFR results and GPEN/GFPGAN. Thanks.
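For the quantitative side of such a comparison, a simple PSNR score against a reference image is a common starting point. A minimal sketch (generic numpy code, not part of the VQFR codebase; image loading is left out):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shaped images (higher = closer)."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Load each method's output and the ground-truth face with any image library and compare scores per image. Keep in mind PSNR correlates imperfectly with perceived realism, which is exactly what this thread is debating, so visual inspection still matters.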
Hey, thanks for the repository, but the results are not really good enough for blind face restoration. GFPGAN had a similar issue with heavy smoothing. GPEN works much better at restoration, preserves the overall texture, and looks highly realistic. I hope you improve on keeping the overall structure and texture so it does not look unrealistic. Thanks.
Thanks for your comment. We provide the inference results of GPEN above (on all four validation sets). Maybe you can make a comprehensive comparison between VQFR and GPEN; VQFR can generate more natural high-frequency details. We will also keep making VQFR better for blind face restoration. If there are any other questions, feel free to reopen this issue.