YuzheZhang-1999 opened this issue 1 year ago
The visualization results in fig3 are generated by "hq_demo", you can find instructions on the main page.
But I notice that "hq_demo/evaluation.sh" only has the super-resolution inference; is inpainting implemented as well?
Yes, you can see the Colab for more demonstrations.
Thanks for your reply. But I can only find face inpainting based on CelebA-HQ in the Colab, so how can I run inpainting on DIV2K?
Just change the `--deg`, for example: `python main.py --config confs/inet256.yml --path_y data/datasets/gts/inet256/323.png --class 323 --deg "inpainting" -i butterfly_sr`.
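For a DIV2K image, the same command form should work if you point `--path_y` at the image you want to restore. A minimal sketch, assuming the ImageNet config is reused; the image path and the `-i` output name below are placeholders, not files shipped with the repo, and `--class` may need to be set to a suitable ImageNet class (see the discussion below):

```bash
# Hypothetical example: inpaint a DIV2K image using the ImageNet 256 config.
# Replace the --path_y value with your own DIV2K image; the -i name is just an output label.
python main.py --config confs/inet256.yml \
    --path_y path/to/div2k/0801.png \
    --deg "inpainting" \
    -i div2k_inpainting
```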
Maybe you need to delete `and conf.name=='face256'` in gaussian_diffusion.py, line 601.
Okay, I see. Thank you so much for your patient answer.
I did as you said, but I found that this is classifier-guided. The command is `python main.py --config confs/inet256.yml --path_y data/datasets/gts/inet256/paper_fig3_jellyfish.jpg --deg "inpainting" -i imagnet`.
I tried to reproduce the jellyfish image in the bottom right of Fig. 3 of your paper, but with classifier_scale=0 some images of oranges are generated, and I don't know why.
So how can I reproduce the result with exactly the same mask setting as Fig. 3 in the paper?
You need to set the correct class label to guide the denoiser toward that class content; for example, jellyfish is 107.
You can find the ImageNet class indices here: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
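Putting the two replies together, a hedged sketch of the jellyfish command with the class label set (the name after `-i` is only an illustrative output label):

```bash
# Sketch: same command as above, but with --class 107 (ImageNet "jellyfish")
# so the classifier guidance steers the generated content toward jellyfish.
python main.py --config confs/inet256.yml \
    --path_y data/datasets/gts/inet256/paper_fig3_jellyfish.jpg \
    --class 107 \
    --deg "inpainting" \
    -i jellyfish_inpainting
```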
Thank you so much!
Thanks for your great work!
I tried to reproduce the results in the bottom right of Fig. 3 in the paper based on the "imagenet_256.yml" and "256x256_diffusion_uncond.pt" you provided, but my reconstructed image is poor.
May I ask whether I missed something that could cause this poor result?