peylnog opened 2 years ago
Hi, @peylnog
Thanks for your interest in our work. The dataset can be found at this link: ExSinGAN/Input. The code in this repository is a rewritten version intended to show our idea clearly and has not been tested rigorously, so the results may differ from those in the paper. For the quantitative experiments, please sample about 36 images per model and calculate the average metric scores with the SIFID and LPIPS code.
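If it helps, the evaluation loop described above could be sketched roughly as follows. This is only my outline, not the repository's code: `collect_samples`, `average_score`, and the directory layout are assumptions, and `score_fn` stands in for the actual SIFID or LPIPS implementation you plug in.

```python
import statistics
from pathlib import Path

def collect_samples(sample_dir, n_samples=36):
    """First n_samples generated images in a directory (sorted for determinism)."""
    return sorted(Path(sample_dir).glob("*.png"))[:n_samples]

def average_score(score_fn, real_path, fake_paths, n_samples=36):
    """Average a per-image metric (e.g. SIFID or LPIPS) over the sampled images.

    score_fn is a placeholder for the real metric: it takes the path of the
    real image and one generated image and returns a float score.
    """
    scores = [score_fn(real_path, f) for f in list(fake_paths)[:n_samples]]
    return statistics.mean(scores)
```

You would call `average_score` once per trained model, passing the 36 sampled image paths and the metric function.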
Best wishes
Thanks for your reply; I will replicate this work and follow it.
I have another question. Since I am new to this field, is this work the SOTA for single image generation?
Er, I have not followed the progress in this field for a while, but since you are new to single image generation, I can offer some personal suggestions.
The SIG problem is essentially learning the patch distribution, as we stated in the paper. From this perspective, many works address it, but I suggest reading GPNN (CVPR 2022) and GDPM (ECCV 2022) to get a more intuitive understanding of the problem.
After that, you may also notice that the SIG problem for texture images is quite simple and can be solved efficiently. In my opinion, SIG for more complex or high-resolution images is rarely discussed but worth studying. For the non-texture case, you can read DGP, ExSinGAN, IMAGINE (CVPR 2021), PetsGAN, and AGE (CVPR 2022). As for high-resolution single image generation, although we spent only a little space discussing it, it has never been studied specifically and is very useful in practice (e.g., generating wallpapers).
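The patch-distribution view above can be illustrated with a toy nearest-neighbor patch-matching sketch in the spirit of GPNN. This is purely illustrative numpy code of my own, not the paper's or GPNN's implementation; `extract_patches` and `nn_patch_match` are hypothetical names.

```python
import numpy as np

def extract_patches(img, p):
    """Collect all overlapping p x p patches of a 2-D image as flat vectors."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(h - p + 1)
                     for j in range(w - p + 1)])

def nn_patch_match(source, target, p=3):
    """Replace each target patch with its nearest source patch (L2 distance).

    This is the core idea of patch-distribution matching: the generated image
    is composed only of patches drawn from the single source image.
    """
    src = extract_patches(source, p)
    tgt = extract_patches(target, p)
    # Pairwise squared distances via the expansion |t|^2 - 2 t.s + |s|^2.
    d = (tgt ** 2).sum(1)[:, None] - 2 * tgt @ src.T + (src ** 2).sum(1)[None, :]
    return src[d.argmin(1)]  # every output row is an actual source patch
```

A full method would also fold the matched patches back into an image (averaging overlaps) and repeat across scales, which this sketch omits.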
Thank you so much for your patience! I will read them carefully.
Hi, actually I believe there is a paper on high-resolution single image generation: OUR-GAN. Hope this information is helpful :)
Oh, sorry for my oversight. I am very excited to see the surprising results of that paper. Looking forward to using it to generate 4K wallpapers one day! Still wallpapers are too boring..
Sorry to bother you again. I want to reproduce your quantitative results. Can you provide detailed steps or code? (A bash file or .py file would be great.)
Thanks a lot
Sorry, I just use the SIFID code from SinGAN under its license. If you have trained the models, please adapt that SIFID code as appropriate.
Best of luck
I'm quite interested in following your work after reading your paper, but I ran into some issues.
Where can I get the datasets used in the paper?
How can I replicate the quantitative results in the paper?
I would appreciate your reply, thank you very much! Best wishes