Closed caozhenxiang-kouji closed 2 years ago
Hi, which GAN method did you use?
I'm using SWAGAN from https://github.com/rosinality/stylegan2-pytorch
StyleGAN-based generators have a very flexible latent space. If you manipulate it enough, you can basically reconstruct 'a tree' with a StyleGAN trained on human faces. There are ways to limit this, e.g. by constraining the latent parameters to stay within a range, or by applying the truncation trick. However, I haven't tried them myself.
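The two remedies mentioned above can be sketched in a few lines. This is a minimal illustration, not code from either repo: the w-space statistics are simulated with random normals, and the dimension, `psi`, and the range factor `k` are all hypothetical values you would tune in practice.

```python
import numpy as np

def truncate_latent(w, w_mean, psi=0.7):
    """Truncation trick: pull a latent toward the average latent,
    trading diversity for fidelity (psi in [0, 1])."""
    return w_mean + psi * (w - w_mean)

def clamp_latent(w, w_mean, w_std, k=2.0):
    """Hard range limit: keep each latent dimension within
    k standard deviations of the average latent."""
    return np.clip(w, w_mean - k * w_std, w_mean + k * w_std)

# Stand-in for latents produced by a mapping network (hypothetical 512-dim w-space)
rng = np.random.default_rng(0)
w_samples = rng.normal(size=(1000, 512))
w_mean, w_std = w_samples.mean(axis=0), w_samples.std(axis=0)

w = rng.normal(size=(512,)) * 5.0          # an out-of-range latent
w_t = truncate_latent(w, w_mean, psi=0.7)  # softly pulled toward the mean
w_c = clamp_latent(w_t, w_mean, w_std)     # hard-limited to +/- 2 std
```

In a real StyleGAN pipeline you would estimate `w_mean`/`w_std` by pushing many random z vectors through the mapping network, then apply one of these constraints to the latent being optimized at every fitting step.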
The original GANFit is implemented on ProGAN, which is more straightforward to backpropagate through than StyleGAN. I therefore believe StyleGAN could even cause a vanishing-gradient problem. Nevertheless, none of these ideas have been experimented with; it would be nice to explore these directions.
Hello! I've read your paper and tried to reproduce your texture generation network with another dataset (the NJU dataset). After training, the generation network is able to produce reasonable results. However, when I try to integrate the network into the optimization pipeline, I find it very hard to control the quality of the generated texture, since there is no direct constraint on the texture or the latent code. Have you encountered the same problem before? And do you have any idea how to keep the generated texture in good quality?
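One common way to add such a constraint is a latent prior in the fitting objective: penalize the distance of the latent from the average latent so the optimizer stays in a region where the generator produces plausible textures. The sketch below uses a linear map as a stand-in for the generator so it runs without any GAN checkpoint; `lam` and the learning rate are hypothetical values to tune.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, out = 16, 32
G = rng.normal(size=(out, dim))      # linear stand-in for the generator
target = rng.normal(size=(out,))     # stand-in for the observed texture
w_mean = np.zeros(dim)               # average latent (estimated offline in practice)
lam = 0.05                           # hypothetical prior weight

def loss_and_grad(w):
    """Reconstruction error plus a latent prior, with its analytic gradient."""
    recon = G @ w - target
    loss = recon @ recon + lam * np.sum((w - w_mean) ** 2)
    grad = 2.0 * G.T @ recon + 2.0 * lam * (w - w_mean)
    return loss, grad

# Plain gradient descent on the latent code
w = rng.normal(size=(dim,))
losses = []
for _ in range(200):
    loss, grad = loss_and_grad(w)
    losses.append(loss)
    w -= 0.005 * grad
```

With a real StyleGAN you would replace `G @ w` with the generator forward pass and let autograd supply the gradient; the point is only that adding `lam * ||w - w_mean||^2` to the loss gives you a direct, tunable handle on how far the latent can drift.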