junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

non-local textile pattern #458

Open chenhaibin2019 opened 5 years ago

chenhaibin2019 commented 5 years ago

Using CycleGAN, I am able to generate images close to the real ones. However, the generated images miss the non-local textile patterns. For example, in the images below, the generated image misses the long-range waved textile pattern present in the ground truth. I am wondering if this is caused by the capacity of the discriminator/generator? How could I generate images that include such non-local patterns? That would make the results much more promising.

junyanz commented 5 years ago

Maybe you can increase the receptive field of your discriminator (for example, add one conv layer). There is a texture synthesis work related to your task. You may want to have a look.
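For reference, a minimal sketch of a PatchGAN-style discriminator where the receptive field grows with the number of stride-2 conv layers. This is an illustrative simplification, not the repo's exact `NLayerDiscriminator`; channel widths and layer structure here are assumptions. In this repo, the corresponding options are roughly `--netD n_layers --n_layers_D 4`.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator; each extra stride-2 conv roughly doubles the receptive field."""
    def __init__(self, input_nc=3, ndf=64, n_layers=4):
        super().__init__()
        layers = [nn.Conv2d(input_nc, ndf, kernel_size=4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, True)]
        nf = ndf
        # n_layers=3 corresponds to the usual 70x70 PatchGAN;
        # n_layers=4 adds one conv block so each patch sees a larger region.
        for i in range(1, n_layers):
            nf_prev, nf = nf, min(ndf * 2 ** i, 512)
            layers += [nn.Conv2d(nf_prev, nf, kernel_size=4, stride=2, padding=1),
                       nn.InstanceNorm2d(nf),
                       nn.LeakyReLU(0.2, True)]
        # 1-channel map of per-patch real/fake scores
        layers += [nn.Conv2d(nf, 1, kernel_size=4, stride=1, padding=1)]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```

The larger the patch each output score is conditioned on, the more the discriminator can penalize missing long-range texture, which is the point of adding a layer here.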

chenhaibin2019 commented 5 years ago

Thank you. The reference is exactly what I want, though I haven't wrapped my head around how to train it with CycleGAN, as I need to train the A -> B generator to generate the texture correctly. Any suggestions on how it can be implemented with CycleGAN?

junyanz commented 5 years ago

Your implementation with CycleGAN looks good. You probably just need to tweak some small details. In their paper, Figure 15 shows some texture transfer results with pix2pix.

chenhaibin2019 commented 5 years ago

I added one conv layer to the discriminator by setting nlayer = 4. The training images look great and capture the long-range pattern. However, the clarity of the inference image (fake A) got compromised, so it is hard to see whether it captures the long-range textile pattern. Is there a way CycleGAN can tune the sharpness of the images, particularly at the inference phase?

[image attachment]

junyanz commented 5 years ago

There is no explicit way of doing that. Maybe you can add your test images to your training set. In your case, it is fine to train and test on the same data.
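A minimal sketch of what that looks like with the unpaired dataset layout this repo uses (trainA/trainB/testA/testB under your --dataroot). The dataset path below is a hypothetical placeholder; only the real-A side needs to be copied, since the unaligned dataset does not require paired B images.

```python
import shutil
from pathlib import Path

# Hypothetical dataset root; replace with your own --dataroot.
root = Path("./datasets/textile")

# Copy the inference real-A images into the training set.
for img in (root / "testA").glob("*"):
    shutil.copy(img, root / "trainA" / img.name)
```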

chenhaibin2019 commented 5 years ago

Thanks for the advice, I will try your suggestion, but I'm not sure I get it. Unfortunately, I don't have the ground truth for my inference data (real B); I only have the inference real A images. The objective of this work is to use CycleGAN to synthesize inference fake A images close to the ground truth. If I only mix the inference real A with the training real A, without providing the inference real B, does it help?

junyanz commented 5 years ago

If you have ground truth (real B), you should use pix2pix rather than CycleGAN. Otherwise, you don't need the corresponding real B.
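If you do go the pix2pix route, its aligned dataset expects each training sample as a single image with A and B concatenated side by side; the repo ships datasets/combine_A_and_B.py to do this in bulk. Below is a minimal sketch of producing one such pair; file names and the output path are hypothetical.

```python
from pathlib import Path
from PIL import Image

# Hypothetical paired sample; B is resized to match A before concatenation.
a = Image.open("A/sample_001.jpg")
b = Image.open("B/sample_001.jpg").resize(a.size)

out_dir = Path("datasets/textile_paired/train")
out_dir.mkdir(parents=True, exist_ok=True)

# Paste A on the left and B on the right into one aligned image.
pair = Image.new("RGB", (a.width * 2, a.height))
pair.paste(a, (0, 0))
pair.paste(b, (a.width, 0))
pair.save(out_dir / "sample_001.jpg")
```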