junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

optimal parameters for synthetic depth image to real depth image #735

Open frk1993 opened 5 years ago

frk1993 commented 5 years ago

Hi, I am planning to train a CycleGAN model to translate synthetic depth images into real depth images from my depth camera. I have no idea how to choose the optimal parameters. My synthetic and real images are 480x640; I am planning to resize them to 512x512 (`load_size=512`) and use `crop_size=512`. I have the following questions:

- Is CycleGAN a good fit for this problem?
- Should I use a different `crop_size`?
- Should I change the network architecture?
- Should I change the default parameters for `netD` and `netG`?
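For reference, the setup described above maps onto the repository's training flags roughly like this. This is only a sketch: the `--dataroot` path and `--name` are placeholders, and the exact flag names should be checked against `options/base_options.py` and `options/train_options.py` in the repo.

```shell
# Sketch of a CycleGAN training command for 480x640 depth images
# resized to 512x512. Dataset path and experiment name are placeholders.
python train.py \
  --dataroot ./datasets/depth \
  --name depth_cyclegan \
  --model cycle_gan \
  --load_size 512 \
  --crop_size 512 \
  --preprocess resize_and_crop
```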

Here example synthetic and real depth images from my dataset:

[Screenshot from 2019-08-19: example synthetic and real depth images]

best regards

junyanz commented 5 years ago

Feel free to try CycleGAN.

frk1993 commented 5 years ago

Thanks for your response. As you can see when comparing the synthetic and real depth images, the translation task is not just a change of colors: there are also many small details (noise) in the real image that are missing in the synthetic image. It has generally been shown that increasing the number of ResNet blocks can improve performance, so I am planning to extend `resnet_9blocks` to twelve blocks. Would this be a good idea? Also, I have 500 images from domain A and 500 from domain B. How many epochs should I train? You generally used 200 epochs; should I train for more?
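As a rough sanity check on the training budget implied above (a sketch; it assumes the repo's default batch size of 1 and its default schedule of 100 epochs at the initial learning rate plus 100 epochs of linear decay, which together give the 200 epochs mentioned):

```python
# Rough arithmetic for the training budget described above.
# Assumptions (not stated in the thread): batch size 1, one update per
# image pair per epoch, default 100 + 100 epoch schedule.
images_per_domain = 500
batch_size = 1
epochs = 200  # 100 at constant LR + 100 with linear decay (repo default)

iterations_per_epoch = images_per_domain // batch_size
total_iterations = iterations_per_epoch * epochs
print(total_iterations)  # 100000 generator/discriminator update steps
```

With only 500 images per domain this is a fairly small dataset, so training longer mostly trades compute for diminishing returns; monitoring the generated images periodically is a more reliable stopping criterion than a fixed epoch count.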

Thanks and best regards

junyanz commented 5 years ago

`resnet_9blocks` might be enough, but you can try 12. I used 200 epochs for most of the experiments, but you can try a different number of epochs.
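If you do want 12 blocks, note that `--netG` only accepts a fixed set of names, so a small code change is needed where the generator is constructed. A hypothetical sketch of the extra branch in `define_G` (in `models/networks.py`); the argument names are assumptions mirrored from the existing `resnet_9blocks` branch, not the repo's actual code:

```python
# Hypothetical addition inside define_G (models/networks.py), modeled on
# the resnet_9blocks branch but with n_blocks=12 -- verify against the
# actual source before using.
elif netG == 'resnet_12blocks':
    net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer,
                          use_dropout=use_dropout, n_blocks=12)
```

You would also need to add `resnet_12blocks` to the allowed choices for `--netG` if the option parser restricts them.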