Closed: julingers closed 2 years ago
Yes, I changed the code a little, though I don't remember exactly what. I had to adjust some of the parameters. You can change the output of CycleGAN to 800×600 pixels; that's what I did.
Also, you need to train for a large number of epochs and save sample images along the way. Since CycleGAN is a discriminator/generator (adversarial) network, there is no guarantee that longer training gives a better result. You just choose the epoch where the translated source images look most similar to the target domain.
OK, I got it. Another question: our rendered images are rectangular, like yours at 1600×1200. We can change the output to 800×600, but that is in test mode. In training mode, how is the 1600×1200 input preprocessed, for example down to 256×256? Did you crop it into a square for training? @joshi-bharat
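For context on the crop-vs-resize question above, here is a minimal sketch (helper names are illustrative, not from any CycleGAN codebase) contrasting the two common preprocessing choices: scaling the whole frame down while keeping the 4:3 aspect ratio, versus taking a centered square crop that discards the left and right margins.

```python
def scale_to_width(orig_w, orig_h, target_w):
    """Scale (orig_w, orig_h) so the width becomes target_w,
    preserving the aspect ratio (no pixels are discarded)."""
    scale = target_w / orig_w
    return target_w, round(orig_h * scale)

def center_square_crop_box(w, h):
    """Box (left, top, right, bottom) of the largest centered square,
    as used by square-crop preprocessing (margins are discarded)."""
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    return left, top, left + side, top + side

# A 1600x1200 render scaled to 800 wide stays 4:3:
#   scale_to_width(1600, 1200, 800)      -> (800, 600)
# whereas a square crop keeps only the middle 1200x1200:
#   center_square_crop_box(1600, 1200)   -> (200, 0, 1400, 1200)
```

The trade-off: square cropping matches the stock 256×256 CycleGAN pipeline but throws away the sides of the frame, while aspect-preserving resizing keeps the full scene at the cost of a non-square network input.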
I am not exactly sure; it's been a long time. But you can create your own input data loader and use that instead.
I have been busy, but I looked up some things that should be helpful to you. I modified CycleGAN to train with low-resolution images and run inference at 800×600 resolution. I also disabled some of the image augmentations, such as color transformation. I created a new data loader, DeepCLDataLoader, and you can find the code here: https://drive.google.com/drive/folders/1ErzkadZK2XaOy7PuKq4Ws8RhMWgVJfJm?usp=sharing.
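To illustrate the two changes described above (low-resolution training input, color augmentation disabled), here is a hedged sketch of a preprocessing function; the name `preprocess` and the 400×300 default are assumptions for illustration, not the actual DeepCLDataLoader code.

```python
import random
from PIL import Image, ImageEnhance

def preprocess(img, size=(400, 300), augment_color=False):
    """Resize a render down to a low training resolution.
    No square crop, so the 4:3 aspect ratio of 1600x1200 inputs
    is preserved."""
    img = img.resize(size, Image.BICUBIC)
    if augment_color:
        # Simple brightness jitter as an example of a color
        # augmentation; it is off by default, mirroring the disabled
        # color transformations in the modified CycleGAN.
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    return img
```

A data loader would apply this to every source/target image before converting to tensors; keeping color augmentation off makes sense here because the color shift itself is what the GAN is supposed to learn.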
Apart from that, can you share the images you are using as target and source? Since this is adversarial training, I suggest you save sample images every, say, 5 epochs and then use the epoch that looks best. You are not guaranteed to get the best target-domain images at the end of training.
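The save-every-few-epochs advice above can be sketched as a small helper (the name `sample_epochs` and the training-loop pseudocode are illustrative assumptions, not part of any CycleGAN release):

```python
def sample_epochs(n_epochs, interval=5):
    """Epochs (1-based) at which to dump sample translations to disk,
    always including the final epoch for comparison."""
    return [e for e in range(1, n_epochs + 1)
            if e % interval == 0 or e == n_epochs]

# In the training loop:
# for epoch in range(1, n_epochs + 1):
#     ...train one epoch...
#     if epoch in sample_epochs(n_epochs):
#         run the generator on a fixed source batch and save the
#         outputs, then pick the best-looking epoch by eye afterwards
```

Translating the same fixed source batch at every checkpoint makes the epochs directly comparable when you inspect them later.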
Hello, thank you for your help before. Sorry to bother you again.
I have finished acquiring the rendered images in the simulation and successfully obtained the 2D coordinates of the 8 corner points of the 3D model, and now I am making synthetic images. Regarding CycleGAN, I would like to ask you a few questions.
Thanks again!