Closed: MyeongJin-Kim closed this issue 5 years ago
I only uploaded the second image's translation result. As you have found, CycleGAN sometimes translates sky into trees. This is caused by the different spatial layouts of the source and target domains, and the problem is hard to solve. The bidirectional learning method we proposed makes the translation model better by adding supervision from a well-trained segmentation model.
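To make the idea of "supervision from a well-trained segmentation model" concrete, here is a minimal numpy sketch (not the repository's actual code; the function name and shapes are illustrative). A frozen segmentation model is run on the translated image, and a per-pixel cross-entropy against the source ground-truth labels is added to the translation model's loss, penalizing translations that change semantics (e.g. sky turning into trees):

```python
import numpy as np

def semantic_consistency_loss(seg_probs, labels, ignore_index=255):
    """Per-pixel cross-entropy between a frozen segmentation model's
    softmax output on the translated image and the source ground-truth
    labels. Pixels labeled ignore_index are excluded from the loss.

    seg_probs: (H, W, C) softmax probabilities
    labels:    (H, W) integer class ids
    """
    h, w, _ = seg_probs.shape
    mask = labels != ignore_index
    # Advanced indexing: probability of the true class at each pixel.
    # Ignored pixels are temporarily mapped to class 0, then masked out.
    p_true = seg_probs[np.arange(h)[:, None],
                       np.arange(w)[None, :],
                       labels * mask]
    losses = -np.log(np.clip(p_true, 1e-8, None))
    return float(losses[mask].mean()) if mask.any() else 0.0
```

In the actual bidirectional setup this term would be backpropagated through the translation network while the segmentation network stays fixed, so low loss means the translated image still "looks like" its source labels to the segmentation model.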
Thanks! Then, when you trained the first segmentation network on the first translation network's results, did you use those falsely translated images?
I don't think they are falsely translated images. Actually, I can still get good performance (mIoU > 42 for GTA5->Cityscapes), and with self-training it can be improved to ~47. To answer your question: I do use those images, because they are the only translated images I have. Since the quality of the images is not very good, when I translate the images a second time I start from the original synthetic images, not the first-round translated images. Hope my answer helps your research.
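The self-training step mentioned above typically works by keeping only high-confidence predictions on the target domain as pseudo-labels and ignoring the rest. A minimal numpy sketch (illustrative only; the 0.9 threshold and 255 ignore id are assumptions, not values from the paper):

```python
import numpy as np

def make_pseudo_labels(seg_probs, threshold=0.9, ignore_index=255):
    """Turn (H, W, C) softmax output into pseudo-labels for retraining.

    Pixels whose maximum class probability exceeds `threshold` keep the
    argmax class; all other pixels are set to `ignore_index` so they do
    not contribute to the segmentation loss in the next round.
    """
    conf = seg_probs.max(axis=-1)          # per-pixel confidence
    pseudo = seg_probs.argmax(axis=-1)     # per-pixel predicted class
    pseudo[conf < threshold] = ignore_index
    return pseudo
```

Filtering by confidence is what keeps the noisy labels (e.g. sky misread as vegetation, usually predicted with low confidence) from disrupting the next round of training.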
Thank you very much! It helps a lot!
When I first train the translation model, CycleGAN transforms sky into vegetation or buildings. So, when training the segmentation model on the translated source images, there are many noisy labels that disrupt training.
Question is