Open jizefeng0810 opened 4 years ago
You can generate the images at half size (512, 1024), since we eventually set Faster R-CNN to resize the shortest side of the image to 500. P.S. Remember to rescale the original labels accordingly so that the bounding boxes stay correct.
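To illustrate the rescaling step, here is a minimal sketch; the helper name `rescale_boxes` and the (height, width) size convention are assumptions for illustration, not taken from the repo:

```python
def rescale_boxes(boxes, orig_size, new_size):
    """Map (x1, y1, x2, y2) boxes from an image of orig_size
    to one of new_size; sizes are (height, width) tuples."""
    sy = new_size[0] / orig_size[0]  # height scale factor
    sx = new_size[1] / orig_size[1]  # width scale factor
    return [(x1 * sx, y1 * sy, x2 * sx, y2 * sy)
            for (x1, y1, x2, y2) in boxes]

# Halving a (1024, 2048) image to (512, 1024) halves every box coordinate.
halved = rescale_boxes([(100, 200, 300, 400)], (1024, 2048), (512, 1024))
```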
When performing the K->C task, is the car class the only class used when training on the KITTI dataset? Or are all classes trained, with only the car-class AP shown in the final result?
I find that after using CycleGAN to generate intermediate-domain images, a classifier trained on them cannot distinguish them from target-domain images; it seems impossible to train a binary classifier with a plain classification network alone. Could you share some details about debugging the classifier plus the GRL?
For the K->C task, only the car class is trained.
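As a sketch of what restricting training to one class can look like, assuming KITTI-style label lines whose first token is the class name (`filter_car_lines` is a hypothetical helper, not the repo's code):

```python
def filter_car_lines(label_lines):
    """Keep only annotations for the 'Car' class from
    KITTI-format label lines (class name is the first token)."""
    return [line for line in label_lines if line.split()[0] == "Car"]

labels = [
    "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12",
    "Pedestrian 0.00 0 -0.20 712.40 143.00 810.73 307.92",
]
cars_only = filter_car_lines(labels)  # only the 'Car' line remains
```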
As for your second question, I assume the classifier you mention is the discriminator. The discriminator's job is to distinguish between source- and target-domain images, while the Gradient Reverse Layer (GRL) reverses gradients so that the backbone, which generates the image features, learns to fool the discriminator. Our end goal is a generalized backbone producing features that are indistinguishable between the two domains. So when you see that the discriminator cannot tell the target-domain images apart, it is actually a positive sign: both the backbone and the discriminator are doing their jobs.
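The GRL mechanism described above can be sketched with a tiny manual forward/backward pair; the class name and the `lambda_` reversal-strength parameter are assumptions for illustration, not the repo's implementation:

```python
class GradReverse:
    """Identity in the forward pass; multiplies the upstream gradient
    by -lambda_ in the backward pass, so the backbone is updated to
    *increase* the discriminator's loss (adversarial feature alignment)."""
    def __init__(self, lambda_=1.0):
        self.lambda_ = lambda_

    def forward(self, x):
        return x  # features flow to the discriminator unchanged

    def backward(self, grad_output):
        # Flip the gradient coming back from the discriminator
        # before it reaches the backbone.
        return -self.lambda_ * grad_output

grl = GradReverse(lambda_=1.0)
out = grl.forward(3.0)       # identity: 3.0
grad = grl.backward(2.0)     # reversed: -2.0
```

In a real PyTorch model this is typically written as a custom `torch.autograd.Function` whose `backward` negates (and optionally scales) `grad_output`, placed between the backbone features and the discriminator head.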
Thank you very much for sharing the code. Are the fake images generated at the original image size? When the image size is (1024, 2048), GPU memory usage is very high. Is there a better way to handle this situation?