Closed: sudharavali closed this issue 5 years ago.
It seems that the drones have different sizes and locations in your source and target domains. One thing you can do is to (1) apply an object (drone) detection method to predict the bounding box of the drone, (2) crop the drone based on its bounding box and resize it to 256x256, and (3) apply CycleGAN to the cropped images.
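Step (2) above can be sketched as follows. This is a minimal illustration, assuming the bounding box (x, y, w, h) comes from whatever detector you use in step (1); the detector itself is omitted, and the nearest-neighbour resize stands in for `cv2.resize` or PIL's `resize` in a real pipeline:

```python
import numpy as np

def crop_and_resize(image, bbox, out_size=256, margin=0.1):
    """Crop bbox = (x, y, w, h) from image with a small margin,
    then resize the crop to (out_size, out_size)."""
    x, y, w, h = bbox
    ih, iw = image.shape[:2]
    # expand the box slightly so the drone is not clipped at the edges
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(iw, x + w + mx), min(ih, y + h + my)
    crop = image[y0:y1, x0:x1]
    # simple nearest-neighbour resize (use cv2.resize or PIL in practice)
    rows = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    return crop[rows][:, cols]
```

Running this on every image in both domains gives you datasets where the drone fills roughly the same portion of each 256x256 frame, which is what CycleGAN then trains on.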
Ohh, that seems like a good solution. When you asked me to collect only "cropped images", did you mean to do that for the real images, or for images from both datasets? The real images are indeed very far from the camera and not zoomed in.
Please let me know which type of images you were referring to. Thanks a lot!
By "cropped images", I mean the output images from step (2): you crop the object and resize every crop to the same size.
Alternatively, in your simulation environment you can adjust the camera parameters so that your CG drone appears the same size as the real drone.
Alright, correct me if I am wrong. From what I understood, there are two options: (i) crop the images in the real dataset using their predicted bounding boxes, or (ii) adjust the simulated environment so that the rendered drone matches the size of the real drone. Let me know if this is what you meant. Thanks a lot for your advice.
Yes.
Thanks a lot!
Hello,
Thank you for your great work. I trained CycleGAN to generate realistic images from synthetic ones. After training, I observed an issue:
This is my input synthetic image:
and this is my generated image:
You can notice that the position of the drone in the generated image has shifted relative to the input image.
Why do you think that is, and what can I do to make sure the drone remains in the same position?
Please let me know. Thank you !