mjohn123 closed this issue 8 years ago
Does your script provide code for matching these two images?
Matching them in what sense? Could you please clarify the question?
@mohomran: The goal of matching is to obtain a wider view (cf. "Vision-based Offline-Online Perception Paradigm for Autonomous Driving"). Since we have both left and right images, I would like to use them together to aid semantic segmentation.
Hi,
the two cameras form a stereo setup. We provide pre-computed disparity maps that give you the matches between the left and right images. Annotations were done in the left image only; if you really need them for the right image, you could try to warp them using the disparity maps.
Best, Marius
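The warping Marius suggests could be sketched as follows. This is a minimal illustration, not part of cityscapesScripts: it assumes you already have a dense per-pixel disparity map in pixel units (i.e. decoded from the disparity files), and the function name and the 255 "unlabeled" sentinel are just illustrative choices.

```python
import numpy as np

def warp_labels_to_right(labels_left, disparity):
    """Warp a left-image label map into the right camera view.

    labels_left: (H, W) integer label map for the left image
    disparity:   (H, W) float disparity in pixels; a left pixel at
                 column x maps to column x - d in the right image
    Pixels with invalid (non-positive) disparity, and right-image
    pixels that receive no match, stay unlabeled (255 here).
    Note: occlusions are handled naively; overlapping writes are
    not resolved by depth, so results near occlusion edges are rough.
    """
    h, w = labels_left.shape
    labels_right = np.full((h, w), 255, dtype=labels_left.dtype)
    ys, xs = np.nonzero(disparity > 0)          # pixels with valid disparity
    xr = np.round(xs - disparity[ys, xs]).astype(int)
    valid = (xr >= 0) & (xr < w)                # keep targets inside the image
    labels_right[ys[valid], xr[valid]] = labels_left[ys[valid], xs[valid]]
    return labels_right
```

Holes from occlusions and invalid disparities will remain unlabeled, so any training on such warped annotations should ignore the sentinel value.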
Hello CityscapesTeam,
In the available data, you provide two kinds of images: left and right. If I want to match them to make a wide view, does your script provide code for matching these two images? In addition, if it is available, do you provide labels (gtFine) for the matched image?
I ask because I found some papers that use depth information to enhance accuracy, but I do not know how they do this if these labels are not available to download.
Thanks, all.