Closed: neil-leon-menezes closed this issue 4 years ago.
Hi @skoarkid. The GG-CNN is trained on cropped patches of the Cornell data, as this more closely resembles the scene the real robot sees (i.e. a single object, close up, on a table). The full Cornell images are taken from much further away and contain a lot of background.
If you're interested in using this for real-time grasping, I'd recommend looking at my other repo, which contains ROS nodes for running it on a real robot: https://github.com/dougsm/mvp_grasp
I was trying to use this code for real-time grasping. As a first step, I tried to run eval on a TIFF image from the Cornell dataset without using the labels. I noticed that before feeding the depth image to the network, the code crops the image using the dataset labels. If I comment out the cropping function, the results are not accurate.
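For what it's worth, a label-free workaround is to crop a fixed-size patch around the image centre (or wherever the object is expected to be) and normalise it the same way the training patches are. Below is a minimal sketch, assuming a 300 px square input and mean-subtract-plus-clip depth normalisation; both the patch size and the normalisation are assumptions about the training pipeline, and the function names `center_crop` / `normalise` are mine, not from the repo.

```python
import numpy as np

def center_crop(depth, out_size=300):
    """Crop a square patch from the centre of a 2D depth image.

    A label-free stand-in for the label-driven crop in the data loader;
    the 300 px patch size is an assumption about the training crop.
    """
    h, w = depth.shape
    top = max((h - out_size) // 2, 0)
    left = max((w - out_size) // 2, 0)
    return depth[top:top + out_size, left:left + out_size]

def normalise(depth):
    """Zero-mean the depth patch and clip it to [-1, 1].

    Assumed to mirror the normalisation applied to the training patches;
    check the repo's data loader to confirm before relying on it.
    """
    return np.clip(depth - depth.mean(), -1.0, 1.0)
```

Feeding `normalise(center_crop(depth))` into the network should put the input much closer to the training distribution than the uncropped image, provided the object actually sits near the image centre.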