Detecting robot grasping positions with deep neural networks. The model is trained on the Cornell Grasping Dataset. This is an implementation based mainly on the paper 'Real-Time Grasp Detection Using Convolutional Neural Networks' by Redmon and Angelova.
The paper you mention, "Real-Time Grasp Detection...", includes image depth information when training the CNN, while your repository "robot-grasp-detection" is described as "Detecting grasping... using RGB images".
Does this mean that when training your network you used only RGB images, and the depth information was not used?
Furthermore, the Cornell Grasping Dataset has point cloud information for all images. If your network is trained on this dataset, the depth information should be included. Am I correct?
Thanks.
@qiushenjie Just replace the B channel (in RGB) with D (the depth image). Alternatively, pass RGB and DDD to two separate networks, then concatenate the two feature maps and feed them into the FC layers.
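For illustration, here is a minimal numpy sketch of the first option, substituting depth for the blue channel (RG-D). The function name and the depth normalization are my own assumptions, not code from this repo:

```python
import numpy as np

def rgd_from_rgb_depth(rgb, depth):
    """Replace the blue channel of an RGB image with depth (RG-D).

    rgb:   (H, W, 3) uint8 color image
    depth: (H, W) float depth map (e.g. rasterized from the
           dataset's point cloud files)
    """
    # Scale depth to the same 0-255 range as the color channels.
    d = depth - depth.min()
    d = (255.0 * d / max(float(d.max()), 1e-6)).astype(np.uint8)
    rgd = rgb.copy()
    rgd[..., 2] = d  # drop B, substitute D
    return rgd
```

And a sketch of the second option, the two-stream variant. The idea is framework-agnostic; it is shown in PyTorch only for brevity, the class name and layer sizes are illustrative rather than this repo's actual architecture, and the 5-value output follows the five-dimensional grasp rectangle representation {x, y, theta, h, w} used in the Redmon and Angelova paper:

```python
import torch
import torch.nn as nn

class TwoStreamGraspNet(nn.Module):
    """Two-stream sketch: one CNN over RGB, one over depth tiled to
    three channels (DDD); the feature maps are concatenated and fed
    into fully connected layers."""

    def __init__(self):
        super().__init__()

        def stream():
            # Illustrative backbone, not the repo's real layers.
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(7),
            )

        self.rgb_stream = stream()
        self.ddd_stream = stream()
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 7 * 7, 512),
            nn.ReLU(),
            nn.Linear(512, 5),  # grasp rectangle (x, y, theta, h, w)
        )

    def forward(self, rgb, depth):
        # depth: (N, 1, H, W) -> tile to DDD: (N, 3, H, W)
        ddd = depth.repeat(1, 3, 1, 1)
        feats = torch.cat([self.rgb_stream(rgb), self.ddd_stream(ddd)], dim=1)
        return self.fc(feats)
```

In either case the depth values should be normalized to a range comparable to the color channels before training, since raw depth is on a very different scale.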