tnikolla / robot-grasp-detection

Detecting robot grasping positions with deep neural networks. The model is trained on the Cornell Grasping Dataset. This is an implementation mainly based on the paper 'Real-Time Grasp Detection Using Convolutional Neural Networks' by Redmon and Angelova.
Apache License 2.0

About depth information #21

Open qiushenjie opened 6 years ago

qiushenjie commented 6 years ago

Hi,

The paper you mentioned, "Real-time Grasp Detection...", includes image depth information when training the CNN, while the description of your repository "robot-grasp-detection" says "Detecting grasping... using RGB images".

Does this mean that when training your network you only used RGB images, and depth information was not used? Furthermore, the Cornell Grasping Dataset has point cloud information for all images. If your network is trained on this dataset, shouldn't depth information be included? Am I correct? Thanks.

edwardnguyen1705 commented 4 years ago

@qiushenjie Just replace the B channel (in RGB) with D (the depth image), giving an RG-D input. Alternatively, pass the RGB image and a DDD image (depth replicated into three channels) through two separate networks, then concatenate the two feature maps and feed the result into the FC layers.
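
The first option (the channel-substitution trick also used by Redmon and Angelova) can be sketched in a few lines of NumPy. This is a hypothetical illustration, not code from this repo: the function name `make_rgd` and the normalization choice (rescaling depth to the 0-255 range of the color channels) are my own assumptions.

```python
import numpy as np

def make_rgd(rgb, depth):
    """Build an RG-D image by replacing the blue channel of an RGB image
    (H, W, 3, uint8) with a depth map (H, W, float) rescaled to 0-255.

    Hypothetical sketch of the 'replace B with D' idea; the exact
    preprocessing in the original paper/repo may differ."""
    rgd = rgb.copy()
    d = depth.astype(np.float64)
    # Rescale depth to [0, 255] so it matches the color channels' range.
    d = (d - d.min()) / (d.max() - d.min() + 1e-8) * 255.0
    rgd[..., 2] = np.clip(np.rint(d), 0, 255).astype(np.uint8)  # channel order assumed R, G, B
    return rgd

# Tiny usage example with synthetic data.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
depth = np.random.rand(4, 4)
rgd = make_rgd(rgb, depth)
print(rgd.shape)  # same (H, W, 3) shape, so RGB-pretrained weights still load
```

The appeal of this variant is that the network input stays three channels, so ImageNet-pretrained weights can be reused unchanged; the two-stream variant keeps all four channels of information but doubles the feature-extraction cost.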