dougsm / ggcnn

Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" (RSS 2018)
BSD 3-Clause "New" or "Revised" License

About the input images #25

Closed: leo4707 closed this issue 4 years ago

leo4707 commented 4 years ago

Can I input an image from a Kinect and then use the model to find the grasp point?

dougsm commented 4 years ago

Yes, you definitely can. If you want to use this with a real robot, I'd recommend looking at this repo: https://github.com/dougsm/mvp_grasp. There is a ggcnn package for ROS, and also an example of interfacing with cameras and a robot (though if your hardware is different, you will need to adapt the code).

leo4707 commented 4 years ago

@dougsm In eval_ggcnn.py, are the input images the pictures from the Cornell dataset? Can I edit the code so that it can take a normal RGB-D image as input?

dougsm commented 4 years ago

That's correct, and you can use it to input custom depth images. Keep in mind that the network is, by default, trained on depth-only images, not RGB-D. However, you can train it on RGB-D if you wish.

Again, the ggcnn ROS package here has a good example of using custom depth images, even if you aren't using ROS.

https://github.com/dougsm/mvp_grasp/blob/master/ggcnn/src/ggcnn/ggcnn.py This file contains scripts for processing any depth image from a camera (including a Kinect) and producing a network output.
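
As a minimal sketch of that pipeline (assuming the `predict` helper from the linked ggcnn.py; its import path and exact signature may differ in your version):

```python
import numpy as np

# Helper from the linked ggcnn.py; import path and signature are
# assumptions based on that file and may differ in your version.
from ggcnn.ggcnn import predict

# A Kinect depth frame already converted to float32 metres
# (the network expects metres; see below in the thread).
depth = np.load('depth_metres.npy')  # hypothetical file; any HxW float array works

# predict() is assumed to crop/inpaint/normalise the depth internally and
# return per-pixel grasp quality, angle, and width maps.
points, angle, width, depth_proc = predict(depth, process_depth=True,
                                           crop_size=300, out_size=300)

# The best grasp is the pixel with the highest quality score.
best = np.unravel_index(np.argmax(points), points.shape)
print('Best grasp pixel:', best, 'angle:', angle[best], 'width:', width[best])
```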

leo4707 commented 4 years ago

@dougsm So I pass the depth image to process_depth_image and then call predict, and I will get the grasp point image, right? Sorry, I am new to Python and deep learning, so I have plenty of questions.

dougsm commented 4 years ago

Yes, that is correct.
One important thing: the network expects the depth image to be in metres from the camera.
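
For example, many Kinect drivers deliver depth as 16-bit unsigned integers in millimetres, so a conversion step roughly like this (a sketch, not code from the repo; `raw_kinect_frame` is a hypothetical input array) is needed first:

```python
import numpy as np

# Raw Kinect depth frames are commonly uint16 in millimetres;
# the network expects float depth in metres.
depth_mm = raw_kinect_frame.astype(np.uint16)   # raw_kinect_frame: hypothetical input array
depth_m = depth_mm.astype(np.float32) / 1000.0

# Zero usually means "no reading"; mark those pixels invalid so they
# can be masked or inpainted before inference.
depth_m[depth_mm == 0] = np.nan
```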

leo4707 commented 4 years ago

@dougsm After running the prediction, how do I draw the grasp rectangle on the RGB image?

dougsm commented 4 years ago

There is a function for visualising the output with the grasping rectangles here: https://github.com/dougsm/ggcnn/blob/ad48bc5f768fe0a9ba9fd47729638e0aed46e47b/utils/dataset_processing/evaluation.py#L7

There is an example of using this in the evaluation script: https://github.com/dougsm/ggcnn/blob/master/eval_ggcnn.py#L100
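
As a rough usage sketch combining the two (module paths and signatures assumed from the linked files; `pos_out`, `cos_out`, `sin_out`, and `width_out` stand for the four raw network outputs from a forward pass):

```python
# Module paths assumed from the ggcnn repo layout; check your version.
from models.common import post_process_output
from utils.dataset_processing import evaluation

# Filter/decode the raw network outputs into quality, angle, and width maps.
q_img, ang_img, width_img = post_process_output(pos_out, cos_out, sin_out, width_out)

# Overlay the top grasp rectangle(s) on the RGB and depth images,
# mirroring the call in eval_ggcnn.py.
evaluation.plot_output(rgb_img, depth_img, q_img, ang_img,
                       no_grasps=1, grasp_width_img=width_img)
```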

Hope that helps!

leo4707 commented 4 years ago

[screenshot of the result] I tried to use the function, but this is the result I get. I don't know why the rectangle is not on the item.

dougsm commented 4 years ago

Hello, the rectangle is not on the item because it is plotted on the best detected grasp, which in this scene is the edge of the table. You should not be using the full image, as it contains too much extra information, like the floor. GG-CNN is trained on images cropped around the items, so each item appears on a flat background only. Are you using the eval_ggcnn.py script? Its dataset loader should be automatically cropping the images appropriately.
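
If you are feeding your own camera images rather than going through the dataset loader, a simple centre crop around the object (a sketch; the 300 px side matches the network's usual input size, and `depth_m`/`rgb_img` are hypothetical arrays from the earlier steps) avoids this:

```python
import numpy as np

def centre_crop(img, size=300):
    """Crop a square window of side `size` from the centre of an HxW(xC) image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

# Crop depth and RGB identically so the predicted grasp pixels line up.
depth_crop = centre_crop(depth_m)
rgb_crop = centre_crop(rgb_img)
```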