Hi @rickstaa,
Wow, this is really cool. Amazing work! I think this is the first time someone has developed another open-source project utilizing GQ-CNNs, so I'm excited 🎉.
I believe this is perfectly fine, however I'm going to cc @jeffmahler since he has far more experience on this sort of thing.
Thanks, Vishal
@visatish Thanks a lot! I am impressed with how well your pre-trained GQ-CNN and FC-GQ-CNN models work with my setup, which uses a Kinect instead of a Photoneo PhoXi S sensor. I am planning to retrain them in the future after adding some additional grasping methods.
@rickstaa Glad to hear that and looking forward to your future plans!
I created an autonomous grasping solution for the Kinect v2 and the Franka Emika Panda robot using your pre-trained GQ-CNN model, and I have to say that it performs reasonably well considering it was not retrained for my specific setup. The code of this solution can be found here. I just wanted to do a quick check whether I am allowed to publish my grasping solution, which uses your pre-trained CNN, the way it is currently published. I published my code under an MIT license and added a note that the licenses of the submodules, including yours, should be respected.
Hi @rickstaa, I am using the Kinect v2 to test GQCNN-4.0-PJ, but I don't know how to connect my camera to the gqcnn package. I want to produce inputs like depth_3.npy and segmask_3.png. Could you tell me how to do this? Thank you very much.
Hey @baihaisheng, of course. I created the camera connection by using the IAI_kinect2 ROS package while modifying the original GQCNN Python ROS node script. You can take a look at the repository to see how this is done. I did not include a segmask yet. The repository can be found here.
Hi @rickstaa, thank you for your quick reply, I will take a good look at your repository. By the way, did you modify examples/policy_ros.py? I just don't know how to modify this for my Kinect v2; could you help me? Thanks.
Hi, @baihaisheng.
To connect the gqcnn package to the Kinect camera, I first start the IAI_kinect2 processing node using the following part of my launch file:
panda_autograsp.launch#L79-L84
I then launch my modified grasp_planner_node.py together with my panda_autograsp_server_node. In the latter node, I subscribe to the camera topics that are published by the IAI_kinect2 processing node:
panda_autograsp_server_ros.py#L491-L533
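In case those links go stale, the subscription part boils down to something like the sketch below. Note that this is a simplified sketch rather than the exact code from panda_autograsp_server_ros.py, and the topic names assume the default kinect2_bridge namespaces:

```python
# Simplified sketch (not the exact panda_autograsp code): subscribe to the
# rectified color/depth images and camera info published by kinect2_bridge.
# Topic names assume the default iai_kinect2 namespaces; adjust as needed.
import rospy
from sensor_msgs.msg import Image, CameraInfo


class KinectListener(object):
    def __init__(self):
        self.color_msg = None
        self.depth_msg = None
        self.camera_info_msg = None
        rospy.Subscriber("/kinect2/hd/image_color_rect", Image, self._color_cb)
        rospy.Subscriber("/kinect2/hd/image_depth_rect", Image, self._depth_cb)
        rospy.Subscriber("/kinect2/hd/camera_info", CameraInfo, self._info_cb)

    def _color_cb(self, msg):
        self.color_msg = msg

    def _depth_cb(self, msg):
        self.depth_msg = msg

    def _info_cb(self, msg):
        self.camera_info_msg = msg


if __name__ == "__main__":
    rospy.init_node("kinect_listener_sketch")
    KinectListener()
    rospy.spin()
```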
Lastly, I call the grasp_planner service using the camera messages I received from the IAI_kinect2 topics:
panda_autograsp_server_ros.py#L616-L627
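The service call itself then looks roughly like the sketch below. Treat the request fields as an assumption and check the GQCNNGraspPlanner.srv definition that ships with your gqcnn version for the exact signature (I also do not pass a segmask yet):

```python
# Rough sketch of calling the gqcnn grasp planning service with the camera
# messages received above. The request fields (color_image, depth_image,
# camera_info) are an assumption; check the .srv definition in your gqcnn
# version for the exact fields and service name.
import rospy
from gqcnn.srv import GQCNNGraspPlanner


def plan_grasp(color_msg, depth_msg, camera_info_msg):
    rospy.wait_for_service("grasp_planner")
    grasp_planner = rospy.ServiceProxy("grasp_planner", GQCNNGraspPlanner)
    resp = grasp_planner(color_msg, depth_msg, camera_info_msg)
    return resp  # response contains the planned grasp (pose, quality, ...)
```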
Hope that helps. Let me know if you run into problems.
Hi @rickstaa, thank you so much for your reply. I can now test my own depth image with GQCNN-4.0-PJ and get the pose. The commands I ran are as follows:
$ roslaunch gqcnn grasp_planning_service.launch model_name:=GQCNN-4.0-PJ
$ python examples/policy_ros.py --depth_image data/examples/clutter/phoxi/dex-net_4.0/depth_3.npy --segmask data/examples/clutter/phoxi/dex-net_4.0/segmask_3.png --camera_intr data/calib/phoxi/phoxi.intr
Now I want to test FC-GQCNN-4.0-PJ in ROS; which commands should I use? Thanks.
@baihaisheng Please see https://berkeleyautomation.github.io/gqcnn/tutorials/tutorial.html#with-ros.
Hi @visatish, I noticed that the gqcnn package was modified. Do I need to run git pull to synchronize?
Hi @visatish, I ran git pull to update the gqcnn package, but when I run the following roslaunch command, something goes wrong:
$ roslaunch gqcnn grasp_planning_service.launch model_name:=GQCNN-4.0-PJ
Then I tried to run
$ roslaunch gqcnn grasp_planning_service.launch model_name:=FC-GQCNN-4.0-PJ fully_conv:=True
and nothing went wrong.
Hi @baihaisheng,
Sorry about that, fixed in https://github.com/BerkeleyAutomation/gqcnn/pull/96!
Thanks, Vishal
Hi @visatish @jeffmahler @rickstaa, I have finished the ROS tutorial and can now get the grasp pose shown in the image. The commands I ran are as follows:
$ roslaunch gqcnn grasp_planning_service.launch model_name:=GQCNN-4.0-PJ
$ python examples/policy_ros.py --depth_image data/examples/clutter/phoxi/dex-net_4.0/depth_3.npy --segmask data/examples/clutter/phoxi/dex-net_4.0/segmask_3.png --camera_intr data/calib/phoxi/phoxi.intr
Next I want to control a UR5 robot to grasp the object, so I ran rostopic list and found the /gqcnn_grasp/pose topic. Running rostopic echo /gqcnn_grasp/pose gave me the pose.
However, I find that the orientation may not be right, and I would like to know which coordinate frame the pose is expressed in. Can you help me with this? Thank you so much.
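For context, this is roughly how I try to read the pose and express it in the robot base frame. This is just a sketch: it assumes /gqcnn_grasp/pose carries a geometry_msgs/PoseStamped stamped in the camera frame and that a calibrated camera-to-base transform is available on tf, which may not match the actual setup:

```python
# Sketch only: listen for the planned grasp pose and express it in the robot
# base frame with tf2. Assumes /gqcnn_grasp/pose is a geometry_msgs/PoseStamped
# stamped in the camera frame and that a camera->base transform is on tf.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2
from geometry_msgs.msg import PoseStamped


def pose_cb(msg, args):
    tf_buffer, base_frame = args
    try:
        # Re-express the grasp pose in the robot base frame before sending it
        # to the motion planner.
        pose_in_base = tf_buffer.transform(msg, base_frame, rospy.Duration(1.0))
        rospy.loginfo("Grasp pose in %s:\n%s", base_frame, pose_in_base)
    except (tf2_ros.LookupException, tf2_ros.ExtrapolationException,
            tf2_ros.ConnectivityException) as exc:
        rospy.logwarn("Could not transform grasp pose: %s", exc)


if __name__ == "__main__":
    rospy.init_node("grasp_pose_transformer_sketch")
    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)
    rospy.Subscriber("/gqcnn_grasp/pose", PoseStamped, pose_cb,
                     callback_args=(tf_buffer, "base_link"))
    rospy.spin()
```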
Hi @rickstaa, I cloned your package on the kinetic-devel branch. Can you tell me the sequence of commands to run? Something like:
$ roslaunch gqcnn grasp_planning_service.launch model_name:=GQCNN-4.0-PJ
$ python examples/policy_ros.py --depth_image data/examples/kinect2_depth.npy --segmask data/examples/segmask.png --camera_intr data/calib/kinect2.intr
Thank you.