Open · sunt40 opened this issue 3 weeks ago
Hello @sunt40,
The RealSenseCamera was used in our robotic experiment (page 6 in our paper).
However, it will not directly infer grasp poses for your test images; you can modify it to suit your needs. The following code can be a useful starting point:
```python
q_img, ang_img, width_img = post_process_output(pred['pos'], pred['cos'], pred['sin'], pred['width'])
grasps = detect_grasps(q_img, ang_img, width_img)

# Get grasp position from model output
pos_z = depth[grasps[0].center[0] + self.cam_data.top_left[0],
              grasps[0].center[1] + self.cam_data.top_left[1]] * self.cam_depth_scale - 0.04
pos_x = np.multiply(grasps[0].center[1] + self.cam_data.top_left[1] - self.camera.intrinsics.ppx,
                    pos_z / self.camera.intrinsics.fx)
pos_y = np.multiply(grasps[0].center[0] + self.cam_data.top_left[0] - self.camera.intrinsics.ppy,
                    pos_z / self.camera.intrinsics.fy)
```
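The last three assignments are just the pinhole back-projection of the detected grasp pixel into the camera frame, x = (u - ppx) * z / fx and y = (v - ppy) * z / fy, where `grasps[0].center` is a (row, col) pixel in the cropped input, `self.cam_data.top_left` shifts it back into the full image, and the hard-coded `-0.04` subtracts a small offset from the depth. A standalone sketch of that step (with illustrative intrinsics, not values read from a real camera) would be:

```python
import numpy as np

def pixel_to_camera_frame(row, col, depth_m, fx, fy, ppx, ppy):
    """Back-project a pixel with known depth (metres) to camera-frame XYZ."""
    x = (col - ppx) * depth_m / fx   # image columns map to the camera x axis
    y = (row - ppy) * depth_m / fy   # image rows map to the camera y axis
    return np.array([x, y, depth_m])

# Illustrative intrinsics for a 640x480 stream (not values from a real device).
print(pixel_to_camera_frame(row=240, col=320, depth_m=0.5,
                            fx=615.0, fy=615.0, ppx=320.0, ppy=240.0))
```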
In the original snippet, `post_process_output` is available in `inference/post_process.py`, and `pred` is the output predicted by the model.
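If you only need grasp poses for static test images (no RealSense attached), a rough sketch along these lines may be a starting point. Note that the checkpoint path, the depth-image path, the `net.predict(...)` call returning a dict with `pos`/`cos`/`sin`/`width`, and the `detect_grasps` import location are assumptions you may need to adapt to the actual codebase:

```python
import numpy as np
import torch

from inference.post_process import post_process_output
from utils.dataset_processing.grasp import detect_grasps  # assumed import path

MODEL_PATH = 'trained_models/model.pth'   # hypothetical checkpoint path
DEPTH_PATH = 'test_images/depth_0.npy'    # hypothetical depth image in metres

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = torch.load(MODEL_PATH, map_location=device)  # assumes the full model was saved
net.eval()

# Shape the depth image as a (1, 1, H, W) float tensor for a depth-only network.
depth = np.load(DEPTH_PATH).astype(np.float32)
x = torch.from_numpy(depth[np.newaxis, np.newaxis]).to(device)

with torch.no_grad():
    pred = net.predict(x)  # assumed to return {'pos', 'cos', 'sin', 'width'}

q_img, ang_img, width_img = post_process_output(
    pred['pos'], pred['cos'], pred['sin'], pred['width'])
grasps = detect_grasps(q_img, ang_img, width_img, no_grasps=1)

for g in grasps:
    print('center (row, col):', g.center, 'angle (rad):', g.angle, 'width (px):', g.width)
```

You would also need to match whatever input preprocessing (cropping, normalisation, RGB vs. depth channels) your trained model expects.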
I hope this explanation helps. Please let me know if you have any further questions.
I see your file `grasp_generator.py` in the `inference` folder. But regarding the line `from hardware.camera import RealSenseCamera` in `grasp_generator.py`: what is `hardware`? @andvg3 @anavuongdin