Hi, thanks for the issue. This repo is still under development. The supported functions of ICG in this repo are limited.
For (1), the test node in this repo assumes you have a valid image publisher; it subscribes to it and does the pose tracking. If you'd like to use the YCB dataset, you can write an image publisher that handles the file I/O and publishes the images to the local network (a minimal sketch is shown below).
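For reference, here is a rough sketch of what such a publisher could look like. The directory layout, file names, and frame id are hypothetical; the topic matches the color topic used later in this thread, and a full setup would also need to publish the depth images and camera_info messages that the tracker config subscribes to.

```cpp
// Minimal YCB color-image publisher sketch (hypothetical paths; adjust to your data).
#include <cstdio>
#include <string>

#include <cv_bridge/cv_bridge.h>
#include <image_transport/image_transport.h>
#include <opencv2/opencv.hpp>
#include <ros/ros.h>
#include <std_msgs/Header.h>

int main(int argc, char **argv) {
  ros::init(argc, argv, "ycb_image_publisher");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("/camera/color/image_raw", 1);

  // Hypothetical location of the extracted YCB color frames.
  const std::string dir = "/path/to/ycb_sequence/rgb/";
  ros::Rate rate(30.0);
  int frame = 0;
  while (ros::ok()) {
    char name[32];
    std::snprintf(name, sizeof(name), "%06d.png", frame++);
    cv::Mat image = cv::imread(dir + name, cv::IMREAD_COLOR);
    if (image.empty()) break;  // stop at the end of the sequence

    std_msgs::Header header;
    header.stamp = ros::Time::now();
    header.frame_id = "camera_color_optical_frame";
    pub.publish(cv_bridge::CvImage(header, "bgr8", image).toImageMsg());

    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```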
For (2), since the camera_frame parameter is not used in the test node, triangle.yaml should not bother you. The yaml file specifies the transformation from the mesh to the real object you want in the scene (your mesh may have slight offsets relative to the real object). You may refer to the original README.
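For intuition, the pose stored in that yaml is just a 4x4 homogeneous matrix. As a purely illustrative sketch (the offset values below are made up, and Eigen is used only to build the matrix), a 5 mm shift along x combined with a 2° rotation about z would look like this; an identity matrix, as in the triangle.yaml discussed here, simply means no correction:

```cpp
// Illustrative mesh-to-object correction (made-up numbers, not from the repo).
#include <cmath>
#include <iostream>

#include <Eigen/Geometry>

int main() {
  // Suppose the mesh origin sits 5 mm off along x and is rotated 2 degrees
  // about z relative to the real object.
  Eigen::Isometry3d mesh2body = Eigen::Isometry3d::Identity();
  mesh2body.translate(Eigen::Vector3d(0.005, 0.0, 0.0));
  mesh2body.rotate(Eigen::AngleAxisd(2.0 * M_PI / 180.0, Eigen::Vector3d::UnitZ()));

  // This 4x4 matrix is the kind of value that goes into the yaml file.
  std::cout << mesh2body.matrix() << std::endl;
  return 0;
}
```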
For (3), yes, the initial pose is a big problem. However, ICG itself is a pose-tracking library, so it may be hard for it to detect the initial pose on its own.
For (4), ICG supports multiple objects, but I didn't implement this feature in this repo.
In general, for parameter settings, you may refer to this issue.
Thanks, and you are welcome to contribute to this repo.
Hello, thank you for your reply, and sorry for the delayed response. For (2), indeed, camera_frame is not used. After some tests, I found that the published pose is in the camera_color_optical_frame with the RealSense D435 and rs_aligned_depth.launch. See the picture below, with the robotic hand in RViz and the banana as the object being tracked.
The result is almost perfect, but the pose has a little misalignment around the middle finger. It might be due to hand-eye calibration error.
Great 😃 Glad to see it's working.
Hello, I ran into another issue. I used another computer with Ubuntu 20.04. When I run icg_ros_node, it gets stuck in the function SetUpAllObjects() in icg/tracker.cpp. The node outputs "Viewer already exists".
Many pointers seem to be incorrect, and the node doesn't generate the .bin files either.
Please let me know what I should do. Thank you!
Update: when I remove the line SetUpObjectPtrs(&all_camera_ptrs_), it outputs "Color camera color_interface was not set up". But my RealSense node works well with the launch file rs_aligned_depth.launch.
This is expected, since you did not provide the necessary info for the interface. But I'm not sure what causes your issue; maybe double-check your config. Good luck.
```cpp
icg_ros::ICG_ROS_Config config;
config.config_dir = config_dir;
config.body_names.push_back("banana");
config.tracker_name = "tracker";
config.renderer_geometry_name = "renderer_geometry";
config.color_camera_name = "color_interface";
config.color_camera_topic = "/camera/color/image_raw";
config.color_camera_info_topic = "/camera/color/camera_info";
config.depth_camera_name = "depth_interface";
config.depth_camera_topic = "/camera/aligned_depth_to_color/image_raw";
config.depth_camera_info_topic = "/camera/aligned_depth_to_color/camera_info";
config.color_depth_renderer_name = "color_depth_renderer";
config.depth_depth_renderer_name = "depth_depth_renderer";

icg_ros::ICG_ROS interface(interface_nh, config);
```
I didn't change anything in the config. It worked on another computer with the same RealSense D435.
OK, I found the reason: the RealSense cannot stream both color and depth at 1280x720 at the same time.
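For anyone hitting the same limit: assuming your version of rs_aligned_depth.launch forwards the usual realsense2_camera width/height arguments (this is an assumption, check the launch file), one workaround is to pick a resolution pair the camera can stream simultaneously, for example:

roslaunch realsense2_camera rs_aligned_depth.launch color_width:=640 color_height:=480 depth_width:=640 depth_height:=480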
Hello, thanks a lot for your repo. It is a little complicated for me to run the original repo by DLR, but I am familiar with ROS and was able to run your code well.
I would like to ask some questions:
(1) I am not sure if I am running the repo correctly. My goal is to track some objects from the YCB dataset. So I downloaded the .obj files here, put textured.obj under config/, and copied and pasted xx.yaml and xx_detector.yaml. Then I put the corresponding object in front of the camera and run rosrun icg_ros icg_test_node -config_dir=icg_ros/config -camera_frame=camera_link, which creates two *_model.bin files and launches the GUI. Finally, I press D to initialize and T for tracking.
(2) For the pose published in ROS, if I keep an identity transformation matrix in triangle.yaml, does it mean the pose is in the original/zero frame in which the object is shown in MeshLab, expressed with respect to the camera_frame passed to icg_test_node?
(3) The transformation matrix in triangle_detector.yaml seems to be the initial pose estimate for alignment? If there is a big initial error, it would be hard to track the object. So is it better to make sure that the initial pose is almost correct?
(4) What should I do if I would like to track multiple objects? As long as we have the .obj files, can we track them?
Thank you again!