IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

How to get data of object's pose (Rx, Ry, Rz)? #7560

Closed qodlsrud20712 closed 3 years ago

qodlsrud20712 commented 3 years ago

Hello, I am trying to implement object detection using the D435i model and PyTorch. I am now trying to get the angle (pose value) of an object, but I don't know how. To compute it, I have the 3D point (x, y, z) of the object, the top-left (x, y) and bottom-right (x, y) of the bounding box, the center (x, y), the HFOV (54°), the VFOV (42°), and the camera matrix (fx, fy = 616). 1) I found the horizontal angle value of the object here: https://github.com/IntelRealSense/librealsense/issues/5553#issue-543234625. Is this method right?

align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
color_frame = aligned_frames.get_color_frame()
depth_frame = aligned_frames.get_depth_frame()

color_intrin = color_frame.profile.as_video_stream_profile().intrinsics

dis = depth_frame.get_distance(x_int, y_int)
# print("dis_int", dis)
p = rs.rs2_deproject_pixel_to_point(color_intrin, [x, y], dis)
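The #5553 approach derives the angle from the deprojected 3D point with basic trigonometry. As a sketch of that math (pure Python, no camera needed; `p` is assumed to be the `[x, y, z]` list returned by `rs2_deproject_pixel_to_point`, with x to the right and z forward in camera coordinates):

```python
import math

def horizontal_angle_deg(p):
    """Angle of the point left/right of the optical axis (x right, z forward)."""
    return math.degrees(math.atan2(p[0], p[2]))

def vertical_angle_deg(p):
    """Angle of the point above/below the optical axis (y down in camera coords)."""
    return math.degrees(math.atan2(p[1], p[2]))

# A point 1 m forward and 1 m to the right sits at 45 degrees horizontally.
print(horizontal_angle_deg([1.0, 0.0, 1.0]))  # -> 45.0
```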

2) This is how I am currently getting the data. Is this the right way?

3) How do I get the data of the object's pose (Rx, Ry, Rz) with the data I have?


Required Info
Camera Model D435i
Firmware Version i use pycharm
Operating System & Version WIN10
Kernel Version (Linux Only) I don't know
Platform pc
SDK Version i use pycharm
Language python
Segment Robot, Computer Vision


MartyG-RealSense commented 3 years ago

Hi @qodlsrud20712 A project in the link below called AUBOi5-D435-ROS-DOPE may be a useful reference for you. It uses RealSense D435, Pytorch and ROS to get the 6 degrees of freedom (6-DOF) pose of recognized objects.

https://github.com/avinashsen707/AUBOi5-D435-ROS-DOPE

If you would prefer to continue to use your own project code though, I will certainly try to assist you.

qodlsrud20712 commented 3 years ago


Hello, I want to challenge myself in my own way for now. If it doesn't all work, I'll try the linked method. Thank you. Is there really no way to find the pose (Rx, Ry, Rz) with the data I have now? I am writing Python.

MartyG-RealSense commented 3 years ago

The 400 Series camera models do not have built-in support for 6DOF pose data, so obtaining pose can mean using workarounds. These can include a SolvePNP algorithm with OpenCV, or inference on ROS like the project in the link above. I am not personally aware of an example of a pose-calculating script for Python that can operate on the D435i without external assistance from software such as ROS or OpenCV.

Below is another example of a Python-based project for getting pose from detected objects with ROS and inference that is much simpler than the first example:

https://github.com/pauloabelha/Deep_Object_Pose

qodlsrud20712 commented 3 years ago


Thanks for the help. I have to separate stacked bolts. Is there any way to get the angle with the camera's depth function? I tried with OpenCV, but the results weren't good.

MartyG-RealSense commented 3 years ago

Aside from https://github.com/IntelRealSense/librealsense/issues/5553#issue-543234625 other approaches to detecting the angle by using RealSense depth in a trigonometry equation are discussed in the link below:

https://answers.ros.org/question/300639/how-to-get-the-distance-and-the-angle-distance-and-bearing-from-a-depth-camera/
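One simple version of such a trigonometry approach (my own hedged sketch, not taken from the linked thread) samples depth at two pixels across the object and derives the tilt from the depth difference and the known spacing between the sample points:

```python
import math

def tilt_angle_deg(depth_left_m, depth_right_m, spacing_m):
    """Tilt of a surface toward/away from the camera, estimated from two
    depth samples taken spacing_m apart (all distances in metres)."""
    return math.degrees(math.atan2(depth_right_m - depth_left_m, spacing_m))

# A surface whose right edge is 0.10 m further away than its left edge,
# sampled 0.10 m apart, is tilted 45 degrees relative to the image plane.
print(tilt_angle_deg(0.50, 0.60, 0.10))  # -> 45.0
```

On a real camera the two depth values would come from `depth_frame.get_distance()` at pixels on either side of the object, and the spacing in metres from deprojecting those pixels.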

qodlsrud20712 commented 3 years ago


Thanks for the help. First, we are going to try other deep learning techniques. One more question: is there an example of pose estimation with a chessboard among the RealSense examples?

MartyG-RealSense commented 3 years ago

The link below is not specifically RealSense, but it does provide an example in OpenCV and Python of how to use a camera with a chessboard to estimate pose.

https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_pose/py_pose.html

qodlsrud20712 commented 3 years ago

Thanks for the help. I succeeded in getting the angle of the object using solvePnP! let's close this issue.

MartyG-RealSense commented 3 years ago

Thanks so much @qodlsrud20712 for the update - I'm pleased to hear that you succeeded!

sivaeinfo commented 3 years ago

https://github.com/avinashsen707/AUBOi5-D435-ROS-DOPE

Hi, I have a RealSense D435i and I need to track the pose of two 3D objects simultaneously. I referred to this link. Will you assist me in running these packages? I have Ubuntu 18.04, ROS Melodic, and a D435i.

email: sivaeinfo@gmail.com https://github.com/avinashsen707/AUBOi5-D435-ROS-DOPE. https://github.com/pauloabelha/Deep_Object_Pose

MartyG-RealSense commented 3 years ago

Hi @sivaeinfo The simpler project in the link below may be useful to you. It uses RealSense D435, ROS and OpenCV to detect objects of a particular color (cubes) and track pose.

Edit: the link no longer exists. Here is a different RealSense-compatible ROS project for segmenting a ball by its color and detecting its position.

https://github.com/ctu-mrs/object_detect

sivaeinfo commented 3 years ago

@MartyG-RealSense Thank you. I will look into this and practice. My work also involves object detection and hand pose estimation. If you could suggest some documents on this, it would be really useful.

MartyG-RealSense commented 3 years ago

There is a long discussion about hand and finger tracking at the link below.

https://github.com/IntelRealSense/librealsense/issues/7877

sivaeinfo commented 3 years ago

I could not launch rs_rgbd.launch to open the RealSense D435i camera node.

I have installed all necessary files for the mentioned packages. Then I ran the command below:

bit@bit-OptiPlex-990:~$ roslaunch realsense2_camera rs_rgbd.launch

It shows an error and did not open RViz:

WARNING: Package name "object_detection_with_intel_D415" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, underscores, and dashes. ... logging to /home/bit/.ros/log/a3d4b3c0-a724-11eb-a4aa-d4bed9944d8b/roslaunch-bit-OptiPlex-990-5534.log Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB.

WARNING: Package name "object_detection_with_intel_D415" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, underscores, and dashes. started roslaunch server http://bit-OptiPlex-990:41865/

SUMMARY PARAMETERS /camera/realsense2_camera/accel_fps: -1 /camera/realsense2_camera/accel_frame_id: camera_accel_frame /camera/realsense2_camera/accel_optical_frame_id: camera_accel_opti... /camera/realsense2_camera/align_depth: True /camera/realsense2_camera/aligned_depth_to_color_frame_id: camera_aligned_de... /camera/realsense2_camera/aligned_depth_to_fisheye1_frame_id: camera_aligned_de... /camera/realsense2_camera/aligned_depth_to_fisheye2_frame_id: camera_aligned_de... /camera/realsense2_camera/aligned_depth_to_fisheye_frame_id: camera_aligned_de... /camera/realsense2_camera/aligned_depth_to_infra1_frame_id: camera_aligned_de... /camera/realsense2_camera/aligned_depth_to_infra2_frame_id: camera_aligned_de... /camera/realsense2_camera/allow_no_texture_points: False /camera/realsense2_camera/base_frame_id: camera_link /camera/realsense2_camera/calib_odom_file: /camera/realsense2_camera/clip_distance: -1.0 /camera/realsense2_camera/color_fps: -1 /camera/realsense2_camera/color_frame_id: camera_color_frame /camera/realsense2_camera/color_height: -1 /camera/realsense2_camera/color_optical_frame_id: camera_color_opti... /camera/realsense2_camera/color_width: -1 /camera/realsense2_camera/confidence_fps: 30 /camera/realsense2_camera/confidence_height: 480 /camera/realsense2_camera/confidence_width: 640 /camera/realsense2_camera/depth_fps: -1 /camera/realsense2_camera/depth_frame_id: camera_depth_frame /camera/realsense2_camera/depth_height: -1 /camera/realsense2_camera/depth_optical_frame_id: camera_depth_opti... 
/camera/realsense2_camera/depth_width: -1 /camera/realsense2_camera/device_type: /camera/realsense2_camera/enable_accel: False /camera/realsense2_camera/enable_color: True /camera/realsense2_camera/enable_confidence: True /camera/realsense2_camera/enable_depth: True /camera/realsense2_camera/enable_fisheye1: False /camera/realsense2_camera/enable_fisheye2: False /camera/realsense2_camera/enable_fisheye: False /camera/realsense2_camera/enable_gyro: False /camera/realsense2_camera/enable_infra1: False /camera/realsense2_camera/enable_infra2: False /camera/realsense2_camera/enable_infra: False /camera/realsense2_camera/enable_pointcloud: False /camera/realsense2_camera/enable_pose: False /camera/realsense2_camera/enable_sync: True /camera/realsense2_camera/filters: /camera/realsense2_camera/fisheye1_frame_id: camera_fisheye1_f... /camera/realsense2_camera/fisheye1_optical_frame_id: camera_fisheye1_o... /camera/realsense2_camera/fisheye2_frame_id: camera_fisheye2_f... /camera/realsense2_camera/fisheye2_optical_frame_id: camera_fisheye2_o... /camera/realsense2_camera/fisheye_fps: -1 /camera/realsense2_camera/fisheye_frame_id: camera_fisheye_frame /camera/realsense2_camera/fisheye_height: -1 /camera/realsense2_camera/fisheye_optical_frame_id: camera_fisheye_op... /camera/realsense2_camera/fisheye_width: -1 /camera/realsense2_camera/gyro_fps: -1 /camera/realsense2_camera/gyro_frame_id: camera_gyro_frame /camera/realsense2_camera/gyro_optical_frame_id: camera_gyro_optic... /camera/realsense2_camera/imu_optical_frame_id: camera_imu_optica... /camera/realsense2_camera/infra1_frame_id: camera_infra1_frame /camera/realsense2_camera/infra1_optical_frame_id: camera_infra1_opt... /camera/realsense2_camera/infra2_frame_id: camera_infra2_frame /camera/realsense2_camera/infra2_optical_frame_id: camera_infra2_opt... 
/camera/realsense2_camera/infra_fps: -1 /camera/realsense2_camera/infra_height: -1 /camera/realsense2_camera/infra_rgb: False /camera/realsense2_camera/infra_width: -1 /camera/realsense2_camera/initial_reset: False /camera/realsense2_camera/json_file_path: /camera/realsense2_camera/linear_accel_cov: 0.01 /camera/realsense2_camera/odom_frame_id: camera_odom_frame /camera/realsense2_camera/ordered_pc: False /camera/realsense2_camera/pointcloud_texture_index: 0 /camera/realsense2_camera/pointcloud_texture_stream: RS2_STREAM_COLOR /camera/realsense2_camera/pose_frame_id: camera_pose_frame /camera/realsense2_camera/pose_optical_frame_id: camera_pose_optic... /camera/realsense2_camera/publish_odom_tf: True /camera/realsense2_camera/publish_tf: True /camera/realsense2_camera/rosbag_filename: /camera/realsense2_camera/serial_no: /camera/realsense2_camera/stereo_module/exposure/1: 7500 /camera/realsense2_camera/stereo_module/exposure/2: 1 /camera/realsense2_camera/stereo_module/gain/1: 16 /camera/realsense2_camera/stereo_module/gain/2: 16 /camera/realsense2_camera/tf_publish_rate: 0.0 /camera/realsense2_camera/topic_odom_in: camera/odom_in /camera/realsense2_camera/unite_imu_method: none /camera/realsense2_camera/usb_port_id: /rosdistro: melodic * /rosversion: 1.14.10

NODES /camera/ color_rectify_color (nodelet/nodelet) points_xyzrgb_hw_registered (nodelet/nodelet) realsense2_camera (nodelet/nodelet) realsense2_camera_manager (nodelet/nodelet)

auto-starting new master process[master]: started with pid [5547] ROS_MASTER_URI=http://localhost:11311

setting /run_id to a3d4b3c0-a724-11eb-a4aa-d4bed9944d8b WARNING: Package name "object_detection_with_intel_D415" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, underscores, and dashes. process[rosout-1]: started with pid [5558] started core service [/rosout] process[camera/realsense2_camera_manager-2]: started with pid [5569] process[camera/realsense2_camera-3]: started with pid [5570] process[camera/color_rectify_color-4]: started with pid [5571] process[camera/points_xyzrgb_hw_registered-5]: started with pid [5579] 27/04 02:49:00,893 ERROR [140341701707520] (ds5-options.cpp:88) Asic Temperature value is not valid!

MartyG-RealSense commented 3 years ago

I note that you are using Melodic. Have you installed rs_rgbd with the instruction below, please?

sudo apt-get install ros-melodic-rgbd-launch

sivaeinfo commented 3 years ago

Yes, I have installed rs_rgbd using sudo apt-get install ros-melodic-rgbd-launch.

MartyG-RealSense commented 3 years ago

@sivaeinfo I will continue the discussion at the case https://github.com/IntelRealSense/librealsense/issues/8915 that you created. Thank you very much.

sivaeinfo commented 3 years ago

I have tried this package for object detection: https://github.com/joffman/ros_object_recognition. When I run catkin_make, I receive the following error.

   map(const _Compare& __comp,
   ^~~

/usr/include/c++/7/bits/stl_map.h:186:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘const std::less&’ /usr/include/c++/7/bits/stl_map.h:177:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map() [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map() = default; ^~~ /usr/include/c++/7/bits/stl_map.h:177:7: note: candidate expects 0 arguments, 1 provided In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/reconfigure_parameter_manager.h:13, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/reconfigure_parameter_manager.cpp:10: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:99:3: error: expected ‘,’ or ‘;’ before ‘SAC_SAMPLE_SIZE’ SAC_SAMPLE_SIZE (sample_size_pairs, sample_size_pairs + sizeof (sample_size_pairs) / sizeof (SampleSizeModel)); ^~~~~~~ In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:98:75: error: no matching function for call to ‘std::map<pcl::SacModel, unsigned int>::map(const char [178])’ "is a protected member of the SampleConsensusModel class") ^ In file included from /usr/include/c++/7/map:61:0, from /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:44, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/c++/7/bits/stl_map.h:282:2: note: candidate: template std::map<_Key, _Tp, _Compare, _Alloc>::map(_InputIterator, 
_InputIterator, const _Compare&, const allocator_type&) map(_InputIterator first, _InputIterator last, ^~~ /usr/include/c++/7/bits/stl_map.h:282:2: note: template argument deduction/substitution failed: In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:98:75: note: candidate expects 4 arguments, 1 provided "is a protected member of the SampleConsensusModel class") ^ In file included from /usr/include/c++/7/map:61:0, from /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:44, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/c++/7/bits/stl_map.h:265:2: note: candidate: template std::map<_Key, _Tp, _Compare, _Alloc>::map(_InputIterator, _InputIterator) map(_InputIterator first, _InputIterator last) ^~~ /usr/include/c++/7/bits/stl_map.h:265:2: note: template argument deduction/substitution failed: In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:98:75: note: candidate expects 2 arguments, 1 provided "is a protected member of the SampleConsensusModel class") ^ In file included from /usr/include/c++/7/map:61:0, from /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:44, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/c++/7/bits/stl_map.h:248:2: note: candidate: 
template std::map<_Key, _Tp, _Compare, _Alloc>::map(_InputIterator, _InputIterator, const allocator_type&) map(_InputIterator first, _InputIterator __last, ^~~ /usr/include/c++/7/bits/stl_map.h:248:2: note: template argument deduction/substitution failed: In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:98:75: note: candidate expects 3 arguments, 1 provided "is a protected member of the SampleConsensusModel class") ^ In file included from /usr/include/c++/7/map:61:0, from /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:44, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8, from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/c++/7/bits/stl_map.h:242:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(std::initializer_list<std::pair<const _Key, _Tp> >, const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(initializer_list l, const allocator_type& a) ^~~ /usr/include/c++/7/bits/stl_map.h:242:7: note: candidate expects 2 arguments, 1 provided /usr/include/c++/7/bits/stl_map.h:236:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(std::map<_Key, _Tp, _Compare, _Alloc>&&, const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(map&& m, 
const allocator_type& a) ^~~ /usr/include/c++/7/bits/stl_map.h:236:7: note: candidate expects 2 arguments, 1 provided /usr/include/c++/7/bits/stl_map.h:232:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(const std::map<_Key, _Tp, _Compare, _Alloc>&, const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(const map& m, const allocator_type& a) ^~~ /usr/include/c++/7/bits/stl_map.h:232:7: note: candidate expects 2 arguments, 1 provided /usr/include/c++/7/bits/stl_map.h:228:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(const allocator_type& __a) ^~~ /usr/include/c++/7/bits/stl_map.h:228:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘const allocator_type& {aka const std::allocator<std::pair<const pcl::SacModel, unsigned int> >&}’ /usr/include/c++/7/bits/stl_map.h:220:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(std::initializer_list<std::pair<const _Key, _Tp> >, const _Compare&, const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(initializer_list l, ^~~ /usr/include/c++/7/bits/stl_map.h:220:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘std::initializer_list<std::pair<const pcl::SacModel, unsigned int> >’ 
/usr/include/c++/7/bits/stl_map.h:207:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(std::map<_Key, _Tp, _Compare, _Alloc>&&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(map&&) = default; ^~~ /usr/include/c++/7/bits/stl_map.h:207:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘std::map<pcl::SacModel, unsigned int>&&’ /usr/include/c++/7/bits/stl_map.h:199:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(const std::map<_Key, _Tp, _Compare, _Alloc>&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(const map&) = default; ^~~ /usr/include/c++/7/bits/stl_map.h:199:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘const std::map<pcl::SacModel, unsigned int>&’ /usr/include/c++/7/bits/stl_map.h:186:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map(const _Compare&, const allocator_type&) [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::allocator_type = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map(const _Compare& __comp, ^~~ /usr/include/c++/7/bits/stl_map.h:186:7: note: no known conversion for argument 1 from ‘const char [178]’ to ‘const std::less&’ /usr/include/c++/7/bits/stl_map.h:177:7: note: candidate: std::map<_Key, _Tp, _Compare, _Alloc>::map() [with _Key = pcl::SacModel; _Tp = unsigned int; _Compare = std::less; _Alloc = std::allocator<std::pair<const pcl::SacModel, unsigned int> >] map() = default; ^~~ /usr/include/c++/7/bits/stl_map.h:177:7: note: candidate expects 0 arguments, 1 provided In file included from /home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/include/shape_detector/shape_enum.h:8:0, from 
/home/bit/obdets_ws/src/ros_object_recognition/src/shape_detector/src/object_computer.cpp:11: /usr/include/pcl-1.8/pcl/sample_consensus/model_types.h:99:3: error: expected ‘,’ or ‘;’ before ‘SAC_SAMPLE_SIZE’ SAC_SAMPLE_SIZE (sample_size_pairs, sample_size_pairs + sizeof (sample_size_pairs) / sizeof (SampleSizeModel)); ^~~~~~~ ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/build.make:194: recipe for target 'ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/src/reconfigure_parameter_manager.cpp.o' failed make[2]: [ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/src/reconfigure_parameter_manager.cpp.o] Error 1 ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/build.make:98: recipe for target 'ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/src/object_computer.cpp.o' failed make[2]: [ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/src/object_computer.cpp.o] Error 1 CMakeFiles/Makefile2:7026: recipe for target 'ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/all' failed make[1]: [ros_object_recognition/src/shape_detector/CMakeFiles/shape_detector.dir/all] Error 2 Makefile:140: recipe for target 'all' failed make: [all] Error 2 Invoking "make -j8 -l8" failed

MartyG-RealSense commented 3 years ago

I would caution against using projects that are 3 or 4 years old like https://github.com/joffman/ros_object_recognition as they may not be compatible with a modern setup. For example, that particular linked-to project relies on ROS Kinetic, whilst you are using the newer Melodic.


sivaeinfo commented 3 years ago

Thank you. Actually my project is deep object pose estimation from https://github.com/yehengchen/DOPE-ROS-D435 for objects and a human hand. For that I have tried the packages below. Now I am facing a few errors, which I list here (screenshots attached).

Errors:

i) roslaunch realsense2_camera rs_rgbd.launch # Publishes RGB images to /camera/color/image_raw

process[camera/realsense2_camera_manager-1]: started with pid [3429] process[camera/realsense2_camera-2]: started with pid [3430] process[camera/color_rectify_color-3]: started with pid [3431] process[camera/points_xyzrgb_hw_registered-4]: started with pid [3436] [ INFO] [1620108763.382160336]: Initializing nodelet with 8 worker threads. [ INFO] [1620108763.525868202]: RealSense ROS v2.2.7 [ INFO] [1620108763.525912472]: Running with LibRealSense v2.44.0

[ INFO] [1620108763.829192188]: JSON file is not provided [ INFO] [1620108763.829239499]: ROS Node Namespace: camera [ INFO] [1620108763.829274609]: Device Name: Intel RealSense D435I [ INFO] [1620108763.829299691]: Device Serial No: 923322073096 [ INFO] [1620108763.829319932]: Device FW version: 05.12.13.50 [ INFO] [1620108763.829349756]: Device Product ID: 0x0B3A [ INFO] [1620108763.829367252]: Enable PointCloud: Off [ INFO] [1620108763.829380406]: Align Depth: On [ INFO] [1620108763.829404377]: Sync Mode: On [ INFO] [1620108763.829524892]: Device Sensors: [ INFO] [1620108763.829617426]: Stereo Module was found. [ INFO] [1620108763.829658363]: RGB Camera was found. [ INFO] [1620108763.829740886]: Motion Module was found. [ INFO] [1620108763.829818176]: (Fisheye, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1620108763.829846209]: (Fisheye, 1) sensor isn't supported by current device! -- Skipping... [ INFO] [1620108763.829882602]: (Fisheye, 2) sensor isn't supported by current device! -- Skipping... [ INFO] [1620108763.829919924]: (Pose, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1620108763.829949349]: Setting Dynamic reconfig parameters. /opt/ros/melodic/lib/nodelet/nodelet: symbol lookup error: /home/bit/catkin_ws/devel/lib//librealsense2_camera.so: undefined symbol: _ZN20ddynamic_reconfigure19DDynamicReconfigure16registerVariableIiEEvRKNSt7cxx1112basic_stringIcSt11char_traitsIcESaIcEEET_RKN5boost8functionIFvSA_EEES9_SA_SAS9 [camera/realsense2_camera_manager-1] process has died [pid 3429, exit code 127, cmd /opt/ros/melodic/lib/nodelet/nodelet manager name:=realsense2_camera_manager __log:=/home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/camera-realsense2_camera_manager-1.log]. 
log file: /home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/camera-realsense2_camera_manager-1.log [camera/points_xyzrgb_hw_registered-4] process has finished cleanly log file: /home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/camera-points_xyzrgb_hw_registered-4.log [camera/realsense2_camera-2] process has finished cleanly log file: /home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/camera-realsense2_camera-2*.log


ii) roslaunch dope dope.launch [config:=/path/to/my_config.yaml] # Config file is optional; default is config_pose.yaml

process[dope-1]: started with pid [3678] Traceback (most recent call last): File "/home/bit/catkin_ws/src/Deep_Object_Pose/nodes/dope", line 19, in import tf.transformations File "/opt/ros/melodic/lib/python2.7/dist-packages/tf/init.py", line 30, in from tf2_ros import TransformException as Exception, ConnectivityException, LookupException, ExtrapolationException File "/opt/ros/melodic/lib/python2.7/dist-packages/tf2_ros/init.py", line 38, in from tf2_py import File "/opt/ros/melodic/lib/python2.7/dist-packages/tf2_py/init.py", line 38, in from ._tf2 import ImportError: dynamic module does not define module export function (PyInit__tf2) [dope-1] process has died [pid 3678, exit code 1, cmd /home/bit/catkin_ws/src/Deep_Object_Pose/nodes/dope name:=dope log:=/home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/dope-1.log]. log file: /home/bit/.ros/log/9b9f0040-ac9f-11eb-9fb1-d4bed9944d8b/dope-1*.log all processes on machine have died, roslaunch will exit shutting down processing monitor... ... shutting down processing monitor complete done bit@bit-OptiPlex-990:~$

MartyG-RealSense commented 3 years ago

@sivaeinfo It is important to match a ROS wrapper version as closely as possible with the librealsense version recommended in the release notes for that particular ROS wrapper version. Your log indicates that you are using the very old ROS wrapper 2.2.7 and the recent librealsense 2.44.0.


For librealsense version 2.44.0, the matching ROS wrapper version 2.2.24 should be built.

https://github.com/IntelRealSense/realsense-ros/releases/tag/2.2.24

jianjuan commented 3 years ago

Thanks for the help. I succeeded in getting the angle of the object using solvePnP! let's close this issue.

Please, I am curious how you got the angle of the object. solvePnP requires 3D-2D corresponding points; how do you define these corresponding points? Besides, do you use the raw point cloud as the 3D points? Have you done any other transformation? Thank you for your reply! @qodlsrud20712

MartyG-RealSense commented 3 years ago

Hi @jianjuan Further information about obtaining the pose of the camera or an observed surface / object can be found in the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/360051944814-Plane-pose-estimation-via-Intel-Realsense-D435i-camera

Regarding obtaining 3D coordinates from a point cloud, you could either:

  • Generate a pointcloud with points = pc.calculate and obtain its vertices with the points.get_vertices instruction
  • Or alternatively, perform depth to color alignment and then use the rs2_deproject_pixel_to_point instruction

You can also use an instruction called rs2_transform_point_to_point to perform an 'affine transform' on a pointcloud to set its rotation and position.

https://github.com/IntelRealSense/librealsense/issues/8333#issuecomment-886447391
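The math behind rs2_transform_point_to_point is a plain affine transform: rotate the point, then translate it. A plain-numpy sketch of that operation (the rotation and translation values here are hypothetical, not from a real camera's extrinsics):

```python
import numpy as np

def transform_point(rotation, translation, point):
    """Affine transform of a 3D point, like rs2_transform_point_to_point:
    apply a 3x3 rotation matrix, then add a length-3 translation vector."""
    return rotation @ point + translation

# Hypothetical example: rotate 90 degrees about Z, then shift 1 m along X.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
print(transform_point(R, t, np.array([1.0, 0.0, 0.0])))  # -> [1. 1. 0.]
```

With the SDK itself, the rotation and translation come from the rs2_extrinsics between two streams rather than being written by hand.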

jianjuan commented 3 years ago

Hi @jianjuan Further information about obtaining the pose of the camera or an observed surface / object can be found in the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/360051944814-Plane-pose-estimation-via-Intel-Realsense-D435i-camera

Regarding obtaining 3D coordinates from a point cloud, you could either:

  • Generate a pointcloud with points = pc.calculate and obtain its vertices with the points.get_vertices instruction
  • Or alternatively, perform depth to color alignment and then use the rs2_deproject_pixel_to_point instruction

You can also use an instruction called rs2_transform_point_to_point to perform an 'affine transform' on a pointcloud to set its rotation and position.

#8333 (comment)

Thank you very much for your quick reply. I would rather get the object pose with the solvePnP algorithm, but the question is that the obtained 3D pointcloud is defined in camera coordinates. How can I get the 3D points in real-world coordinates (such as the object's coordinate system)? I am so confused! Your reply will be much appreciated! @MartyG-RealSense

MartyG-RealSense commented 3 years ago

There is a SolvePNP tutorial in the Python language in OpenCV's documentation at the link below.

https://docs.opencv.org/master/d7/d53/tutorial_py_pose.html

The tutorial states that the camera matrix and distortion coefficients are required. A RealSense team member advises in https://github.com/IntelRealSense/librealsense/issues/3569#issuecomment-475896784 that the RealSense SDK tool rs-enumerate-devices can be used to acquire this information from the camera by launching the tool in its calibration information mode:

rs-enumerate-devices -c

https://github.com/IntelRealSense/librealsense/issues/3569#issuecomment-475896784
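In Python the same calibration values can also be read from the stream intrinsics at runtime and assembled into the matrix form solvePnP expects. A hedged sketch with placeholder numbers standing in for the fx/fy/ppx/ppy and distortion values that rs-enumerate-devices -c (or profile.intrinsics) would report:

```python
import numpy as np

def camera_matrix(fx, fy, ppx, ppy):
    """Build the 3x3 camera matrix OpenCV's solvePnP expects."""
    return np.array([[fx, 0.0, ppx],
                     [0.0, fy, ppy],
                     [0.0, 0.0, 1.0]])

# Placeholder values; on a real camera these come from
# color_frame.profile.as_video_stream_profile().intrinsics
K = camera_matrix(616.0, 616.0, 320.0, 240.0)
dist_coeffs = np.zeros(5)  # assumed zero here; use the reported coeffs on real hardware
print(K)
```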

jianjuan commented 3 years ago

There is a SolvePNP tutorial in the Python language in OpenCV's documentation at the link below.

https://docs.opencv.org/master/d7/d53/tutorial_py_pose.html

The tutorial states that the camera matrix and distortion coefficients are required. A RealSense team member advises in #3569 (comment) that the RealSense SDK tool rs-enumerate-devices can be used to acquire this information from the camera by launching the tool in its calibration information mode:

rs-enumerate-devices -c

#3569 (comment)

I already knew this; my question is not about how to get the camera matrix and distortion. Still, many thanks for your patience! By the way, are the 3D points and the RGB camera's image pixels in one-to-one correspondence?

MartyG-RealSense commented 3 years ago

Yes, a 2D pixel can be mapped to a 3D point (rs2_deproject_pixel_to_point) and a 3D point mapped to a 2D pixel (rs2_project_point_to_pixel). The RealSense SDK's Projection documentation provides more information about this.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20
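Ignoring lens distortion, the pinhole model behind those two instructions is a simple round trip; a minimal sketch of the underlying math (not the SDK functions themselves, and with placeholder intrinsics):

```python
def project(fx, fy, ppx, ppy, point):
    """3D point (metres, camera coords) -> 2D pixel, pinhole model, no distortion."""
    x, y, z = point
    return (fx * x / z + ppx, fy * y / z + ppy)

def deproject(fx, fy, ppx, ppy, pixel, depth):
    """2D pixel + depth (metres) -> 3D point, inverse of project."""
    u, v = pixel
    return ((u - ppx) * depth / fx, (v - ppy) * depth / fy, depth)

# Round trip: deprojecting a projected point recovers the original 3D point.
p = (0.1, -0.05, 1.2)
px = project(616.0, 616.0, 320.0, 240.0, p)
print(deproject(616.0, 616.0, 320.0, 240.0, px, 1.2))
```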

Tranthanhbao198 commented 3 months ago

Thanks for the help. I succeeded in getting the angle of the object using solvePnP! let's close this issue.

Hello, can you help me find yaw, pitch and roll, please?

MartyG-RealSense commented 3 months ago

Hi @Tranthanhbao198 You can only get pitch and roll from the IMU. A RealSense user shared a Python script at https://github.com/IntelRealSense/librealsense/issues/4391#issuecomment-510578991 that attempted to calculate yaw as well as retrieve pitch and roll.
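Pitch and roll can be derived from the accelerometer's gravity vector; yaw is unobservable from gravity alone, which is why the linked script can only attempt it. A hedged sketch of the standard gravity-vector math (axis conventions vary between mountings; the signs and axes here are an assumption, not the SDK's definition):

```python
import math

def pitch_roll_deg(ax, ay, az):
    """Pitch and roll from a single accelerometer sample (gravity vector).
    Axis conventions are an assumption; adjust signs for your mounting."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Gravity straight along the assumed Z axis: zero pitch and zero roll.
print(pitch_roll_deg(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
```

In practice the raw accelerometer samples are noisy, so scripts like the linked one typically smooth them or fuse them with the gyroscope before computing the angles.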