Note: The code has only been tested with RGB recognition. Depth recognition is untested due to the lack of 3D object recognition models.
Note: Ubuntu 20.04 LTS (Focal Fossa) is recommended for this codebase to work.
sudo apt install ros-rolling-desktop
For more details, see ROS2 Rolling Installation.
source /opt/ros/rolling/setup.bash
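After sourcing, a quick sanity check confirms the shell is actually set up for Rolling (assuming the setup file above was sourced in this same terminal):

```shell
# The ROS setup script exports ROS_DISTRO; for this install it
# should print "rolling". An empty result means sourcing failed.
echo "$ROS_DISTRO"
```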
cv-bridge
sudo apt install ros-rolling-cv-bridge
yaml-cpp-vendor
sudo apt install ros-rolling-yaml-cpp-vendor*
rqt-reconfigure
sudo apt install ros-rolling-rqt-reconfigure
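With the dependencies installed (and ROS sourced), `ros2 pkg prefix` can confirm that each package is visible to the ament index; a quick check:

```shell
# Each command prints the install prefix of the package,
# or exits with an error if the package cannot be found.
ros2 pkg prefix cv_bridge
ros2 pkg prefix yaml_cpp_vendor
ros2 pkg prefix rqt_reconfigure
```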
mkdir -p ~/mir_object_recognition/src
cd ~/mir_object_recognition/src
git clone --branch rolling-devel https://github.com/HBRS-SDP/ss22-ros2-perception.git .
git clone --branch foxy-devel https://github.com/HBRS-SDP/mas_perception_msgs.git
Note: If you want to use a bag file for the RGB image and point cloud data instead of a live camera, skip the next step.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev
git clone https://github.com/IntelRealSense/realsense-ros.git -b ros2-beta
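With librealsense installed, you can check that the camera is detected before launching any ROS nodes (`rs-enumerate-devices` ships with the librealsense2-utils package installed above):

```shell
# Lists all connected RealSense devices with their serial numbers
# and supported stream profiles; reports if no device is found.
rs-enumerate-devices
```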
Your src folder should now contain the following packages:
- mas_perception_msgs
- realsense2_camera_msgs
- lifecycle_controller
- mir_rgb_object_recognition_models
- mir_object_recognition
- mir_recognizer_scripts
- realsense2_camera
- realsense2_description
cd ~/mir_object_recognition
colcon build
source install/setup.bash
Note: Make sure to source both the ROS Rolling setup and the workspace overlay in every terminal.
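To avoid repeating this in every terminal, the two source lines can be appended to your shell startup file (a convenience sketch, not part of the repository):

```shell
# Add the ROS underlay and the workspace overlay to every new shell.
echo 'source /opt/ros/rolling/setup.bash' >> ~/.bashrc
echo 'source ~/mir_object_recognition/install/setup.bash' >> ~/.bashrc
```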
Step 0:
cd ~/mir_object_recognition
source /opt/ros/rolling/setup.bash
source install/setup.bash
Step 1:
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true pointcloud.ordered_pc:=true depth_module.profile:=640x480x30 rgb_camera.profile:=640x480x30
The ros2-beta branch of the realsense2_camera package has a bug that doesn't set the pointcloud.ordered_pc parameter to true, so we have to set it manually using the ros2 param command.
ros2 param set /camera pointcloud.ordered_pc true
ros2 param set /camera align_depth.enable true
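You can confirm the workaround took effect by reading the parameters back (assuming the camera node is running under the default /camera name, as above):

```shell
# Prints the current value of each parameter; both should now be true.
ros2 param get /camera pointcloud.ordered_pc
ros2 param get /camera align_depth.enable
```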
Alternatively, play a recorded bag file:
ros2 bag play -l bag_files/bag_file_name
Place the bag file in ~/mir_object_recognition/bag_files. A sample bag file can be downloaded from: https://drive.google.com/file/d/1okPBwca5MgtF6kc3yL3oOEA8TWGFyu-0/view
Note:
In order for object recognition to work properly, the RGB image and the point cloud data should be of the same size and in sync.
If you want to record a bag file and use it later, run the following command in a terminal to record the required topics:
ros2 bag record /camera/color/camera_info /camera/color/image_raw /camera/depth/color/points /clock /tf /tf_static
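Before recording, `ros2 topic hz` is handy for checking that the camera topics are actually publishing at the expected rate; afterwards, `ros2 bag info` shows what was captured (the bag name below is a placeholder):

```shell
# Both topics should report roughly 30 Hz with the 640x480x30 profiles.
ros2 topic hz /camera/color/image_raw
ros2 topic hz /camera/depth/color/points

# After recording, list the topics and message counts in the bag.
ros2 bag info bag_file_name
```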
Step 2:
ros2 run tf2_ros static_transform_publisher 0.298 -0.039 0.795 0.0 1.16 -0.055 base_link camera_link
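The positional arguments of static_transform_publisher are x y z yaw pitch roll parent_frame child_frame (translation in metres, rotation in radians). The annotated sketch below breaks down the command and shows one way to verify the transform is being broadcast:

```shell
# x=0.298 y=-0.039 z=0.795 (metres), yaw=0.0 pitch=1.16 roll=-0.055 (radians),
# parent frame = base_link, child frame = camera_link
ros2 run tf2_ros static_transform_publisher 0.298 -0.039 0.795 0.0 1.16 -0.055 base_link camera_link

# In another terminal: print the transform between the two frames.
ros2 run tf2_ros tf2_echo base_link camera_link
```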
Step 3:
ros2 launch mir_object_recognition multimodal_object_recognition.launch.py
Step 4:
ros2 run rqt_reconfigure rqt_reconfigure
Step 5:
ros2 run lifecycle_controller lifecycle_controller --ros-args -p lc_name:=mmor
The lifecycle_controller needs the lifecycle node name as a parameter to run. Here, we pass mmor for our multimodal_object_recognition (mmor) node.
To know more about how to use the lifecycle controller, refer to the wiki.
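If you prefer not to use the interactive controller, the standard ros2 lifecycle CLI can drive the same transitions; a sketch, assuming the lifecycle node is named mmor:

```shell
# Show the current lifecycle state of the node.
ros2 lifecycle get /mmor

# Equivalent of pressing C (unconfigured -> inactive), then A (inactive -> active).
ros2 lifecycle set /mmor configure
ros2 lifecycle set /mmor activate

# Equivalent of pressing X: shut the node down.
ros2 lifecycle set /mmor shutdown
```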
Step 6:
ros2 launch mir_recognizer_scripts rgb_recognizer.launch.py
Step 7:
Run rviz2 and open the ~/mir_object_recognition/src/mir_object_recognition/ros/rviz/mir_object_recognition.rviz config file to view the recognized objects and their poses.

Step 8:
To perform RGB object recognition, follow the steps below:
- Configure the mmor node by entering C in the lifecycle_controller terminal, during which all the parameters, publishers, subscribers and other configurations are set up.
- Check the rqt_reconfigure gui to see the updated parameters.
- Change the mmor node state to Active by entering A in the lifecycle_controller terminal.
- The mmor node then processes the image and point cloud data and publishes the recognized object list, along with their poses and bounding boxes, which can be viewed in rviz2.
- To stop the mmor node, enter X in the lifecycle_controller terminal, which will shut down the node.

Data collector

Step 1:
Follow the first step of the MMOR component and run either the bag file or the realsense node.
Step 2: In a new terminal with the workspace sourced, run the launch file for the data collector component
ros2 launch mir_object_recognition data_collector.launch.py
Step 3: In another terminal, run the lifecycle controller node and pass 'data_collector' as the lc_name argument.
ros2 run lifecycle_controller lifecycle_controller --ros-args -p lc_name:=data_collector
Step 4:
Press C to transition the data_collector component from UNCONFIGURED to INACTIVE state, and then press A to transition it to ACTIVE state. In this state, the component will start saving the point cloud clusters and the RGB image. By default, the save location is the '/tmp/' directory; to change it, provide the desired location as an argument to the launch file, as in the following example:
ros2 launch mir_object_recognition data_collector.launch.py log_directory:=/home/user/Pictures/
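Once the data_collector has been active for a while, you can check that data is actually being written to the chosen log directory (the path below matches the example above and is illustrative):

```shell
# List the most recently written files in the log directory.
ls -lt /home/user/Pictures/ | head
```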
More details about the concepts, issues and resources can be found on the wiki page.