Multispectral Processing is a ROS Melodic implementation for multi-modal data processing and vineyard analysis. The main focus of the project is the development of a method for registering multi-modal images in order to obtain a three-dimensional reconstruction of the vine enriched with photometric or radiometric data. Furthermore, an artificial intelligence module is developed to jointly process images from the different modalities for a detailed analysis of the plant's condition.
Make sure you have installed the Melodic versions of the packages below.
ueye_cam: ROS package that wraps the driver API for uEye cameras by IDS Imaging Development Systems GmbH.
$ cd ~/catkin_ws/src
$ git clone https://github.com/anqixu/ueye_cam.git
$ cd ~/catkin_ws
$ catkin_make
iai_kinect2: Package that provides tools for the Kinect v2, such as a bridge between the Kinect and ROS, camera calibration, etc.
libfreenect2: Drivers for Kinect V2.
image_pipeline: This package is designed to process raw camera images into useful inputs to vision algorithms: rectified mono/color images, stereo disparity images, and stereo point clouds.
$ cd ~/catkin_ws/src
$ git clone https://github.com/ros-perception/image_pipeline.git
$ cd ~/catkin_ws
$ catkin_make
rosbridge_suite: ROS package that provides a JSON API to ROS functionality for non-ROS programs.
$ sudo apt-get install ros-melodic-rosbridge-server
rviz: 3D visualization tool for ROS.
rtabmap_ros: A RGB-D SLAM approach with real-time constraints.
$ sudo apt-get install ros-melodic-rtabmap-ros
Multispectral Camera:
Acquisition & Band Separation.
Flat-field Correction.
White Balance Normalization.
Crosstalk Correction.
Vegetation Indices Calculation (NDVI, MCARI, MSR, SAVI, TVI, etc.).
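As an illustration of the vegetation index step (a minimal NumPy sketch, not the package's node code), two of the listed indices computed per pixel, assuming the separated bands are already available as reflectance arrays:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (1.0 + L) * (nir - red) / (nir + red + L)
```

The remaining indices (MCARI, MSR, TVI) follow the same per-pixel pattern on the separated bands.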
Both Cameras:
Cameras Geometric Calibration.
Multi-modal Image Registration.
3D Reconstruction.
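For context, the 3D reconstruction step amounts to back-projecting the registered depth image through the pinhole camera model. A minimal NumPy sketch, where the intrinsics fx, fy, cx, cy are placeholders rather than values from this package:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.indices(depth.shape)          # pixel row (v) and column (u) grids
    z = np.asarray(depth, dtype=np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]         # drop pixels with no depth reading
```

Each surviving point can then be colored with the co-registered multispectral value at the same pixel, which is what enriches the reconstruction with radiometric data.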
Make all Python files executable with the commands below:
$ roscd multispectral_processing/src
$ chmod +x *.py
Follow the steps below to achieve the best image acquisition.
Save a single frame or multiple frames when running image registration in nocapture mode by using the command below:
$ rosrun multispectral_processing backup.py
or
$ rosrun multispectral_processing backup
The implementation uses both C++ and Python; every node is developed in C++ and in Python respectively. Image registration is performed between the multispectral camera and the Kinect v2 camera.
Multispectral camera and Kinect camera image registration via feature detection (C++). Set args="capture" to start corner capturing or args="nocapture" to start publishing.
<node name="features_registrator" pkg="multispectral_processing" type="features_registrator" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach1_cpp.launch
Multispectral camera and Kinect camera image registration via feature detection (Python). Set args="capture" to start corner capturing or args="nocapture" to start publishing.
<node name="features_registrator" pkg="multispectral_processing" type="features_registrator.py" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach1_py.launch
Multispectral camera and Kinect camera image registration via chessboard corner detection (C++). Set args="capture" to start corner capturing or args="nocapture" to start publishing.
<node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach2_cpp.launch
Multispectral camera and Kinect camera image registration via chessboard corner detection (Python). Set args="capture" to start corner capturing or args="nocapture" to start publishing.
<node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator.py" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach2_py.launch
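The chessboard approach reduces to the same estimation problem: the corner correspondences detected in both images determine a homography. A plain-NumPy sketch of the Direct Linear Transform that solves for it, shown for illustration rather than as the package's corners_registrator code:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct Linear Transform: solve for the 3x3 homography H mapping
    src_pts to dst_pts (both Nx2 arrays, N >= 4) in a least-squares sense."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the right singular vector of A with the smallest singular value
    # (the null space of A when the correspondences are exact).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice OpenCV's cv2.findHomography wraps this estimation with coordinate normalization and outlier rejection.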
For mapping with the rtabmap_ros package:
Run the command below to start rtabmap_ros:
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false
or for external odometry use:
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false visual_odometry:=false odom_topic:=/my_odometry
and replace odom_topic:=/my_odometry with the external odometry topic.
These experiments include only the images of the multispectral camera and the associated processing. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:
Comment out the includes below in the cms_cpp.launch or cms_py.launch file.
<!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
<!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
Uncomment the include of the experiments node.
<node name="experiments" pkg="multispectral_processing" type="experiments" args="2 2020511" output="screen"/>
where
args=<folder id> <prefix of images>
Choose the dataset that you want by changing the "args" value.
Run cms_cpp.launch or cms_py.launch file.
Perform offline image registration with all approaches. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:
Comment out the following includes in the launch file of the selected approach:
<!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
<!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
<!-- <node name="band_separator" pkg="multispectral_processing" type="band_separator" args="nodebug" output="screen"/> -->
<!-- <node name="tf_node" pkg="multispectral_processing" type="tf_node"/> -->
<!-- <node name="static_transform_publisher" pkg="tf" type="static_transform_publisher" args="0 0 0 -1.5707963267948966 0 -1.5707963267948966 camera_link kinect2_link 100"/> -->
Uncomment the include of the offline_registration node.
<node name="offline_registration" pkg="multispectral_processing" type="offline_registration" args="1 2020511" output="screen"/>
where
args=<folder id> <prefix of images>
Choose the dataset that you want by changing the "args" value.
Run the launch file of the image registration approach.
Visualize the results by running:
$ rviz
with Fixed Frame="multispectral_frame" and the published topics, or use:
$ rqt_image_view
This project is licensed under the MIT License - see the LICENSE file for details.