georgealexakis / multispectral_processing


Multispectral Processing - Multi-modal Data Processing and Implementation for Vineyard Analysis

Multispectral Processing is an implementation in ROS Melodic for Multi-modal Data Processing and Implementation for Vineyard Analysis. The main focus of the project is the development of a method for the registration of multi-modal images in order to obtain a three-dimensional reconstruction of the vine enriched with photometric or radiometric data. Furthermore, an artificial intelligence module is developed to jointly process images from the different modalities for a detailed analysis of the plant's condition.

Table of Contents

Requirements

Pipeline

Packages Installation

Source Files

Launch Files

Resources

Execution

Demo Experiments

Figures

License

Requirements

Software

Hardware

Pipeline

Packages Installation

Be sure that you have installed the Melodic versions of the packages below.

Source Files

Launch Files

Resources

Execution

Functionalities

  1. Multispectral Camera:

    • Acquisition & Band Separation.

    • Flat-field Correction.

    • White Balance Normalization.

    • Crosstalk Correction.

    • Vegetation Indices Calculation (NDVI, MCARI, MSR, SAVI, TVI, etc.).

  2. Both Cameras:

    • Cameras Geometric Calibration.

    • Multi-modal Image Registration.

    • 3D Reconstruction.
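The vegetation indices listed above are per-pixel arithmetic on the separated bands. A minimal NDVI sketch (array-based, with hypothetical band inputs; not this package's API):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Where both bands are 0 the index is undefined; return 0 there.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Vegetation reflects strongly in NIR, so the left pixel scores high.
nir = np.array([[200, 10]], dtype=np.uint8)
red = np.array([[50, 10]], dtype=np.uint8)
print(ndvi(nir, red))  # prints [[0.6 0. ]]
```

The other indices (MCARI, MSR, SAVI, TVI) follow the same pattern with different band combinations and constants.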

Permissions

Make all Python files executable with the commands below:

$ roscd multispectral_processing/src
$ chmod +x *.py

Preparation for Image Acquisition

Follow the steps below to achieve the best image acquisition.

  1. Connect the multispectral camera, the Kinect V2 sensor, and the PC with the ROS installation.
  2. Sensor alignment.
  3. Adjusting the optics:
    • Adjust the "Focus or Zoom" of the lens on an object at the same distance as the vine.
    • Adjust the "Aperture" of the lens.
  4. Set acquisition parameters:
    • Gain (avoid auto gain; lower is better).
    • Exposure time (adjust as convenient).
    • Framerate.
    • Others.
  5. Set pre-processing parameters:
    • White balance and white reference.
    • Crosstalk correction on or off.
    • Flat-field correction on or off.
  6. Start one of the registration approaches described below to compute the homographies (rotation, translation, scale). Be sure that the sensors are fixed, and do not touch the sensors afterwards.
  7. Save a single frame or multiple frames when running image registration in no-capture mode, by using one of the commands below:

    $ rosrun multispectral_processing backup.py

    or

    $ rosrun multispectral_processing backup
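The flat-field correction toggled in step 5 normalizes per-pixel gain against a reference image of a uniformly lit white target. A minimal sketch (the raw/flat/dark names and the brightness-rescaling choice are illustrative assumptions, not this package's parameters):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Correct vignetting / per-pixel gain: (raw - dark) / (flat - dark),
    rescaled by the mean flat response to preserve overall brightness."""
    raw = raw.astype(np.float64)
    gain = flat.astype(np.float64) - dark
    gain = np.where(gain <= 0, 1.0, gain)  # guard against dead pixels
    corrected = (raw - dark) / gain * gain.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)

# A pixel that reads low under the flat reference (vignetted) gets boosted.
raw  = np.array([[100, 100]], dtype=np.uint8)
flat = np.array([[200, 100]], dtype=np.uint8)  # right pixel is vignetted
dark = np.zeros((1, 2))
print(flat_field_correct(raw, flat, dark))  # prints [[ 75 150]]
```

White-balance normalization is analogous, dividing each band by its response over the white reference instead of a full flat-field image.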

Image Registration

The whole implementation is provided in both C++ and Python: every node is developed in C++ and in Python, respectively. Image registration is performed between the multispectral camera and the Kinect V2 camera.

  1. Multispectral camera and Kinect camera image registration, via feature detection (C++). Edit args="capture" to start corner capturing or args="nocapture" to start publishing.

    <node name="features_registrator" pkg="multispectral_processing" type="features_registrator" args="nocapture" output="screen"/>

    and run

    $ roslaunch multispectral_processing registration_approach1_cpp.launch

  2. Multispectral camera and Kinect camera image registration, via feature detection (Python). Edit args="capture" to start corner capturing or args="nocapture" to start publishing.

    <node name="features_registrator" pkg="multispectral_processing" type="features_registrator.py" args="nocapture" output="screen"/>

    and run

    $ roslaunch multispectral_processing registration_approach1_py.launch

  3. Multispectral camera and Kinect camera image registration, via chessboard corner detection (C++). Edit args="capture" to start corner capturing or args="nocapture" to start publishing.

    <node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator" args="nocapture" output="screen"/>

    and run

    $ roslaunch multispectral_processing registration_approach2_cpp.launch

  4. Multispectral camera and Kinect camera image registration, via chessboard corner detection (Python). Edit args="capture" to start corner capturing or args="nocapture" to start publishing.

    <node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator.py" args="nocapture" output="screen"/>

    and run

    $ roslaunch multispectral_processing registration_approach2_py.launch

3D Reconstruction

For mapping using the rtabmap_ros package:

  1. Run one of the registration approaches with args="nocapture".
  2. Run the command to start rtabmap_ros package:

    $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false

    or for external odometry use:

    $ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false visual_odometry:=false odom_topic:=/my_odometry

    and replace odom_topic:=/my_odometry with the external odometry topic.
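The approx_sync:=false flag makes rtabmap pair RGB, depth, and camera-info messages only when their timestamps are exactly equal, which the registration nodes guarantee by republishing all three with a shared header. The matching logic amounts to the following (a pure-Python sketch, not rtabmap's actual code):

```python
from collections import defaultdict

class ExactTimeSync:
    """Emit a tuple of messages only when every topic has a message
    with exactly the same timestamp (cf. rtabmap's approx_sync:=false)."""
    def __init__(self, topics):
        self.topics = topics
        self.buffers = defaultdict(dict)  # stamp -> {topic: msg}
        self.synced = []                  # completed (msg, msg, ...) tuples

    def add(self, topic, stamp, msg):
        group = self.buffers[stamp]
        group[topic] = msg
        if len(group) == len(self.topics):  # all topics present at this stamp
            self.synced.append(tuple(group[t] for t in self.topics))
            del self.buffers[stamp]

sync = ExactTimeSync(["rgb", "depth", "info"])
sync.add("rgb", 100, "rgb@100")
sync.add("depth", 100, "d@100")
sync.add("rgb", 101, "rgb@101")   # different stamp: never completes
sync.add("info", 100, "i@100")
print(sync.synced)  # prints [('rgb@100', 'd@100', 'i@100')]
```

If the stamps differed even slightly, no tuple would ever be emitted; that is why the registered topics must come from the registration node rather than the raw drivers.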

Demo Experiments

General

These experiments use only the images of the multispectral camera and the associated processing. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:

  1. Comment out the includes below in the cms_cpp.launch or cms_py.launch file.

    <!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
    <!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
  2. Uncomment the include of the experiments.cpp node.

    <node name="experiments" pkg="multispectral_processing" type="experiments" args="2 2020511" output="screen"/>

    where

    args=<folder id> <prefix of images>

  3. Choose the dataset that you want by changing the "args" value.

  4. Run cms_cpp.launch or cms_py.launch file.

Offline Image Registration

Perform offline image registration with all approaches. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:

  1. Comment out the includes below in the .launch file of the examined approach:

    <!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
    <!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
    <!-- <node name="band_separator" pkg="multispectral_processing" type="band_separator" args="nodebug" output="screen"/> -->
    <!-- <node name="tf_node" pkg="multispectral_processing" type="tf_node"/> -->
    <!-- <node name="static_transform_publisher" pkg="tf" type="static_transform_publisher" args="0 0 0 -1.5707963267948966 0 -1.5707963267948966 camera_link kinect2_link 100"/> -->
  2. Uncomment the include of the offline_registration.cpp node.

    <node name="offline_registration" pkg="multispectral_processing" type="offline_registration" args="1 2020511" output="screen"/>

    where

    args=<folder id> <prefix of images>

  3. Choose the dataset that you want by changing the "args" value.

  4. Run the launch file of the image registration approach.

  5. Visualize the results by using $ rviz with Fixed Frame="multispecral_frame" and the published topics:

    • /multispectral/image_color: Registered Kinect RGB image.
    • /multispectral/image_mono: Registered multispectral image.
    • /multispectral/image_depth: Registered depth image.

    or use

    $ rqt_image_view

Figures

Sensors Position

Robotnik Summit Equipped with the Sensors in Vineyard

Multispectral Image Pixels

Captured Image & Bands by the Multispectral Camera and OpenCV UI

Captured RGB Image by the Kinect V2 Sensor, Captured Bands by the Multispectral Camera

NDVI calculation, Colored vegetation, Colored Vegetation After Crosstalk Correction

Background Subtraction by using Otsu's method

Image Registration with Feature Matching

Image Registration with Corner Matching

3D Reconstruction

License

This project is licensed under the MIT License - see the LICENSE file for details.