
# Terrasentia-dataset

- Explore dataset
- Download scripts
- Paper
- Paper - Supplementary material

This dataset is intended for the evaluation of visual-based localization and mapping systems in agriculture. It includes stereo images, IMU, GPS, and wheel encoder measurements, collected with a ground robot at the Illinois Autonomous Farm at the University of Illinois at Urbana-Champaign during the Summer of 2022. Data sequences were collected twice per week in corn fields, and less often in soybean and sorghum fields. The dataset exhibits high variability in weather conditions and growth stages, and contains challenging features such as occlusions, illumination variations, weeds, dynamic objects, and rough terrain.

*Figure: the TerraSentia robot used for data collection.*

## Data description

This dataset is organized in different folders, classified by crop type and collection date (see Table I for a description of each folder). Each folder contains rosbag (.bag) and SVO (.svo) files.

**Table I. Main characteristics of data folders**

| Folder     | Number of sequences | Time span | Occlusions | Presence of weeds | Weather variability | Growth-stage variability | Rough terrain | Folder size (GB) |
|------------|--------------------|-----------|------------|-------------------|---------------------|--------------------------|---------------|------------------|
| Cornfield1 | 80                 | 4 months  | ✓          | ✓                 | ✓                   | ✓                        | ✓             | 584              |
| Cornfield2 | 17                 | 3 months  | ✓          | ✓                 | ✓                   | ✓                        | ✓             | 171              |
| Cornfield3 | 2                  | 1 week    | ✗          | ✗                 | ✗                   | ✗                        | ✓             | 28               |
| Cornfield4 | 4                  | 1 month   | ✓          | ✗                 | ✗                   | ✓                        | ✓             | 37               |
| Sorghum    | 2                  | 3 weeks   | ✓          | ✓                 | ✗                   | ✗                        | ✓             | 9                |
| Soybean    | 12                 | 2 weeks   | ✓          | ✓                 | ✓                   | ✗                        | ✓             | 79               |
| Sweet Corn | 4                  | 1 week    | ✓          | ✓                 | ✗                   | ✗                        | ✓             | 49               |
| Others     | 14                 | 3 months  | ✗          | ✗                 | ✓                   | ✓                        | ✓             | 103              |
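For quick reference, the per-folder figures in Table I can be aggregated with a short script (the values below are transcribed from the table above):

```python
# Per-folder sequence counts and sizes, transcribed from Table I.
folders = {
    "Cornfield1": {"sequences": 80, "size_gb": 584},
    "Cornfield2": {"sequences": 17, "size_gb": 171},
    "Cornfield3": {"sequences": 2,  "size_gb": 28},
    "Cornfield4": {"sequences": 4,  "size_gb": 37},
    "Sorghum":    {"sequences": 2,  "size_gb": 9},
    "Soybean":    {"sequences": 12, "size_gb": 79},
    "Sweet Corn": {"sequences": 4,  "size_gb": 49},
    "Others":     {"sequences": 14, "size_gb": 103},
}

total_sequences = sum(f["sequences"] for f in folders.values())
total_size_gb = sum(f["size_gb"] for f in folders.values())
print(total_sequences, total_size_gb)  # 135 sequences, ~1060 GB in total
```

Plan your storage accordingly before downloading the full dataset.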

**Table II. ROS topics in rosbag files**

## Playing SVO files (optional)

*Figure: left image and neural depth computed by the ZED SDK.*

If you want higher-resolution images and better depth quality than the rosbag files provide, you can use the SVO files. Most of them were recorded simultaneously with the rosbags.

1. Install the ZED SDK (CUDA is required).
2. Install the ZED ROS Wrapper.
3. Play the SVO files:
   ```shell
   cd ~/catkin_ws
   source devel/setup.bash
   roslaunch zed_wrapper zed2.launch svo_file:=/home/path_to_svo_file/file.svo
   ```
4. List all the ROS topics, as if the camera were connected:
   ```shell
   rostopic list
   ```

You can change the quality of the depth image and other parameters by editing the file `common.yaml` in the directory `catkin_ws/src/zed-ros-wrapper/zed_wrapper/params`. To get the intrinsic parameters of the camera, check the topics:

- `/zed2/zed_node/depth/camera_info`
- `/zed2/zed_node/left/camera_info`
- `/zed2/zed_node/right/camera_info`
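The intrinsics published on these topics follow the standard `sensor_msgs/CameraInfo` pinhole model, so a depth pixel can be back-projected to a 3D point in the camera frame. A minimal sketch (the fx, fy, cx, cy values below are placeholders, not the actual ZED 2 calibration):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into the camera frame,
    using the pinhole intrinsics from the K matrix of sensor_msgs/CameraInfo."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Placeholder intrinsics -- read the real values from /zed2/zed_node/left/camera_info.
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0
point = backproject(800, 400, 2.0, fx, fy, cx, cy)  # 3D point in meters
```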

## Building fpn_msgs in your catkin workspace

In order to read the GPS and motor messages, which are user-defined ROS messages, you will need to build the `fpn_msgs` package in your catkin workspace.

1. Download the package `fpn_msgs` to your `catkin_ws/src` folder and unzip it.
2. Go to your catkin workspace and build the package:
   ```shell
   cd ~/catkin_ws
   catkin_make fpn_msgs
   ```
3. Source your workspace:
   ```shell
   source devel/setup.bash
   ```

## Extracting data from rosbag files

1. Build `fpn_msgs` as explained above.
2. Edit the script `extract_data_from_rosbag.py`, customizing the topics to extract and the output directory.
3. Run the script:
   ```shell
   source ~/catkin_ws/devel/setup.bash
   python3 extract_data_from_rosbag.py
   ```
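For evaluating localization against the extracted GPS fixes, it is common to convert latitude/longitude to a local metric frame. A minimal sketch using an equirectangular approximation (this helper is illustrative and not part of the dataset scripts; the coordinates below are approximate values near the Illinois Autonomous Farm):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def latlon_to_local_xy(lat, lon, lat0, lon0):
    """Approximate a GPS fix as metric (x, y) offsets east/north of a
    reference fix (lat0, lon0) using an equirectangular projection --
    adequate over the short trajectories of a single field sequence."""
    x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    return x, y

# Two nearby fixes, 0.0001 degrees apart in both axes.
x, y = latlon_to_local_xy(40.0421, -88.2379, 40.0420, -88.2380)  # ~8.5 m east, ~11.1 m north
```

For long trajectories or rigorous evaluation, a proper geodetic library (e.g. a UTM or ENU conversion) is preferable to this approximation.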

## Sensor calibration
### Robot's coordinate frames
<div align="center">
  <a href="https://github.com/jrcuaranv/terrasentia-dataset/blob/main/figures/coordinate_frames.png">
    <img src="https://github.com/jrcuaranv/terrasentia-dataset/raw/main/figures/coordinate_frames.png" width="400" alt="coordinate-frames">
  </a>
</div>

Download [sensor_parameters.txt](sensor_parameters.txt)

## Publication
If you find this dataset useful, please cite our paper:

Cuaran, J., Baquero Velasquez, A. E., Valverde Gasparino, M., Uppalapati, N. K., Sivakumar, A. N., Wasserman, J., ... & Chowdhary, G. (2023). Under-canopy dataset for advancing simultaneous localization and mapping in agricultural robotics. The International Journal of Robotics Research, 43(6), 739-749.

```bibtex
@article{cuaran2023under,
  title={Under-canopy dataset for advancing simultaneous localization and mapping in agricultural robotics},
  author={Cuaran, Jose and Baquero Velasquez, Andres Eduardo and Valverde Gasparino, Mateus and Uppalapati, Naveen Kumar and Sivakumar, Arun Narenthiran and Wasserman, Justin and Huzaifa, Muhammad and Adve, Sarita and Chowdhary, Girish},
  journal={The International Journal of Robotics Research},
  volume={43},
  number={6},
  pages={739--749},
  year={2023},
  publisher={SAGE Publications Sage UK: London, England}
}
```