aislabunimi / door-detection-long-term


Enhancing Door Detection for Autonomous Mobile Robots with Environment-Specific Data Collection

@INPROCEEDINGS{antonazzi2023doordetection,
  author={Antonazzi, Michele and Luperto, Matteo and Basilico, Nicola and Borghese, N. Alberto},
  booktitle={2023 European Conference on Mobile Robots (ECMR)}, 
  title={Enhancing Door-Status Detection for Autonomous Mobile Robots During Environment-Specific Operational Use}, 
  year={2023},
  pages={1-8},
  doi={10.1109/ECMR59166.2023.10256289}}

The new version of this repository, with more models (DETR, Faster R-CNN, and YOLOv5), can be found here.

Here you can find the code and the datasets used in the article entitled Enhancing Door Detection for Autonomous Mobile Robots with Environment-Specific Data Collection. To use this package and install all the dependencies, clone this repository and run:

pip install -e .

Dataset links:

Simulation Environment

To acquire the visual dataset, we use an extended version of Gibson, obtainable here. The simulator is automatically installed by the pip install -e . command above. The door dataset has been acquired by virtualizing the Matterport3D environments through Gibson.

Pose extractor

The code to extract plausible positions from which a mobile robot can acquire images is in the positions_extractor package.
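
As a rough illustration of the underlying idea (sampling collision-free poses from an occupancy grid), here is a minimal sketch; the function, parameters, and map format below are hypothetical and do not reflect the actual positions_extractor API:

```python
# Illustrative sketch (not the actual positions_extractor API): sample plausible
# robot poses from an occupancy grid by keeping free cells with enough clearance.
import numpy as np

def sample_plausible_poses(occupancy, resolution=0.05, clearance_m=0.3,
                           n_poses=100, seed=0):
    """occupancy: 2D array, 0 = free, 1 = occupied. Returns (x, y, yaw) poses."""
    rng = np.random.default_rng(seed)
    radius = int(np.ceil(clearance_m / resolution))
    free = np.argwhere(occupancy == 0)
    poses = []
    for r, c in free[rng.permutation(len(free))]:
        window = occupancy[max(r - radius, 0):r + radius + 1,
                           max(c - radius, 0):c + radius + 1]
        if window.sum() == 0:                      # enough free space around the cell
            yaw = rng.uniform(-np.pi, np.pi)       # random heading for image acquisition
            poses.append((c * resolution, r * resolution, yaw))
        if len(poses) == n_poses:
            break
    return poses

# Toy map: a 10x10 m free room with walls on the border.
grid = np.zeros((200, 200), dtype=np.uint8)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1
print(sample_plausible_poses(grid, n_poses=5))
```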

Baseline

The code of the baseline (with the related configuration parameters) is in baseline.py.

The door detector

The proposed detector is coded here. To download the trained models, click here and copy the downloaded files into this folder.
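
As a hedged example of how the downloaded weights might be used for inference (assuming a DETR-based detector; the class count, checkpoint file name door_detector.pth, and loading details are placeholders, not confirmed by this README):

```python
# Minimal inference sketch, assuming a DETR-style checkpoint; the actual class
# count, checkpoint name, and loading code used by the repository may differ.
import torch
from PIL import Image
import torchvision.transforms as T

# Build a DETR model from the official hub entry (no pretrained COCO weights)
# and load the downloaded door-detector weights; 'door_detector.pth' is a placeholder.
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50',
                       pretrained=False, num_classes=3)  # e.g. closed door / open door
state = torch.load('door_detector.pth', map_location='cpu')
model.load_state_dict(state, strict=False)
model.eval()

transform = T.Compose([T.Resize(800), T.ToTensor(),
                       T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

img = Image.open('example.png').convert('RGB')
with torch.no_grad():
    out = model(transform(img).unsqueeze(0))

# DETR outputs per-query class logits and normalized (cx, cy, w, h) boxes.
scores = out['pred_logits'].softmax(-1)[0, :, :-1].max(-1).values
keep = scores > 0.7
print(out['pred_boxes'][0, keep], scores[keep])
```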

Results

In this paper, we present a door detector for autonomous robots that finds doors and their status (open or closed) in RGB images. We built a general detector and a technique to increase its performance in the specific operational environment of the robot. This technique, based on the fine-tuning paradigm, produces a qualified detector fine-tuned with new examples acquired directly from the target environment e. We test this approach in 10 different photorealistic simulated environments, producing a general detector GD-e without any knowledge about e and three qualified detectors (starting from GD-e) fine-tuned with 25%, 50%, and 75% of the new examples collected in e. We call these detectors QD25e, QD50e, and QD75e.
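
As an illustration of this qualification procedure (not the repository's actual training code: a torchvision Faster R-CNN and synthetic data stand in for the paper's detector and dataset), the splitting and fine-tuning logic could be sketched as follows:

```python
# Sketch of the qualification step: fine-tune a general detector with 25/50/75%
# of the examples collected in the target environment e. A torchvision Faster R-CNN
# is used as a stand-in for the paper's detector; the data here are synthetic.
import copy
import torch
from torch.utils.data import Dataset, Subset, DataLoader
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class ToyDoorDataset(Dataset):
    """Stand-in for the examples acquired in the target environment e."""
    def __len__(self):
        return 40
    def __getitem__(self, idx):
        image = torch.rand(3, 224, 224)
        target = {'boxes': torch.tensor([[30.0, 40.0, 120.0, 200.0]]),
                  'labels': torch.tensor([1])}   # e.g. 1 = closed door, 2 = open door
        return image, target

def collate(batch):
    return tuple(zip(*batch))

dataset = ToyDoorDataset()
order = torch.randperm(len(dataset)).tolist()
general = fasterrcnn_resnet50_fpn(weights=None, num_classes=3)   # stand-in for GD-e

for fraction in (0.25, 0.50, 0.75):                              # QD25e, QD50e, QD75e
    model = copy.deepcopy(general)                               # each QD starts from the general detector
    subset = Subset(dataset, order[:int(fraction * len(dataset))])
    loader = DataLoader(subset, batch_size=2, shuffle=True, collate_fn=collate)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    model.train()
    for images, targets in loader:                               # one epoch per fraction, for brevity
        losses = model(list(images), list(targets))
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f'fine-tuned with {int(fraction * 100)}% of the new examples')
```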

Our results show that the proposed general detector GD-e correctly detects doors even without any knowledge of the target environment e.

We also demonstrate that the detection accuracy increases with consecutive fine-tuning operations.

Another interesting outcome is that the largest performance increment is obtained with the smallest fine-tuning set (QD25e), which requires the lowest effort to acquire and label the new examples.