qcr / benchbot

BenchBot is a tool for seamlessly testing & evaluating semantic scene understanding tools in both realistic 3D simulation & on real robots

Part V: Comprehensively Evaluating Performance with BenchBot Power Tools #25

Closed. ErnestoMM94 closed this issue 3 years ago.

ErnestoMM94 commented 3 years ago

I was wondering whether performance evaluation only works with object detection. Is it possible to focus on just SLAM, without object detection, and evaluate those results?

Example:

```
benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1 --native python my_solution.py
```

btalb commented 3 years ago

Thanks for the question @ErnestoMM94.

Evaluation requires some kind of well-defined evaluation methodology. For SLAM, for example, the metric could be the cumulative error in the final pose estimate, the IoU error in the final object map, or some hybrid of the two.
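To make that concrete, here's a minimal sketch of what a final-pose error metric could look like. The 4x4 homogeneous pose representation and the function name are illustrative assumptions, not BenchBot's actual results format:

```python
# A minimal sketch of one possible "final pose error" SLAM metric.
# The 4x4 homogeneous transform format and names here are illustrative
# assumptions, not BenchBot's actual results format.
import numpy as np

def final_pose_error(estimated: np.ndarray, ground_truth: np.ndarray):
    """Return (translation_error_m, rotation_error_rad) between two 4x4 poses."""
    # Residual transform from the ground-truth pose to the estimated pose
    delta = np.linalg.inv(ground_truth) @ estimated
    trans_err = np.linalg.norm(delta[:3, 3])
    # Angle of the residual rotation, recovered from the trace of its
    # rotation block: angle = arccos((tr(R) - 1) / 2)
    cos_angle = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.arccos(cos_angle)
    return trans_err, rot_err
```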

Evaluation methods are defined through BenchBot add-ons; an add-on can add new functionality to the system, including custom evaluation methods. To create your own you'll need to:

  1. Clearly define what your "SLAM evaluation method" measures
  2. Implement that measurement / evaluation metric in a custom add-on

The documentation for the benchbot_addons repository should help you through this process, and explain how to add custom evaluation functionality if you need it.
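As a rough illustration of step 2, an evaluation method ultimately boils down to a Python function that scores submitted results against ground truth. Everything below is an assumption made for the sketch — the `evaluate(results, ground_truth)` signature and the dict layout of the data — so check the benchbot_addons documentation for the actual interface and metadata an add-on must provide:

```python
# my_slam_eval.py -- a hypothetical evaluation module for a custom add-on.
# The evaluate(results, ground_truth) entry point and the dict-of-poses data
# layout are assumptions for illustration only; the benchbot_addons docs
# describe the real interface an evaluation-method add-on must expose.
import numpy as np

def evaluate(results: dict, ground_truth: dict) -> dict:
    """Score submitted results against ground truth, returning named scores."""
    # Assume both dicts carry a final robot pose as a 4x4 homogeneous matrix
    est = np.asarray(results["final_pose"])
    gt = np.asarray(ground_truth["final_pose"])
    delta = np.linalg.inv(gt) @ est  # residual transform
    return {
        "translation_error_m": float(np.linalg.norm(delta[:3, 3])),
        "rotation_error_rad": float(np.arccos(
            np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))),
    }
```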