qcr / benchbot

BenchBot is a tool for seamlessly testing and evaluating semantic scene understanding tools, both in realistic 3D simulation and on real robots
BSD 3-Clause "New" or "Revised" License

Semantic SLAM example bad results #46

Closed: ivbelkin closed this issue 2 years ago

ivbelkin commented 2 years ago

Hello!

I successfully ran the Semantic SLAM example on the develop_1 environment batch, got results in JSON format, and calculated OMQ for every environment. However, my overall score was only about 0.013, rather than the 0.13 mentioned in the tutorial. Is this a typo in the tutorial, or is there a problem on my side?
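For reference, I computed the overall figure by averaging the per-environment scores with something like the sketch below (the `results` directory and the `"omq"` JSON key are illustrative assumptions about my own file layout, not a documented BenchBot format):

```python
# Minimal sketch: average per-environment OMQ scores into an overall score.
import json
from pathlib import Path

scores = []
for path in sorted(Path("results").glob("*.json")):  # one result file per environment
    with path.open() as f:
        scores.append(json.load(f)["omq"])  # hypothetical key holding the OMQ score

overall = sum(scores) / len(scores)
print(f"Overall OMQ across {len(scores)} environments: {overall:.3f}")
```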

Thank you.

btalb commented 2 years ago

We will test this on our end and see if we can reproduce it. It may be related to #43, but we will have to run some tests to be sure.

ivbelkin commented 2 years ago

After the vertical shift was fixed in #47, I reran the semantic SLAM baseline on miniroom:1 and still got bad results: 0.005 OMQ for this single environment. I visualised the ground truth (green) vs the predictions (red), and they look very different.

[Screenshot (2022-04-01): ground-truth objects (green) vs predicted objects (red) in miniroom:1]
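The plot was produced with roughly the following (a rough sketch; the file names and the `"objects"`/`"centroid"` JSON layout are assumptions about my own dumps, not the actual BenchBot result schema):

```python
# Rough sketch: scatter ground-truth (green) vs predicted (red) object centroids.
import json
import matplotlib.pyplot as plt

with open("ground_truth.json") as f:   # hypothetical file names
    gt = json.load(f)["objects"]
with open("predictions.json") as f:
    pred = json.load(f)["objects"]

fig, ax = plt.subplots()
ax.scatter([o["centroid"][0] for o in gt], [o["centroid"][1] for o in gt],
           c="green", label="ground truth")
ax.scatter([o["centroid"][0] for o in pred], [o["centroid"][1] for o in pred],
           c="red", label="predictions")
ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.legend()
plt.show()
```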

david2611 commented 2 years ago

As part of issue #53 we found that the camera intrinsics had been set up incorrectly for Omniverse and were not utilising the camera_info observations. This has now been fixed.
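For anyone still on an older checkout, the core idea of the fix is to read the intrinsics from the camera_info observation instead of hard-coding them. A minimal sketch of that idea (the flat row-major `K` field follows the ROS `sensor_msgs/CameraInfo` convention; the exact BenchBot observation schema may differ, so treat the field names as assumptions):

```python
# Minimal sketch: build the intrinsic matrix from a camera_info observation
# and use it to back-project a depth pixel into the camera frame.
import numpy as np

def intrinsics_from_camera_info(camera_info):
    # 'K' is assumed to be a row-major flat list of 9 values (ROS convention)
    return np.asarray(camera_info["K"], dtype=float).reshape(3, 3)

def depth_pixel_to_point(u, v, depth, K):
    fx, fy = K[0, 0], K[1, 1]  # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]  # principal point
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])
```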

Qualitative results are shown in the GIF below, with the map generated by the tutorial toggled on and off over the ground-truth map.

[GIF: tutorial-generated semantic map toggled over the ground-truth map]

Quantitative results are different (0.10 OMQ across the develop_1 batch of environments), but that is currently attributed to the changed appearance of the environments causing incorrect detections at the start of the pipeline, rather than the gross errors in object placement seen here.

@ivbelkin can you confirm you get similar results after updating the example add-on?

ivbelkin commented 2 years ago

Yes, I now get similar results! Thank you! I think the issue can be closed.