appliedAI-Initiative / orb_slam_2_ros

A ROS implementation of ORB_SLAM2

Mappoint / pose relation not correct? (rviz) #48

Open flo981 opened 4 years ago

flo981 commented 4 years ago

Hey,

it seems like there is an error in the relation between the map points and the pose? I don't understand why there are so many points directly at the current pose position. I'm using a stereo camera setup, so the depth information should be provided. See the pictures for the current camera frame and the visualization from two different positions.

[three screenshots: current camera frame and the rviz visualization from two positions]

lennarthaller commented 4 years ago

Hey, this looks like DepthMapFactor is not set correctly, meaning the scale of the points is different from the scale of the tf frame.
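For reference, this is the kind of settings-file entry meant here. The value below is the TUM RGB-D default shipped with ORB_SLAM2, not necessarily correct for another camera:

```yaml
# Fragment of an ORB_SLAM2 settings .yaml (TUM RGB-D convention).
# DepthMapFactor converts raw depth-image values to metres:
# a raw value of 5000 corresponds to 1 m.
DepthMapFactor: 5000.0
```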

flo981 commented 4 years ago

Thanks for your fast reply! :) Where does this factor come from? Camera calibration? It's not in my .yaml file. And what should it be?

flo981 commented 4 years ago

> DepthMapFactor is a scale factor that multiplies the input depthmap (if needed) if you are using a RGB-D camera. This is used in the TUM RGB-D dataset. (from https://github.com/raulmur/ORB_SLAM2/issues/89#issuecomment-220551549)

But I am using a stereo, and not RGB-D camera...

flo981 commented 4 years ago

In Tracking.cc :


    if(sensor==System::STEREO || sensor==System::RGBD)
    {
        mThDepth = mbf*(float)fSettings["ThDepth"]/fx;
        cout << endl << "Depth Threshold (Close/Far Points): " << mThDepth << endl;
    }

    if(sensor==System::RGBD)
    {
        mDepthMapFactor = fSettings["DepthMapFactor"];
        if(fabs(mDepthMapFactor)<1e-5)
            mDepthMapFactor=1;
        else
            mDepthMapFactor = 1.0f/mDepthMapFactor;
    }

https://github.com/appliedAI-Initiative/orb_slam_2_ros/blob/e978a2c1d1db6abc93c9a7e0c5bbca01739d8356/orb_slam2/src/Tracking.cc#L136

lennarthaller commented 4 years ago

I just went through the code, and you are right: in Tracking.cc, where the parameters are read, the setting is only used for RGB-D cams. I have to say that I work with mono or RGB-D cams most of the time and haven't really used the stereo node.

lennarthaller commented 4 years ago

The algorithm treats the RGB-D and stereo information the same most of the time; if you look in the paper, they bring the stereo information into the same format as the RGB-D data and use the same pipeline.

lennarthaller commented 4 years ago

So to me it still looks like it's some scaling issue. Maybe you can try to apply the DepthMapFactor to the stereo image?

flo981 commented 4 years ago

I tried changing the ThDepth parameter and also adding DepthMapFactor to the .yaml file. The only thing that changes: the more I decrease ThDepth, the fewer features are extracted. Adding DepthMapFactor doesn't do anything, but that is also visible in the code I posted above (for stereo).

But this seems to be an issue of ORB_SLAM2 itself, so should I ask raulmur?

flo981 commented 4 years ago

I'm also thinking of using an RGB-D camera instead now... there I won't have this issue, right?

flo981 commented 4 years ago

But the original ORB_SLAM2 implementation doesn't have this issue. At least not in the map viewer...

[screenshot: original ORB_SLAM2 map viewer]

flo981 commented 4 years ago

And when I publish the map points and pose there, it also looks right:

[screenshot: map points and pose published from the original implementation]

lennarthaller commented 4 years ago

> I'm also thinking of using an RGB-D camera instead now... there I won't have this issue, right?

At least we never encountered the issue there ;)

lennarthaller commented 4 years ago

> And when I publish the map points and pose there, it also looks right: [screenshot]

Interesting, thank you for investigating. There is probably a bug in the stereo code of the node then. Like I said, we usually use the mono or RGB-D nodes, so the stereo code is much less tested.

If you want to investigate further and fix the bug I am happy to merge your fix!

flo981 commented 4 years ago

I'll take a look at it, but I think this is beyond my skills... Is it possible for you to take a look as well? It would be crucial for my master's thesis...

lennarthaller commented 4 years ago

I am pretty busy atm; I won't be able to investigate that in the next couple of weeks.

raclab commented 4 years ago

Is there any detailed description of how to use the package with ROS? For example, what should I do to view the point cloud in rviz? Thank you