For stereo depth cameras (like the Intel RealSense D415 and D435), running ORB-SLAM2 in RGB-D mode requires passing a parameter Camera.bf to the SLAM system; it is the baseline of the IR projector times fx, like the following:
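(A minimal sketch of the relevant entries in an ORB-SLAM2 settings YAML; the numbers are illustrative assumptions for a D435, roughly a 0.050 m IR baseline and fx ≈ 615 px, not values from a datasheet — use your own calibration:)

```yaml
# Camera.bf = stereo baseline (meters) * fx (pixels)
# Assumed illustrative values for a RealSense D435:
#   baseline ~= 0.050 m, fx ~= 615 px
Camera.fx: 615.0
Camera.bf: 30.75   # 0.050 * 615.0
```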
For the Azure Kinect camera, the hardware specifications page doesn't list this parameter. I also asked about this in an issue on the Azure Kinect SDK GitHub, but was told that Azure Kinect's time-of-flight sensor uses uniform illumination and computes depth on a completely different principle from stereo matching.
So does the parameter 'Camera.bf' really matter when running ORB-SLAM2 with an Azure Kinect camera?