I'm using Project Tango for a 3D reconstruction project, and I'm curious about the accuracy of its motion tracking. I know that Tango fuses visual features with IMU data, so I expected pixel-level accuracy, by which I mean the reprojection error. In practice, however, I found the drift between frames is fairly large. More specifically:
1. When the motion is purely translational in the xy plane, the tracking error is quite large. I expected this error to be small in a feature-based SLAM system.
2. When the motion is purely translational along the z axis, the tracking is terrible. This is understandable to me.
3. When the motion is purely rotational around the y axis (rotating horizontally), the error is relatively small. I suspect this is thanks to the IMU.
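To make "pixel accuracy" concrete, here is a minimal sketch of how I'm thinking about reprojection error, using a simple pinhole model with made-up intrinsics (the `FX`/`FY`/`CX`/`CY` values are assumptions for illustration, not Tango's actual calibration):

```java
// Hypothetical pinhole-camera sketch illustrating what "reprojection error"
// means here; all numbers are made up for the example.
public class ReprojectionDemo {
    // Assumed camera intrinsics (focal lengths and principal point, pixels).
    static final double FX = 520.0, FY = 520.0, CX = 320.0, CY = 240.0;

    // Project a 3D point (camera coordinates, z > 0) to pixel coordinates.
    static double[] project(double x, double y, double z) {
        return new double[] { FX * x / z + CX, FY * y / z + CY };
    }

    // Reprojection error: pixel distance between where a tracked 3D point
    // was actually observed in the image and where the estimated pose says
    // it should project.
    static double reprojectionError(double[] observedPx, double[] pointInEstimatedCamFrame) {
        double[] predicted = project(pointInEstimatedCamFrame[0],
                                     pointInEstimatedCamFrame[1],
                                     pointInEstimatedCamFrame[2]);
        double du = predicted[0] - observedPx[0];
        double dv = predicted[1] - observedPx[1];
        return Math.sqrt(du * du + dv * dv);
    }

    public static void main(String[] args) {
        // A point 1 m in front of the camera, observed at its true pixel.
        double[] truePx = project(0.10, 0.05, 1.0);
        // A 5 mm translational pose error along x already produces a
        // multi-pixel residual at this depth.
        double err = reprojectionError(truePx, new double[] { 0.105, 0.05, 1.0 });
        System.out.println("reprojection error (px): " + err);
    }
}
```

This is what I mean when I say the drift I'm seeing is well beyond pixel accuracy: even a few millimeters of pose error at close range shows up as several pixels of reprojection residual.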
I also tried running ICP on the accumulated point clouds, but it doesn't help much: because the depth resolution is quite low, a small error in 3D space easily causes a reprojection error of more than 20 pixels in the image.
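For reference, this is the kind of refinement I mean by ICP. The sketch below is heavily simplified: one iteration, translation only, with hypothetical point clouds (a full ICP also solves for rotation, typically via an SVD of the cross-covariance of the matched pairs, and iterates until convergence):

```java
import java.util.Arrays;

// Simplified one-iteration, translation-only ICP sketch on made-up clouds.
public class IcpSketch {
    // For each source point, find the index of its nearest target point.
    static int nearest(double[] p, double[][] target) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < target.length; i++) {
            double dx = p[0] - target[i][0], dy = p[1] - target[i][1], dz = p[2] - target[i][2];
            double d = dx * dx + dy * dy + dz * dz;
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    // One ICP step: match each source point to its nearest target point,
    // then return the mean offset that best aligns the matched pairs.
    static double[] icpTranslationStep(double[][] source, double[][] target) {
        double[] t = new double[3];
        for (double[] p : source) {
            double[] q = target[nearest(p, target)];
            for (int k = 0; k < 3; k++) t[k] += (q[k] - p[k]) / source.length;
        }
        return t;
    }

    public static void main(String[] args) {
        double[][] target = { {0, 0, 1}, {1, 0, 1}, {0, 1, 2} };
        // Source = target shifted by (0.1, 0, -0.05); the step recovers it.
        double[][] source = { {0.1, 0, 0.95}, {1.1, 0, 0.95}, {0.1, 1, 1.95} };
        System.out.println(Arrays.toString(icpTranslationStep(source, target)));
    }
}
```

The problem in my case is the nearest-neighbor matching step: with noisy, low-resolution depth, the correspondences themselves are unreliable, so refinement of this kind cannot pull the pose back to pixel accuracy.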
The way I'm using the pose data is this: I record the timestamp of each frame and, at the end, query each frame's pose with the mTango.getPoseAtTime() API.
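Since the Tango SDK can't be shown self-contained here, this is a stand-in sketch of that record-then-query pattern. The `PoseBuffer` class and its linear interpolation are my own illustration, based on my understanding that getPoseAtTime() interpolates between internal pose samples (an assumption; real poses also carry rotation, which would need quaternion slerp and is omitted for brevity):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical stand-in for getPoseAtTime(): buffers (timestamp, translation)
// samples and linearly interpolates at a queried frame timestamp.
public class PoseBuffer {
    private final TreeMap<Double, double[]> samples = new TreeMap<>();

    // Record a pose sample as it arrives from the tracker.
    void record(double timestamp, double[] translation) {
        samples.put(timestamp, translation);
    }

    // Query the translation at an arbitrary frame timestamp, interpolating
    // between the nearest recorded samples (clamping at the ends).
    double[] getPoseAtTime(double t) {
        Map.Entry<Double, double[]> lo = samples.floorEntry(t);
        Map.Entry<Double, double[]> hi = samples.ceilingEntry(t);
        if (lo == null) return hi.getValue();
        if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
        double a = (t - lo.getKey()) / (hi.getKey() - lo.getKey());
        double[] out = new double[3];
        for (int k = 0; k < 3; k++)
            out[k] = (1 - a) * lo.getValue()[k] + a * hi.getValue()[k];
        return out;
    }

    public static void main(String[] args) {
        PoseBuffer buf = new PoseBuffer();
        buf.record(0.0, new double[] {0, 0, 0});
        buf.record(1.0, new double[] {0.2, 0, 0});
        // A frame captured at t = 0.25 gets an interpolated translation.
        System.out.println(buf.getPoseAtTime(0.25)[0]);
    }
}
```

One thing I'd like confirmed is whether querying all poses in a batch at the end, as I do, behaves any differently from querying them as the frames arrive.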
So I'm curious whether I'm doing this correctly. Are there any suggestions for improving the motion tracking?
Thanks all!