Closed lawpdas closed 2 years ago
Hello, thanks for your interest in our dataset.
Our ground truth poses are estimated offline using additional external information, namely highly accurate, high-resolution laser scans of the environment. As a result, they are generally more accurate than the on-device ARCamera poses. More importantly, the ground truth poses are expressed in the coordinate system of the laser scans, whereas the ARKit poses (ARCamera.transform) live in a completely different coordinate system, since ARKit knows nothing about the laser scans. Estimating the ground truth poses in the laser-scan coordinate system is what allows us to render ground truth depth maps aligned with the RGB frames from the laser-scan meshes. This would not be possible with the ARCamera poses, because they are defined in a different coordinate system.
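To make the coordinate-system point concrete, here is a minimal sketch in numpy. All names (`T_scan_arkit`, `T_arkit_cam`) and the numeric values are hypothetical, invented for illustration; the idea is just that the laser-scan frame and ARKit's world frame differ by some rigid transform, and only poses expressed in the scan frame can be used to render depth from the scan meshes:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 rigid transform (camera-to-world) from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Arbitrary example values: a 90-degree rotation about z and some translations.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
# Hypothetical rigid alignment between ARKit's world frame and the scan frame.
T_scan_arkit = make_pose(Rz, np.array([2.0, -1.0, 0.5]))
# A hypothetical ARKit pose (ARCamera.transform), in ARKit's own world frame.
T_arkit_cam = make_pose(np.eye(3), np.array([0.1, 0.2, 0.3]))

# Only after applying the frame alignment does the pose live in scan
# coordinates, where the laser-scan mesh (and hence depth rendering) is defined.
T_scan_cam = T_scan_arkit @ T_arkit_cam

# A 3D point in the camera frame maps to the same scan-frame point whether we
# use the scan-frame pose directly or go through ARKit's frame and then align.
p_cam = np.array([0.0, 0.0, 1.0, 1.0])  # homogeneous point
p_scan_direct = T_scan_cam @ p_cam
p_scan_via_arkit = T_scan_arkit @ (T_arkit_cam @ p_cam)
assert np.allclose(p_scan_direct, p_scan_via_arkit)
```

Without knowing `T_scan_arkit`, the raw ARKit poses cannot be related to the scan meshes, which is why ground truth poses estimated directly in the scan frame are needed.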
Thank you for your helpful reply. Could you share the data collection app? Can I find it in the App Store?
We currently have no plans to share the data collection app.
Hi, thanks for your great work! I'm building a data collection app with ARKit. I noticed that you estimate the ground truth poses instead of using the ARCamera poses (ARCamera.transform). Why? Is it because the ARCamera poses are inaccurate?