Hi there,
Thank you for providing this toolkit—it’s been incredibly helpful!
I have a question about the alignment between the raw.glb mesh and the ARKit camera poses. As I understand it, the textured raw.glb is reconstructed from the LiDAR sensor, while the camera poses come from ARKit's tracking (RGB) camera, and there is an inherent extrinsic offset between those two sensors.
Do I need to manually align the mesh with the camera poses, or does ARKit already account for this offset, so that the exported mesh is expressed in the same ARKit world coordinate system as the poses?
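For context, here is the quick sanity check I had in mind: project the raw.glb vertices into one frame using its ARKit pose and intrinsics, and overlay them on the RGB image. If the mesh and the poses share the ARKit world frame, the projected points should line up with the image content. This is only a minimal sketch; the pose/intrinsics file names and values are placeholders, and the assumption that the pose is a 4x4 camera-to-world matrix with an ARKit-style -Z-forward camera (flipped to an OpenCV-style +Z-forward camera before projecting) is mine, not from the toolkit.

```python
import numpy as np
import trimesh
import matplotlib.pyplot as plt

# Load the exported mesh; vertices are assumed to be in the ARKit world frame.
mesh = trimesh.load("raw.glb", force="mesh")
verts = np.asarray(mesh.vertices)                      # (N, 3)

cam_to_world = np.loadtxt("frame_000000_pose.txt")     # hypothetical 4x4 pose file
fx, fy, cx, cy = 1456.0, 1456.0, 960.0, 720.0          # placeholder intrinsics (pixels)
image = plt.imread("frame_000000.jpg")                 # hypothetical RGB frame

# World -> camera
world_to_cam = np.linalg.inv(cam_to_world)
pts_cam = (world_to_cam[:3, :3] @ verts.T + world_to_cam[:3, 3:4]).T

# ARKit/OpenGL camera (-Z forward) -> OpenCV camera (+Z forward)
pts_cv = pts_cam * np.array([1.0, -1.0, -1.0])

# Keep points in front of the camera and project with the pinhole model
front = pts_cv[:, 2] > 0.1
u = fx * pts_cv[front, 0] / pts_cv[front, 2] + cx
v = fy * pts_cv[front, 1] / pts_cv[front, 2] + cy

# Overlay projected vertices on the image to eyeball the alignment
plt.imshow(image)
plt.scatter(u, v, s=0.1, c="r", alpha=0.3)
plt.xlim(0, image.shape[1])
plt.ylim(image.shape[0], 0)
plt.show()
```

If the overlay is systematically shifted, that would suggest the LiDAR-to-camera offset still needs to be applied; if it lines up, I would conclude the export is already in the ARKit coordinate system.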
I would greatly appreciate any clarification on this matter. Thank you in advance!