It doesn't look like Apple's ARKit API allows access to the raw LiDAR data.
It would appear that the iPad Pro LiDAR sensor is not a depth camera in the traditional sense. From this teardown video you can see that the projection pattern is too sparse, and the projector/camera baseline is too narrow, for it to be a structured light camera. https://www.youtube.com/watch?time_continue=99&v=xz6CExnGw9w&feature=emb_title
Since it is supposed to use time of flight to measure depth, my assumption is that the sensor treats each of those projected dots as a single depth measurement. That would mean a point cloud would only have about 400-500 points per frame.
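For context, ARFoundation does not surface those raw LiDAR returns (as noted above); the closest per-frame point data it exposes today is ARKit's visual feature-point cloud via ARPointCloudManager. Here is a minimal sketch for logging how many points that actually yields each frame, assuming an ARPointCloudManager is set up on the AR Session Origin (the component name is just illustrative):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Illustrative only: logs how many feature points ARKit exposes each frame.
// Attach to the same GameObject as the ARPointCloudManager.
[RequireComponent(typeof(ARPointCloudManager))]
public class PointCloudCounter : MonoBehaviour
{
    ARPointCloudManager m_PointCloudManager;

    void Awake()
    {
        m_PointCloudManager = GetComponent<ARPointCloudManager>();
    }

    void Update()
    {
        int total = 0;
        foreach (ARPointCloud cloud in m_PointCloudManager.trackables)
        {
            // 'positions' is a nullable NativeSlice in recent ARFoundation versions.
            NativeSlice<Vector3>? positions = cloud.positions;
            if (positions.HasValue)
                total += positions.Value.Length;
        }
        Debug.Log($"ARKit feature points this frame: {total}");
    }
}
```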
There is a new API in ARKit 3.5 that provides a reconstructed mesh of a scanned environment. Unity will need to add this part of the API into ARFoundation. https://developer.apple.com/documentation/arkit/world_tracking/visualizing_and_interacting_with_a_reconstructed_scene
Outside of that, the LiDAR sensor greatly improves existing ARKit features. Plane detection happens almost instantaneously. Body segmentation has much more accurate depth values. 3D skeletons are tracked at a consistent scale. And overall AR camera tracking is much more stable and recovers much better.
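As a quick way to see the faster plane detection for yourself, here is a minimal sketch that logs the time until the first plane appears, assuming an ARPlaneManager on the AR Session Origin (the component name FirstPlaneTimer is just illustrative):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Illustrative only: logs how long the session takes to detect its first plane.
// Attach to the same GameObject as the ARPlaneManager.
[RequireComponent(typeof(ARPlaneManager))]
public class FirstPlaneTimer : MonoBehaviour
{
    ARPlaneManager m_PlaneManager;
    float m_StartTime;
    bool m_Logged;

    void OnEnable()
    {
        m_PlaneManager = GetComponent<ARPlaneManager>();
        m_PlaneManager.planesChanged += OnPlanesChanged;
        m_StartTime = Time.realtimeSinceStartup;
    }

    void OnDisable()
    {
        m_PlaneManager.planesChanged -= OnPlanesChanged;
    }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        if (!m_Logged && args.added.Count > 0)
        {
            m_Logged = true;
            Debug.Log($"First plane detected after {Time.realtimeSinceStartup - m_StartTime:F2}s");
        }
    }
}
```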
Thank you for the information. The video was really helpful. It's unfortunate that there are only a few dots; I was expecting something like Tango's sensor.
Cheers
The new feature in ARKit 3.5 will be provided via the existing ARMeshManager in ARFoundation. We will announce its release on this forum post when it is ready.
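Until that release, here is a rough sketch of how mesh updates could be consumed through ARMeshManager once it ships; the meshesChanged event and args shown follow the pattern of the other ARFoundation managers, so treat the exact details as provisional:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: listens for mesh updates from ARMeshManager and logs vertex counts.
// ARMeshManager is expected to live on a child of the AR Session Origin,
// with a MeshFilter prefab assigned as its meshPrefab.
public class MeshUpdateLogger : MonoBehaviour
{
    [SerializeField] ARMeshManager m_MeshManager;

    void OnEnable()
    {
        m_MeshManager.meshesChanged += OnMeshesChanged;
    }

    void OnDisable()
    {
        m_MeshManager.meshesChanged -= OnMeshesChanged;
    }

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        foreach (MeshFilter meshFilter in args.added)
            Debug.Log($"New mesh chunk: {meshFilter.sharedMesh.vertexCount} vertices");

        Debug.Log($"Updated chunks: {args.updated.Count}, removed: {args.removed.Count}");
    }
}
```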
Thank you for the answer. I think you can close this question.
So if I get the new iPad, will I receive point clouds from the LiDAR with the current ARFoundation, or should I wait until the new API comes?
Also, will the points be in Unity's world space?
Cheers!