Closed: zixuanz13 closed this issue 2 years ago
Please file a bug. https://unity3d.com/unity/qa/bug-reporting
This sounds like the job of the ARFrame. ARKit is responsible for managing the video and lidar streams and providing them as part of the ARFrame, which is a set of synchronized data.
AR Foundation doesn't independently manage those streams, so the AVCaptureDataOutputSynchronizer does not seem relevant to me.
What problem are you trying to solve exactly? Can you post a video or elaborate on what you're seeing?
Thanks for your explanation. It seems that we do not have to worry about synchronization in AR Foundation.
I am using RGBD data from an iPhone for some mapping applications. The data is sometimes noisy and I am looking for the root cause. I suspected either a time-sync issue or inaccurate device localization; now I guess device tracking might be the problem.
How can we make sure the depth images are synchronized with their corresponding camera images? Out of sync can be a problem when the device is moving or the scene is dynamic.
For example, we call `TryAcquireLatestCpuImage` and `TryAcquireEnvironmentDepthCpuImage` on `ARCameraManager.frameReceived`. It seems that in iOS we can use `AVCaptureDataOutputSynchronizer` to do this: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_depth_using_the_lidar_camera

Thank you!