bialpio opened 3 years ago
We have a few options here:
A bit of background on Chrome's current implementation: the renderer receives new depth data on every frame (via a shared-memory buffer coming from the device process). The allocation + copy on the device side is unavoidable, since we need some way of getting the data out of ARCore. The depth buffer is then passed to and stored on the renderer side, and it is copied every time an app requests an instance of XRDepthInformation.
More thoughts on the above options and how they could change Chrome's implementation.
Scoping depth information to XRFrame allows us to skip a copy of the depth buffer when the application requests depth information: since instances of XRDepthInformation are usable only while a frame is active, they can share the underlying depth buffer among themselves (and once the frame becomes inactive, we can reclaim the buffer). The drawback is that the app could accidentally overwrite entries in this buffer, and those overwritten entries would then be visible through every other XRDepthInformation instance.
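A minimal sketch of the frame-scoped sharing idea, using plain JS objects as stand-ins for `XRFrame` and `XRDepthInformation` (the class and method names here are illustrative, not the real WebXR interfaces):

```javascript
// Mock of a frame that hands out depth-information objects which all
// alias one underlying buffer, and invalidates them when it ends.
class MockFrame {
  constructor(depthBuffer) {
    this._active = true;
    this._depthBuffer = depthBuffer; // one shared buffer per frame
  }
  getDepthInformation() {
    if (!this._active) {
      throw new Error("InvalidStateError: frame is not active");
    }
    // Every instance shares the same underlying buffer - no copy.
    return { data: this._depthBuffer };
  }
  end() {
    this._active = false;
    this._depthBuffer = null; // the buffer can be reclaimed here
  }
}

const frame = new MockFrame(new Float32Array([1.5, 2.0, 3.25]));
const a = frame.getDepthInformation();
const b = frame.getDepthInformation();

// Both instances alias the same memory, so a write through one is
// visible through the other - the drawback described above.
a.data[0] = 99;
console.log(b.data[0]); // 99

frame.end(); // further getDepthInformation() calls now throw
```

The usage at the bottom demonstrates both the upside (no per-request copy) and the aliasing hazard the comment above warns about.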
A quote from Twitter user @mrmaxm: https://twitter.com/mrmaxm/status/1333516895975305218
"Or look into similar approach of Hand Tracking API with providing allocated array into function, which will fill it with data. Allocations - is very important issue with realtime apps."
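The suggested pattern can be sketched as follows, modeled loosely on the fill-style methods of the Hand Input API (e.g. `XRFrame.fillJointRadii()`), which write into a caller-provided array and return a boolean. The `fillDepthData` function and the sample buffers below are hypothetical stand-ins, not a proposed API shape:

```javascript
// "Caller provides the buffer" pattern: the API fills a pre-allocated
// array instead of allocating a new one on every request.
function fillDepthData(source, out) {
  // Report failure instead of allocating when the buffer is too small,
  // mirroring how the fill-style Hand Input methods signal insufficient
  // space via their return value.
  if (out.length < source.length) {
    return false;
  }
  out.set(source);
  return true;
}

// The app allocates once, outside the render loop...
const depthScratch = new Float32Array(4);

// ...and reuses the same buffer every frame, so steady-state frames
// incur no allocations - the real-time concern raised in the quote.
const frameDepth = new Float32Array([0.5, 1.0, 1.5, 2.0]);
const ok = fillDepthData(frameDepth, depthScratch);
console.log(ok, depthScratch[2]); // true 1.5
```

The trade-off relative to the frame-scoped sharing option above is that the app pays one copy per request, but owns the memory outright, so there is no aliasing hazard and no hidden per-frame allocation.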