bialpio opened 2 years ago
Perhaps a stupid question, but how is this different from existing camera access APIs such as `getUserMedia()`?

It allows sites to obtain camera images that are synchronized with WebXR's `XRPose`s. If a site were to obtain camera images in some other manner (like `getUserMedia()`), it would be unable to correlate them with the spatial data it can get by integrating with WebXR.
Does it otherwise provide the same data as `getUserMedia()`?
No, there are some differences. The big one is that the camera image presented to the site is cropped compared to the image available via `getUserMedia()`. The spec mandates that only the part of the image that is aligned to the `XRView` will be accessible to the site, and in the smartphone AR case, that cropped camera image is also what is displayed to the user. We also expose the projection matrix of the camera (it is the same as the one on the `XRView` due to the alignment requirement). The other difference is the way the camera image is exposed: we surface it as a `WebGLTexture`, which the site can then use for rendering. This should be a more direct approach than obtaining an `ImageBitmap` (via an `ImageCapture` constructed from a `MediaStreamTrack`) and uploading it to a `WebGLTexture`.
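As a sketch of how this surfaces in practice: inside the session's frame loop, the module adds an `XRCamera` to each `XRView` (when a camera image is available for that view) and `XRWebGLBinding.getCameraImage()` returns the pixels as a `WebGLTexture`. The names `xrRefSpace` and `glBinding` below are assumed to have been set up during session initialization.

```javascript
// Assumes `session` was created with the "camera-access" feature,
// `glBinding` is an XRWebGLBinding for the session and WebGL context,
// and `xrRefSpace` is an XRReferenceSpace obtained from the session.
function onXRFrame(time, frame) {
  const pose = frame.getViewerPose(xrRefSpace);
  if (pose) {
    for (const view of pose.views) {
      // view.camera is only present when a camera image for this
      // XRView is available in the current frame.
      if (view.camera) {
        // The returned WebGLTexture is valid for the duration of the
        // current frame and is aligned with the view, so
        // view.projectionMatrix doubles as the camera's projection.
        const cameraTexture = glBinding.getCameraImage(view.camera);
        // ... bind cameraTexture and render ...
      }
    }
  }
  frame.session.requestAnimationFrame(onXRFrame);
}
```

Because the texture is already in the WebGL context, it can be sampled directly in a shader without the intermediate `ImageBitmap` upload described above.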
Request for Mozilla Position on an Emerging Web Specification
Other information
The WebXR Raw Camera Access Module extends the capabilities of the core WebXR Device API by allowing sites to request the `"camera-access"` feature when creating XR sessions. The feature allows sites to obtain access to camera pixels (via an integration with WebGL, exposing the pixels as a `WebGLTexture`). This capability has long been requested by developers.
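A minimal session request might look like the following sketch. The `"camera-access"` feature descriptor comes from the module; the rest is the standard WebXR session-creation pattern, and `canvas` is an assumed `<canvas>` element.

```javascript
// Request an immersive AR session with raw camera access.
// Using requiredFeatures means session creation fails (and the user
// is not prompted further) if camera access cannot be granted.
const session = await navigator.xr.requestSession("immersive-ar", {
  requiredFeatures: ["camera-access"],
});

// An XRWebGLBinding ties the session to an XR-compatible WebGL
// context; it exposes getCameraImage(), which returns the camera
// pixels for a view as a WebGLTexture.
const gl = canvas.getContext("webgl2", { xrCompatible: true });
const glBinding = new XRWebGLBinding(session, gl);
```

Sites that can degrade gracefully without camera pixels could instead list the feature under `optionalFeatures` and check for `view.camera` at render time.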