nbutko opened this issue 2 years ago
/facetoface
The current explainer suggests that the main use of this for smartphone AR is render effects, and for headsets it's computer vision. It also suggests the current API is focussed on the smartphone render-effect use case.
Firstly, I'd disagree that smartphone use cases will only care about render effects - presumably 8th Wall, as a consumer of the API, are using this to do CV? At Zappar we generally run our CV in a worker, so the synchronous nature of the current API isn't well suited there. Another big use case would be media capture, but the lack of a canvas to capture for an immersive session also prevents the current proposal from providing much help there.
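To make that concrete, here's a minimal sketch of handing camera pixels to a CV worker with the current proposal, assuming Chrome's implementation where `XRWebGLBinding.getCameraImage()` returns a texture that is only valid for the current animation frame; `gl`, `glBinding`, `refSpace` and the `readCameraTextureIntoBuffer` helper are placeholders for setup not shown:

```javascript
// Sketch only - glBinding, refSpace, gl and readCameraTextureIntoBuffer are
// assumed to be set up elsewhere; the point is the synchronous readback.
const cvWorker = new Worker('cv-worker.js');

function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(refSpace);
  if (!pose) return;

  for (const view of pose.views) {
    if (!view.camera) continue;

    // The camera texture is only valid for the duration of this callback...
    const cameraTexture = glBinding.getCameraImage(view.camera);

    // ...so the pixels have to be copied out synchronously on the main
    // thread (e.g. render to a framebuffer + readPixels) before they can
    // be posted to the worker, rather than the worker pulling frames itself.
    const pixels = readCameraTextureIntoBuffer(
        gl, cameraTexture, view.camera.width, view.camera.height);
    cvWorker.postMessage(
        { time, width: view.camera.width, height: view.camera.height, pixels },
        [pixels.buffer]);
  }
}
```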
Even if the scope is limited to solving for "smartphone based full-frame render effects", I don't feel that an immersive-ar session with this extension is a great solution for it, as the browser will still be spending cycles compositing the plain camera image underneath the content, even if the content renders the full camera image itself with some special effect shader.
I'd be keen to hear the use cases that 8th Wall are using the current API for, and how well you feel it meets those needs.
For me this is one aspect of the wider issue of how well-suited the current WebXR API is for smartphone AR use cases. My ideal solution for handheld AR revolves around exposing native tracking data as a camera stream with metadata, and leaving it up to the site to render both the camera frame and content to a WebGL canvas. https://github.com/immersive-web/webxr-ar-module/issues/78
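For illustration only, the rough shape I have in mind looks something like the sketch below. The camera rendering is all standard getUserMedia/WebGL that libraries already do today; the per-frame pose/intrinsics from the native tracker, along with the `drawCameraQuad` and `drawContent` helpers, are the hypothetical parts:

```javascript
// Illustrative sketch - the page owns the canvas and composites both
// layers itself, instead of the browser drawing the camera underneath.
const video = document.createElement('video');
video.srcObject = await navigator.mediaDevices.getUserMedia({
  video: { facingMode: 'environment' }
});
await video.play();

const gl = document.querySelector('canvas').getContext('webgl');
const cameraTex = gl.createTexture();

function render() {
  // Upload the latest camera frame and draw it full-screen with whatever
  // effect shader the page wants.
  gl.bindTexture(gl.TEXTURE_2D, cameraTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  drawCameraQuad(gl, cameraTex);                 // hypothetical helper
  // Hypothetical: the UA would supply a per-frame camera pose and
  // intrinsics here, so virtual content lines up with the real world.
  drawContent(gl /*, pose from native tracker */); // hypothetical helper
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

// Since it's an ordinary canvas, media capture also falls out naturally:
// const recordedStream = gl.canvas.captureStream(30);
```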
Sorry, I didn't mean that to sound quite so critical... if this currently proposed API solves real-world problems in a reasonable way then there's no harm in trying to move forward with it alongside other potential APIs for other use cases, as the explainer itself mentions.
I very much appreciate all the work that's gone into the current proposal and getting an implementation out. It looks like the session time for this one at the f2f is one I can probably attend from London, so I'm looking forward to that discussion.
/facetoface
Tagging for further discussion in 2023 F2F
As mentioned at the face to face, I'm interested in collaborating on this using Wolvic for OpenXR wearable AR devices: experimenting with the use cases demonstrated on mobile AR, and making a raw camera access feed of a virtual environment available, to help show the POC use cases in a wearable XR context and move things forward.
I'm using WebXR for handheld smartphone AR with remote object recognition for my master's thesis, and the option to enable autofocus without having to use the experimental image tracking would make development much easier.
"Raw camera access" just relates to giving the page access to the textures from the underlying AR implementation (ARCore in the case of Android Chrome). Auto-focus control would be a separate issue. Unless your objects are particularly small I find the fixed-focus default of ARCore works OK though - have using any other devices?
We have a POC implementation of the raw camera access API (Chrome) and a consumer (8th Wall). What is needed next to move this forward?
https://github.com/immersive-web/raw-camera-access