I'm wondering about the possibility of polyfilling AR, akin to how Cardboard is polyfilled, but for "magic window" (not stereo) AR.
There are a few reasons to do this:
- When developing AR content, it is often convenient to use desktops for debugging. Yes, you can't get at the platform capabilities (e.g., full motion control, anchors, etc.), but I have personally found that doing some of the debugging of content can be much easier on desktop. Having an app "see that there is AR" and then follow that code path, even though there are limitations, is useful.
- On mobile, with WebRTC available, doing simple magic window AR (especially for geospatial AR or rotation-only situations) is something people ask for a lot.
- Simple marker-based AR (e.g., ARToolKit, OpenCV ArUco markers, and probably more) is possible using video from WebRTC streams. Again, as a fallback for developers who want to reach more users in the near term, providing the basic setup and display of video, exposing the device orientation via the Cardboard code, and then letting devs do the trivial work of grabbing video frames and tracking with them, may be useful.
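As a sketch of what "exposing the device orientation via the Cardboard code" boils down to, here is the standard conversion from `deviceorientation` Euler angles (intrinsic Z-X'-Y'' order, per the DeviceOrientation Event spec) to a quaternion that could drive a magic-window camera. The function name is hypothetical, not part of the polyfill:

```javascript
// Convert deviceorientation alpha/beta/gamma (degrees) to a unit
// quaternion. Per the DeviceOrientation Event spec, the rotation is
// intrinsic ZXY: alpha about Z, then beta about X', then gamma about Y''.
function quaternionFromDeviceOrientation(alphaDeg, betaDeg, gammaDeg) {
  const degToRad = Math.PI / 180;
  const halfZ = (alphaDeg * degToRad) / 2; // alpha: rotation about Z
  const halfX = (betaDeg * degToRad) / 2;  // beta: rotation about X
  const halfY = (gammaDeg * degToRad) / 2; // gamma: rotation about Y
  const cX = Math.cos(halfX), cY = Math.cos(halfY), cZ = Math.cos(halfZ);
  const sX = Math.sin(halfX), sY = Math.sin(halfY), sZ = Math.sin(halfZ);
  // Quaternion for intrinsic ZXY rotation order
  return {
    x: sX * cY * cZ - cX * sY * sZ,
    y: cX * sY * cZ + sX * cY * sZ,
    z: cX * cY * sZ + sX * sY * cZ,
    w: cX * cY * cZ - sX * sY * sZ,
  };
}
```

In a real magic-window mode this would run on each `deviceorientation` event (with an extra screen-orientation correction on rotated devices), feeding the resulting quaternion to the renderer's camera.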
There are significant limitations, most obviously that the camera intrinsics aren't known, so the field of view of the video isn't known. But given the ability to build the polyfill with different features turned on / off (right now, WebVR and Cardboard), it seems like a potentially useful addition.
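One way a polyfill could work around the unknown intrinsics is to let the developer assume a plausible vertical field of view and derive the projection matrix from that guess. A minimal sketch, where the function name and the idea of defaulting to roughly 60° for phone cameras are assumptions of mine, not anything the polyfill defines:

```javascript
// Build a column-major WebGL-style perspective projection matrix from a
// guessed vertical field of view, since the real camera intrinsics are
// unavailable. A fovYDeg around 60 is a rough guess for phone cameras;
// content will register imperfectly against the video until the true
// intrinsics are known.
function perspectiveFromGuessedFov(fovYDeg, aspect, near, far) {
  const f = 1 / Math.tan((fovYDeg * Math.PI) / 360); // cot(fovY / 2)
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,
    0, 0, 2 * far * near * nf, 0,
  ];
}
```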