speigg opened this issue 6 years ago
I'm wondering if @TrevorFSmith or @dmarcos might have insight here on what direction might end up happening in WebXR.
This is something like what we'd talked about at the last WebXR meeting: if custom platform capabilities exist, they are likely exposed by custom APIs that integrate with the WebXR Device API by exposing anchors (or coordinate systems or whatever they end up being called).
Gheric and I were talking about this because for the project he/we are doing at GT, with Traclabs, we are considering transitioning to WebXR instead of using argon.js ... however, we will need the tracking capabilities of Vuforia that are embedded in Argon4. I'm excited about shifting Argon to exposing WebXR, as this will give us the opportunity (force us?) to experiment with how to expose platform-specific computer vision capabilities.
It's definitely too early to build anything except small examples on WebXR.
It would be a distraction for both projects to think about how to mutate the Argon worldview (with Vuforia) into WebXR. Too many decisions in Argon are almost, but not quite, like WebXR, so it would be massively confusing even to have the conversations about how that would work.
Since we'll probably move away from this polyfill code base and use AR extensions to the W3C CG's polyfill (and eventually native implementations) and that all won't happen for a few months, I'd suggest that Argon stay its course until the WebXR standards process and implementations are farther down the road.
I'll take that as a "no insight on what the trajectory might be." I didn't think it'd been discussed, which is why I asked.
As for the polyfill: I think the rumors of its death are greatly exaggerated, to coin a phrase. As you say, it may be months until there are any native implementations, and it's unclear when an "official" polyfill might appear.
But it is clear that the implementation here is reasonably aligned with whatever will happen with the standard, and shifting from this polyfill to the eventual standard will be pretty trivial. Waiting and switching then will add unnecessary delays.
> I'll take that as a "no insight on what the trajectory might be." I didn't think it'd been discussed, which is why I asked.
I don't understand your statements. I know what the trajectory will be: The W3C WG polyfill will be the active codebase and we'll extend it with AR features, first as polyfill extensions and then in native code. That's pretty much a done deal as far as the WG is concerned, and it's the right direction because the WebVR 1.1 polyfill already has more functionality and stability than this code base. Most of what needs to happen for the first "train" of WebXR Device API features is renaming, shuffling around setup and teardown, and adding the browser provided GL context, so using the 1.1 polyfill to create the WebXR Device API polyfill makes total sense.
For the second "train" that includes AR features and perhaps more layers, we'll make another repo under the W3C org and run experiments with APIs there.
> But it is clear that the implementation here is reasonably aligned with whatever will happen with the standard
That's not clear at all. We already know that the frames of reference and coordinate systems will be different, anchor finding and vending will be different, it's unclear what will happen with CV, it's unclear what will happen with platform specific APIs like vuforia, etc.
Sorry I wasn't clear.
I asked about Gheric's question regarding one idea about custom extensions; that's all I was asking about. I know the WebXR path, and the two trains, and I know you do too. ;)
I also understand a lot will change. But in the grand scheme of things, any WebVR/WebXR API is (vastly) more closely aligned with where WebXR will end up than the architecture of argon.js is. If Gheric wants to end up with something that looks like WebXR, starting here is a reasonable step, even if it will require another (significantly smaller) round of changes in a few months.
(we can talk about "why" offline, if you want to know more)
Oh! I totally misunderstood the question.
no problem.
I wanted to let you know that I am working on adding support for the Argon Browser to the polyfill, and plan to expose a way to do marker tracking with Vuforia. I'm not sure yet what that should look like, but please suggest anything if you have ideas.
My thought is that there could be an extension API similar to WebGL extensions (https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Using_Extensions), perhaps by exposing a similar "getExtension" API on the Reality class.
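As a rough illustration only, here is a minimal sketch of what a WebGL-style extension registry on a Reality class might look like. The names `RealityExtension`, `VuforiaExtension`, and the extension string `"ARGON_vuforia"` are hypothetical placeholders, not part of any spec or of the polyfill:

```typescript
// Hypothetical sketch of a WebGL-style getExtension API on a Reality class.
// All names here are illustrative assumptions, not real WebXR/Argon API.

interface RealityExtension {
  readonly name: string;
}

// A platform-specific extension object; real methods like
// loadDataSet(url) would be exposed here.
class VuforiaExtension implements RealityExtension {
  readonly name = "ARGON_vuforia";
}

class Reality {
  private extensions = new Map<string, RealityExtension>();

  constructor() {
    // A platform (e.g. the Argon Browser) registers what it supports.
    this.extensions.set("ARGON_vuforia", new VuforiaExtension());
  }

  // Mirrors WebGLRenderingContext.getExtension: returns the extension
  // object if the platform supports it, otherwise null.
  getExtension(name: string): RealityExtension | null {
    return this.extensions.get(name) ?? null;
  }
}
```

An app would then feature-detect at runtime, e.g. `const vuforia = reality.getExtension("ARGON_vuforia");` and fall back gracefully when it gets `null`, just as with WebGL extensions.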
Anyway, after loading and activating a Vuforia dataset, the trackables contained in that dataset would have to be made available somehow. Trackables can have a known or unknown pose; however, right now it doesn't seem to be the case that XRAnchors can have an "unknown" pose state, so I'm not sure of the best way to make them available as XRAnchors. Would it be okay to extend XRAnchor to have various states, so that applications can hold onto a single anchor reference as it gains and loses tracking?