immersive-web / proposals

Initial proposals for future Immersive Web work (see README)

Displaying non XR HTML5 content inside XR #11

Closed pmanixxx closed 6 years ago

pmanixxx commented 6 years ago

An easy way to render non-XR HTML5 content inside XR (an iframe or a similar solution) that users could interact with via controllers could dramatically increase the pace of XR adoption, encouraging and facilitating wide-scale development, paving the way for many use cases, and serving as a solution for the transitional period.

johnpallett commented 6 years ago

To clarify, this proposal is more than a display model for the DOM in AR/VR, it also includes an interactivity model for the DOM in AR/VR (i.e. It's not just rendering the DOM to a texture, it's also a model for how controllers would interact with the web page). Is that correct?

pmanixxx commented 6 years ago

Yes, correct. I see HMDs not only as app-specific gadgets but as a natural evolution of the screen we use today for everyday tasks and work. They will eventually match monitors in resolution, image quality, and comfort of use, and they already offer much more in terms of functionality.

I imagine, however, that this might be split into two features: first, as you suggested, being able to see the HTML/CSS/JS content; and second, being able to interact with it the way we interact with content on current displays, desktop or mobile. In my opinion this is crucial for the period of transition, just as mobile devices are backward compatible by being able to display non-responsive websites in their desktop format. Such a feature would make HMDs fully functional browsing tools and speed up adoption of the technology by offering important functionality. At present we have controllers; in the future we will probably have hand-tracking systems, but the model by which both interact with content could be developed earlier and upgraded later, the same way trackpads evolved.

TrevorFSmith commented 6 years ago

The tactic we're taking with Firefox Reality (hand waving here) is to treat the 2D web as initially more important than the immersive web. So, we're putting a lot of effort into providing the comfortable windows + tabs that people expect, tweaked to work well in headsets with wand and hand input. It's true that we eventually need to figure out how WebXR experiences can show and interact with DOM-based flat content in their scenes, but there's an intermediary step until that's nailed down in a standard: we can make it possible for people in immersive WebXR experiences to pause the render and interact with the 2D browser, much like people currently do with tethered HMDs. They can push the headset up on their foreheads and interact with the flat web page that is providing the WebXR session. The same can be true for stand-alone headsets, where the user can show and hide the 2D UI without totally exiting the immersive WebXR session. This gives app developers the ability to provide flat, DOM-based UIs that are used during immersive experiences. It also allows users to check email and social networks without completely ending the WebXR session.
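The "show the 2D UI without ending the session" flow described above maps onto the WebXR Device API's session visibility model, where a session can become `visible-blurred` or `hidden` while the browser's own 2D UI is in front. A minimal sketch of how a page might react (the `shouldShow2DUI` helper and the `show-flat-ui` CSS class are hypothetical names, not part of any spec):

```javascript
// Hypothetical policy helper: show the page's flat DOM UI whenever the
// immersive session is not fully visible (e.g. the user has brought up
// the browser's 2D overlay or pushed the headset up).
// WebXR defines three visibility states: "visible", "visible-blurred",
// and "hidden".
function shouldShow2DUI(visibilityState) {
  return visibilityState !== 'visible';
}

// Wire it up only where WebXR is actually available (guarded so the
// sketch also loads outside a browser).
if (typeof navigator !== 'undefined' && navigator.xr) {
  navigator.xr.requestSession('immersive-vr').then((session) => {
    session.addEventListener('visibilitychange', () => {
      document.body.classList.toggle(
        'show-flat-ui',
        shouldShow2DUI(session.visibilityState)
      );
    });
    // session.end() would fully return the user to the 2D page.
  });
}
```

Whether the browser ever reports `visible-blurred` for its own overlay UI is implementation-dependent; the point is that the page can keep its session alive while the user is looking at flat content.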

pmanixxx commented 6 years ago

to treat the 2D web as initially more important than the immersive web.

Exactly what I have in mind! Step by step.

So, we're putting a lot of effort into providing the comfortable windows + tabs that people expect, tweaked to work well in headsets with wand and hand input.

Keep up the good work!

They can push the headset up on their foreheads and interact with the flat web page that is providing the WebXR session.

In my opinion this is a workaround. It might work for some users and use cases, but could on the other hand spoil the experience for others. Still, merely having the option to test whether it works for you, and to learn from it, has a lot of value in itself.

Some other use cases I can imagine: a fully functional VR or AR desktop for developers (provided there is a good enough text-input solution) with multiple windows: a focused main window with a code editor, plus others showing live previews of the website's appearance on three different devices, reflecting every change in the code. Or they could emulate different screen resolutions of the same device. There could also be multiple open tabs serving normal browser content, all at the same time within one desktop. And there will come a moment when HMD resolution is sufficient for websites that are not maximized to simply sit somewhere, taking up a small part of the space, and still be legible: no keyboard shortcuts needed to maximize and minimize, just a head movement.

A CCTV security app with, say, a 6x4 grid of live feeds. Each feed could be pointed at with a controller, which would enlarge it and bring it into focus, so the user can control the camera's movement with the controller, or even better, if the cameras are 360-capable, camera movement could follow the user's gaze. There could also be a browser window so the user doesn't need to leave the VR session to browse the web on an external device. Security personnel would probably be more immersed, and they would only need one HMD instead of six screens with four split-screens each. The setup would be mobile, and the work could effectively be done off-site.

TrevorFSmith commented 6 years ago

Is there more to discuss here or should I close this Issue?

TrevorFSmith commented 6 years ago

I'll close this now, but feel free to reopen it if there is more to discuss.