immersive-web / webxr

Repository for the WebXR Device API Specification.
https://immersive-web.github.io/webxr/

Should an XRLayer support multiple framebuffers? #397

Closed jespertheend closed 6 years ago

jespertheend commented 6 years ago

Hi, my apologies if this has been asked before; I couldn't find anything about it.

It seems like right now the UA creates only one framebuffer and then requests that multiple views be drawn to it using XRFrame.views, each view having its own viewport and matrices. Most devices require only two views, both with the same resolution.
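That per-view loop can be sketched roughly as follows. This is a minimal sketch, not code from the spec: it assumes an active session with a base layer already set up, an existing reference space `refSpace`, a WebGL context `gl`, and a hypothetical `drawScene` helper.

```javascript
// Minimal sketch of the single-framebuffer model: one framebuffer,
// one viewport per view. `refSpace`, `gl`, and drawScene() are assumed.
function onXRFrame(time, frame) {
  const session = frame.session;
  session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(refSpace);
  if (!pose) return;

  const layer = session.renderState.baseLayer;
  gl.bindFramebuffer(gl.FRAMEBUFFER, layer.framebuffer);

  for (const view of pose.views) {
    // Each view gets its own sub-rectangle of the shared framebuffer.
    const vp = layer.getViewport(view);
    gl.viewport(vp.x, vp.y, vp.width, vp.height);
    drawScene(view.projectionMatrix, view.transform.inverse.matrix);
  }
}
```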

It seems to me that devices in the future might require multiple views with different resolutions, or even different framebuffers. For example, a camera that captures a user wearing a VR headset in front of a green screen, in order to create a mixed reality video. Or a multi-user VR application where two users are connected to the same desktop with two different headsets (something that seems unlikely now but might be feasible in the future). In that case each headset might use a different framebuffer while still being attached to the same gl instance.

I suppose it is possible to render multiple views to one framebuffer in these situations, but that seems less efficient when the views have different resolutions: part of the framebuffer would be left unused.
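The unused space is easy to quantify. A small sketch (a hypothetical helper, not part of the API) that packs views side by side into one framebuffer and counts the pixels left over:

```javascript
// Hypothetical helper: pack views side by side in one framebuffer whose
// height is the tallest view, and count the pixels left unused.
function wastedPixels(views) {
  const totalWidth = views.reduce((sum, v) => sum + v.width, 0);
  const maxHeight = Math.max(...views.map((v) => v.height));
  const usedPixels = views.reduce((sum, v) => sum + v.width * v.height, 0);
  return totalWidth * maxHeight - usedPixels;
}

// Two equal eye buffers waste nothing; a mismatched capture view does.
wastedPixels([{ width: 1440, height: 1600 }, { width: 1440, height: 1600 }]); // → 0
wastedPixels([{ width: 1440, height: 1600 }, { width: 1920, height: 1080 }]); // → 998400
```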

Perhaps being able to access multiple devices at the same time is a better solution for these situations, though I'm not sure XR.requestDevice() currently supports this.

I'm not sure whether this is a real requirement; maybe rendering everything to one framebuffer works just fine.

NellWaliczek commented 6 years ago

No apologies necessary! Thanks for chiming in!
There are some changes coming to requestDevice() that will probably make that part of your comment moot, so keep an eye on the repo for that PR shortly. That said, we made a call about a year ago to explicitly simplify the WebXR API by not supporting the simultaneous use of multiple XR devices by a single document. The reasoning is that the far more common multiplayer scenario would be across a network, not least because of graphics processing requirements and the behavior of the existing underlying XR platforms we are implementing on top of. Maintaining theoretical support for that functionality was also putting a significant burden on API design simplicity.

As for the question of multiple framebuffers for a single device: yes, absolutely, there's no guarantee that all views have the same resolution. The single-framebuffer design is an artifact of an attempt to enable multiview rendering on WebGL 1.0 given some of its limitations. However, for a handful of reasons we're finding that we need to take a different approach, and there are currently two open issues (#361 and #317) tracking the reevaluation of that initial design. Specifically, one of them asks whether it's worth doing something explicitly different for WebGL 2.0 that would allow us to use array textures.

Hopefully that addresses your concerns! If not, please feel free to reopen and we can dig into it further!
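For reference, the WebGL 2.0 approach mentioned above would let each view render into its own layer of a 2D array texture instead of a sub-rectangle of one shared texture. A rough sketch of the underlying GL mechanism, assuming a WebGL 2 context `gl` and hypothetical sizes (this is not the eventual API shape, just the raw WebGL 2.0 calls involved):

```javascript
// Sketch of the WebGL 2.0 mechanism: one 2D array texture with one layer
// per view, each layer attached to the framebuffer in turn. The sizes
// and drawView() helper are assumptions for illustration.
const layers = 2; // e.g. left and right eye
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, tex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1440, 1600, layers);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);

for (let layer = 0; layer < layers; layer++) {
  // Attach this view's layer as the color target, then render the view.
  gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, tex, 0, layer);
  // drawView(layer); // hypothetical per-view draw
}
```

With a multiview extension the per-layer loop can even collapse into a single draw call, which is the efficiency win being weighed in those issues.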

jespertheend commented 6 years ago

Thanks for the clarification. That definitely clears some things up! Supporting this with texture arrays seems like a clever idea. I'll keep an eye out for that.