> Sessions of type `inline` and `immersive-vr` MUST use the blending mode most effective for making the user's environment not visible. This is opaque mode when possible, but a headset with an optical see-through display MAY choose additive if the hardware is not capable of opaque mode. Applications can use the reported additive blend mode to adjust color choices if necessary to help support this fallback VR mode.
This is in conflict with the explainer which says that the blending mode should be used to NOT render a background so the user's environment is visible.
@klausw It seems that you are proposing that on AR devices, the authors should fill the frame with 100% white and then composite the scene over it. Is that correct?
@cabanier wrote:

> This is in conflict with the explainer which says that the blending mode should be used to NOT render a background so the user's environment is visible.
Good point. The explainer will also need to be updated to match the intended behavior; it looks like it hasn't been fully updated to match the changes to the spec. The sentence "Similarly, if the developer knows that the environment will be visible they may choose to not render an opaque background" is intended for "immersive-ar". And yes, this is potentially confusing if it leads people to conclude that "immersive-vr" on an additive display is an AR mode.
> @klausw It seems that you are proposing that on AR devices, the authors should fill the frame with 100% white and then composite the scene over it. Is that correct?
It's up to the application developer, but I wouldn't make this a general recommendation. It is likely to look strange and would only shift the problem - instead of black objects being invisible, this would make white objects invisible. I don't expect that VR-focused applications would generally do much to support additive displays, but there may be some low-hanging fruit to improve compatibility such as avoiding pure-black user interface elements. Elsewhere I saw the suggestion that apps could skip drawing black shadows to save performance if those would end up invisible anyway.
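To make that kind of low-effort compatibility tweak concrete, here is a rough, non-normative sketch; it assumes a three.js-style `renderer` and `scene`, and the `isShadowOnly` flag in `userData` is a made-up placeholder for shadow-only geometry:

```js
// Hypothetical illustration only: skip black-on-black work when the display is additive.
function adjustForAdditiveDisplay(session, renderer, scene) {
  if (session.environmentBlendMode === 'additive') {
    // On additive optics, pure black renders as fully transparent, so black
    // shadow geometry and a black clear color contribute nothing visible.
    scene.traverse((obj) => {
      if (obj.isMesh && obj.userData.isShadowOnly) obj.visible = false;
    });
    renderer.setClearColor(0x000000, 0);
  }
}
```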
I agree. These are all indications that `environmentBlendMode` is only useful in AR environments. We should update the spec so this attribute is only available during an AR session. (In which case, I don't really know when `opaque` would be an option.)
@klausw In light of @NellWaliczek's email, it's becoming more urgent that this is settled quickly.
I strongly believe that no vendor is going to want to account for 'additive' or 'source-over' blending during an 'immersive-vr' session. The only reason they would do this would be to hack in AR, which we all agree we do not want.
We should either remove the blendmode if the session is not AR (which I presume means completely removing it from the spec), or we should define that it must be 'opaque' during an 'immersive-vr' session.
I think we should go with the simplest solution (which may be what's in the text, adjusted slightly):

- `environmentBlendMode` is set to describe the characteristics of the display (as it is meant to)
- it is provided (read only, as @klausw suggests) in all modes, and always tells the truth
- there should be text added for developers being very explicit that the goal is to enable better experiences on the device, in the mode selected. If an `immersive-ar` session is not available, the author should not try to fake it in `immersive-vr`. If an `immersive-vr` session is not available, the author should not render full screen immersive-vr. Authors should be reminded that a UA might give the user control over what modes are available to the page, and a well behaved page should respect that.
- UAs should provide all modes they can, and should consider giving users control over what is available when possible

I think the purpose of the spec is to define how UAs should behave, and why, which is aimed at both implementors and authors. A determined author (who wants to detect they are on a display like an ML1 or HL2 and hack `immersive-vr` to do AR) will find a way to determine that by looking at browserID strings, display resolution, controller characteristics, etc. Just like they do now on mobile devices, where there are libraries to help developers "best guess" the device, even though that information is mostly hidden.

As a user, I will want to be able to run `immersive-vr` applications on AR displays. Not because it's a great experience, but because if that's the only display I have, and there is content I want (or need) to experience, it would be unreasonable for the UA to prevent me from doing it. Think about the times you want to access desktop sites on a mobile phone: it's great when a website gives you a nice mobile site first, but often those sites don't have all the features, so being able to (possibly painfully) navigate the desktop site is sometimes necessary. When we move past "demo" content to a world where people are actually using these devices to accomplish things they care about, we aren't going to want to block content on devices just because the experience isn't perfect.
As @klausw also pointed out, the `environmentBlendMode` provides one very useful bit of information to developers: the physical characteristics of the optics. Knowing how content will be perceived gives the author the ability to adjust it to be as good as possible. It will be a very useful feature of a toolkit if it detects that you are doing VR on a translucent AR display and adjusts the colors (perhaps by increasing luminance contrast, perhaps by changing shadow colors, whatever) so that the content is more easily understood. It's still a VR experience.
Anecdote: for a while, I didn't have a VR display at home, but did have a Hololens sitting around. Right after WebVR was released on Edge, I was able to try out a VR experience on my Hololens (a link someone had sent me they wanted my feedback on). Was it "great"? Not as nice as a wide FOV VR display, not as nice as a high contrast opaque display. But I "got the experience" that I needed to get, and gave the person the feedback they wanted.
This is something the 2D web naturally enables and we should make sure we enable it on the immersive web where possible. I am very much against anything that prevents users from accessing as much XR content as they can on the device they have, within reason. While a lot of AR content might not be reasonably accessed on a VR device (some might be faked in various ways, but most would actually need to see the user's physical space), there is almost no VR content that should be prevented from running on an AR device. It's not ideal, it might kill performance and battery life, but the goal isn't to use an AR device as a VR device for hours and hours. It's about access.

One of the "secret sauces" of the web is access: platforms are free to control what content is in their app stores, for example, but we should be aiming for the immersive web to give users access to as much content as possible.
> I think we should go with the simplest solution (which may be what's in the text, adjusted slightly)
>
> - `environmentBlendMode` is set to describe the characteristics of the display (as it is meant to)
yes
> - it is provided (read only, as @klausw suggests) in all modes, and always tells the truth
yes
> - there should be text added for developers being very explicit that the goal is to enable better experiences on the device, in the mode selected. If an `immersive-ar` session is not available, the author should not try to fake it in `immersive-vr`. If an `immersive-vr` session is not available, the author should not render full screen immersive-vr. Authors should be reminded that a UA might give the user control over what modes are available to the page, and a well behaved page should respect that.
I don't think a note in a spec will be sufficient. Authors of sites and frameworks will figure out that they can use this blend mode attribute to mimic AR and they are not going to be bothered by strong wording in a spec. If the UA can't enforce it, a warning is meaningless.
> - UAs should provide all modes they can, and should consider giving users control over what is available when possible
I can't think of any use case where an 'additive' or 'source-over' device is going to want to support 'immersive-vr'. Likewise, what 'opaque' device is going to support 'immersive-ar'?
> I think the purpose of the spec is to define how UAs should behave, and why, which is aimed at both implementors and authors. A determined author (who wants to detect they are on a display like an ML1 or HL2 and hack `immersive-vr` to do AR) will find a way to determine that by looking at browserID strings, display resolution, controller characteristics, etc. Just like they do now on mobile devices, where there are libraries to help developers "best guess" the device, even though that information is mostly hidden.
Yes, a determined author can always use sidechannels to infer device capabilities. This shouldn't change how we design a spec though.
> As a user, I will want to be able to run `immersive-vr` applications on AR displays. Not because it's a great experience, but because if that's the only display I have, and there is content I want (or need) to experience, it would be unreasonable for the UA to prevent me from doing it. Think about the times you want to access desktop sites on a mobile phone: it's great when a website gives you a nice mobile site first, but often those sites don't have all the features, so being able to (possibly painfully) navigate the desktop site is sometimes necessary.
This is exactly what I fear will happen and which @kearwood and @thetuvix also voiced concerns about: we don't want to pollute the AR ecosystem with hacked VR content. As far as I know, we were all in agreement that this is bad.
> When we move past "demo" content to a world where people are actually using these devices to accomplish things they care about, we aren't going to want to block content on devices just because the experience isn't perfect.
Magic Leap will block `immersive-vr` content. The majority of VR sites look bad on an additive AR display and we want to do what we can to avoid giving users a bad experience.
I can't speak for authors but I bet that they too don't want to get complaints that their VR site is unusable on an AR device. (Last thing we want is for sites to say that they only work with certain VR devices and block others.)
> As @klausw also pointed out, the `environmentBlendMode` provides one very useful bit of information to developers: the physical characteristics of the optics. Knowing how content will be perceived gives the author the ability to adjust it to be as good as possible. It will be a very useful feature of a toolkit if it detects that you are doing VR on a translucent AR display and adjusts the colors (perhaps by increasing luminance contrast, perhaps by changing shadow colors, whatever) so that the content is more easily understood. It's still a VR experience.
Magic Leap One supports WebXR. If you have access to one, try some of the three.js demos that fill the display. It's definitely NOT a VR experience if you can see a disjointed real world mixed with your scene.
> Anecdote: for a while, I didn't have a VR display at home, but did have a Hololens sitting around. Right after WebVR was released on Edge, I was able to try out a VR experience on my Hololens (a link someone had sent me they wanted my feedback on). Was it "great"? Not as nice as a wide FOV VR display, not as nice as a high contrast opaque display. But I "got the experience" that I needed to get, and gave the person the feedback they wanted.
>
> This is something the 2D web naturally enables and we should make sure we enable it on the immersive web where possible. I am very much against anything that prevents users from accessing as much XR content as they can on the device they have, within reason. While a lot of AR content might not be reasonably accessed on a VR device (some might be faked in various ways, but most would actually need to see the user's physical space), there is almost no VR content that should be prevented from running on an AR device. It's not ideal, it might kill performance and battery life, but the goal isn't to use an AR device as a VR device for hours and hours. It's about access.
>
> One of the "secret sauces" of the web is access: platforms are free to control what content is in their app stores, for example, but we should be aiming for the immersive web to give users access to as much content as possible.
I'll reiterate what I've been saying:
* bring back `immersive-ar` so authors can voice the intent of the session (see [issue 786](https://github.com/immersive-web/webxr/issues/786))
* limit the blendmode to `opaque` for VR and `source-over` or `additive` for AR to discourage hacked AR

> As a user, I will want to be able to run `immersive-vr` applications on AR displays. Not because it's a great experience, but because if that's the only display I have, and there is content I want (or need) to experience, it would be unreasonable for the UA to prevent me from doing it. Think about the times you want to access desktop sites on a mobile phone: it's great when a website gives you a nice mobile site first, but often those sites don't have all the features, so being able to (possibly painfully) navigate the desktop site is sometimes necessary.
>
> This is exactly what I fear will happen and which @kearwood and @thetuvix also voiced concerns about: we don't want to pollute the AR ecosystem with hacked VR content. As far as I know, we were all in agreement that this is bad.
We are most definitely not in agreement that this is bad.
> Magic Leap will block `immersive-vr` content. The majority of VR sites look bad on an additive AR display and we want to do what we can to avoid giving users a bad experience.
That is, of course, your choice as a browser vendor. But, that is not a justification for enforcing this arbitrary limitation in the spec, and other browser vendors should be free to make other (reasonable) choices. Browser vendors might also choose to disable modes by default and allow users to consciously choose to enable them.
> As @klausw also pointed out, the `environmentBlendMode` provides one very useful bit of information to developers: the physical characteristics of the optics. Knowing how content will be perceived gives the author the ability to adjust it to be as good as possible. It will be a very useful feature of a toolkit if it detects that you are doing VR on a translucent AR display and adjusts the colors (perhaps by increasing luminance contrast, perhaps by changing shadow colors, whatever) so that the content is more easily understood. It's still a VR experience.
>
> Magic Leap One supports WebXR. If you have access to one, try some of the three.js demos that fill the display. It's definitely NOT a VR experience if you can see a disjointed real world mixed with your scene.
Like I said, I've tried VR experiences (on Hololens 1, using WebVR). I am speaking from experience, as a user of AR/VR and an author of countless AR experiences.
The anecdote I gave seemed pretty clear: this isn't at all about VR experiences being great on ML1 or HL2 or whatever. It's about enabling users to get work done. If the only device a user has on hand is a see-through AR display, it makes absolutely no sense to arbitrarily say "you can't run that here". As a user, I'd be furious if I spent thousands of dollars on a display, and needed to run something to get my job done, knowing full well that it is going to "kinda suck because it's not fully immersive", and the browser said "nope, sorry, we don't think you should run VR stuff on our display that is fully capable of running VR stuff".
> I'll reiterate what I've been saying:
>
> * bring back `immersive-ar` so authors can voice the intent of the session (see [issue 786](https://github.com/immersive-web/webxr/issues/786))
100% agree.
> * limit the blendmode to `opaque` for VR and `source-over` or `additive` for AR to discourage hacked AR
100% disagree. blendmode should accurately convey the characteristics of the device.
I think we pretty much agree on most things, btw, with the exception being whether your (and ML's) particular preference should be something you put in your browser, or in the spec. I think the former is fine, but the latter isn't as obvious.
> As a user, I will want to be able to run `immersive-vr` applications on AR displays. Not because it's a great experience, but because if that's the only display I have, and there is content I want (or need) to experience, it would be unreasonable for the UA to prevent me from doing it. Think about the times you want to access desktop sites on a mobile phone: it's great when a website gives you a nice mobile site first, but often those sites don't have all the features, so being able to (possibly painfully) navigate the desktop site is sometimes necessary.
>
> This is exactly what I fear will happen and which @kearwood and @thetuvix also voiced concerns about: we don't want to pollute the AR ecosystem with hacked VR content. As far as I know, we were all in agreement that this is bad.
>
> We are most definitely not in agreement that this is bad.
I meant "the people who voiced their opinion during the meeting and on the github issue thought this was bad" :-) Unfortunately @NellWaliczek got this topic reversed in her summary email.
> Magic Leap will block `immersive-vr` content. The majority of VR sites look bad on an additive AR display and we want to do what we can to avoid giving users a bad experience.
>
> That is, of course, your choice as a browser vendor. But, that is not a justification for enforcing this arbitrary limitation in the spec, and other browser vendors should be free to make other (reasonable) choices. Browser vendors might also choose to disable modes by default and allow users to consciously choose to enable them.
I didn't say that the spec should limit browser choice. If, for instance, Hololens has a different opinion, they are free to allow 'immersive-vr'.
> As @klausw also pointed out, the `environmentBlendMode` provides one very useful bit of information to developers: the physical characteristics of the optics. Knowing how content will be perceived gives the author the ability to adjust it to be as good as possible. It will be a very useful feature of a toolkit if it detects that you are doing VR on a translucent AR display and adjusts the colors (perhaps by increasing luminance contrast, perhaps by changing shadow colors, whatever) so that the content is more easily understood. It's still a VR experience.
>
> Magic Leap One supports WebXR. If you have access to one, try some of the three.js demos that fill the display. It's definitely NOT a VR experience if you can see a disjointed real world mixed with your scene.
>
> Like I said, I've tried VR experiences (on Hololens 1, using WebVR). I am speaking from experience, as a user of AR/VR and an author of countless AR experiences.
>
> The anecdote I gave seemed pretty clear: this isn't at all about VR experiences being great on ML1 or HL2 or whatever. It's about enabling users to get work done. If the only device a user has on hand is a see-through AR display, it makes absolutely no sense to arbitrarily say "you can't run that here". As a user, I'd be furious if I spent thousands of dollars on a display, and needed to run something to get my job done, knowing full well that it is going to "kinda suck because it's not fully immersive", and the browser said "nope, sorry, we don't think you should run VR stuff on our display that is fully capable of running VR stuff".
>
> > I'll reiterate what I've been saying:
> >
> > * bring back `immersive-ar` so authors can voice the intent of the session (see [issue 786](https://github.com/immersive-web/webxr/issues/786))
>
> 100% agree.
>
> > * limit the blendmode to `opaque` for VR and `source-over` or `additive` for AR to discourage hacked AR
>
> 100% disagree. blendmode should accurately convey the characteristics of the device.
>
> I think we pretty much agree on most things, btw, with the exception being whether your (and ML's) particular preference should be something you put in your browser, or in the spec. I think the former is fine, but the latter isn't as obvious.
Think about it this way: if `immersive-ar` came back, what would be the point of an author mimicking AR during a VR session? If they want to create an AR session, they can just request it and the UA can ask and inform the user what type of experience they will see.
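As a rough sketch of that author-side flow (assuming the `isSessionSupported()`/`requestSession()` shape of the API; user-activation and error handling omitted):

```js
// Hypothetical illustration: state AR intent directly instead of probing the
// blend mode from inside an 'immersive-vr' session.
async function startImmersiveSession() {
  if (await navigator.xr.isSessionSupported('immersive-ar')) {
    // The UA can prompt and inform the user that an AR experience will start.
    return navigator.xr.requestSession('immersive-ar');
  }
  // Otherwise fall back to a plain VR session rather than faking AR.
  return navigator.xr.requestSession('immersive-vr');
}
```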
Regarding whether "polluting" AR with VR content is bad:
Rik wrote:
I meant "the people who voiced their opinion during the meeting and on the github issue thought this was bad" :-) Unfortunately @NellWaliczek got this topic reversed in her summary email.
What I heard (and wrote as scribe) was general agreement that having `immersive-vr` and `immersive-ar` was good in the long term because we all mostly agree that we want content developers and users and browser vendors to differentiate between AR-centric and VR-centric use cases.

What I did not hear was what Rik is claiming, which is that everyone agrees that we should NOT follow the proposal of the leadership team (not just Nell) to work on the `immersive-ar` flag, the handheld/head-worn problem, and the security/user-understanding problem in a separate module outside of the WebXR core spec.
The question is not whether or not to support `immersive-ar`; the question is whether we're going to let the unsolved questions about `immersive-ar` block the WebXR core spec from becoming a standard while we figure out the AR core.

On Tuesday we'll get clarity from everybody on where they stand on the question of where to work on `immersive-ar`, be it the AR core module (as we agreed on at the June f2f and described by the most recent email from Nell) or somehow shipping just the `immersive-ar` flag in the WebXR core, which would be a new direction.
> Regarding whether "polluting" AR with VR content is bad:
>
> Rik wrote:
>
> > I meant "the people who voiced their opinion during the meeting and on the github issue thought this was bad" :-) Unfortunately @NellWaliczek got this topic reversed in her summary email.
>
> What I heard (and wrote as scribe) was general agreement that having `immersive-vr` and `immersive-ar` was good in the long term because we all mostly agree that we want content developers and users and browser vendors to differentiate between AR-centric and VR-centric use cases.
>
> What I did not hear was what Rik is claiming, which is that everyone agrees that we should NOT follow the proposal of the leadership team (not just Nell) to work on the `immersive-ar` flag, the handheld/head-worn problem, and the security/user-understanding problem in a separate module outside of the WebXR core spec.
I never claimed that; please re-read my comments. Here's a link to the minutes so people can see for themselves what was said. Specifically:
> alexturn: It's more about being explicit about choosing a form factor and thus an env blend mode so we can be more explicit by being in a store or targeting devices. On the web it's more broad, so I'm inclined to agree with Rik about knowing which form factor. I don't know if we would block 90% of the VR on the device (only supporting inline) and we'd need to relent at some point. ... We'd say fine, you could support VR and AR by adding a black thing in the background. Well, immersive-vr is the only way to use AR devices so a lot of logically AR experiences would end up using immersive-vr.
and
> Kip: We do function on additive displays, ML specifically. We would like to see there being the ability to express AR explicitly. We want to not pollute VR with lots of AR content. We want to encourage people to prototype in the future by providing responsive mode tools where you might simulate AR without having AR hardware. ... Even if we don't have hit-testing and other sensing (lighting estimation) I'd like to see immersive-ar enabled instead of shipping only immersive-vr.
Also, this thread has similar concerns from Google engineers who don't want fake AR via querying the blend mode.
> On Tuesday we'll get clarity from everybody on where they stand on the question of where to work on `immersive-ar`, be it the AR core module (as we agreed on at the June f2f and described by the most recent email from Nell) or somehow shipping just the `immersive-ar` flag in the WebXR core, which would be a new direction.

`immersive-ar` is not a new direction. It's in the first published working draft.
I guess I'll repeat this again:
The question of whether `immersive-ar` is eventually necessary is not the question.

The question is whether to block WebXR altogether while the open questions of `immersive-ar` (as raised by multiple orgs and described in Nell's email) are solved. At the June f2f we agreed (day 1 discussion, day 2 discussion) that both AR and the unsolved portion of the input APIs (aka "gamepad") should be moved into modules. We have finished the "vr complete" WebXR core and are now following the direction agreed to in June.

I've heard nobody say that work on the AR API should stop (or gamepad input, or performance, or layers, or anchors, or environmental lighting [it's a long list!]), but folks, we're sort of a year later than we wanted to be for this standard and there are dozens of important features that could, if we let them, make us another year late. At some point we had to choose what goes first and what waits. In June we made that choice.
Are we really going to entirely stop WebXR to revisit these choices or do we want to actually publish a perfectly usable core while the remaining open questions are answered and standardized?
@TrevorFSmith I think you're replying to the wrong thread.
It was @klausw who created this issue, and this is a conversation on the issues of `environmentBlendMode`, which exist regardless of `immersive-ar`.
It is also possible that you believe that all conversations should stop and the spec should ship as-is. Is that the case?
Rik wrote:
> It is also possible that you believe that all conversations should stop and the spec should ship as-is. Is that the case?
That is a wildly inappropriate response, Rik, bordering on a violation of the Code of Conduct. Please de-escalate how you approach this topic.
@cabanier:
> This is exactly what I fear will happen and which @kearwood and @thetuvix also voiced concerns about: we don't want to pollute the AR ecosystem with hacked VR content. As far as I know, we were all in agreement that this is bad.
In regards to the risk that we "pollute the AR ecosystem with hacked VR content", there are two related but opposite problems that have been motivating us all to figure this out ASAP, and I believe we may be conflating them in this discussion:
"immersive-ar"
by shipping AR experiences that self-describe as "immersive-vr"
, since that's all they have available to get immersive content onto an AR headsetTo be clear, my own primary concern is 1, for two key reasons:
"immersive-ar"
on HoloLens today, even before any RWU features such as hit-testing or anchors are available. Their scenarios are clearly AR scenarios in that their users would be relating virtual objects the app renders to the real-world objects they can also see in their environment (think AR furniture shopping), and it would be a shame for future web compat if they started publishing these as "immersive-vr"
sessions, simply because that's all we gave them."immersive-vr"
content on HoloLens if a user explicitly seeks it out.One additional risk to mislabeled VR vs. AR content is the advent of video passthrough headsets like the Varjo XR-1. These are headsets that really can either block out the world completely (VR) or show it behind app content (AR) based on the app's preference. If web content is clean about its intent when creating a session, UAs would then know whether to enable video composition under the site's content on such headsets by whether the site chose "immersive-vr"
or "immersive-ar"
. If we accrue a set of AR web content mislabeled as "immersive-vr"
, the UA will then dutifully block out the world, even though the app would prefer to have it blended in.
> - I disagree with @cabanier that there is no place for VR content on an AR headset. For example, an application like HoloTour is meaningfully a VR experience - it fills every pixel it can with a virtual rendering of Peru or Italy. This experience will be most immersive on an opaque headset with a wide FOV - however, even on an additive headset with a narrower FOV, this app still provides a meaningful experience that is superior to a flat 2D rendering of the content. It's of course up to any given UA whether to support VR content or not - however, I don't believe that we'd actually block `"immersive-vr"` content on HoloLens if a user explicitly seeks it out.
Yes, there will be content that is useful but there will also be content that will be broken.
I suspect we'll reevaluate our blocking of `immersive-vr` as time goes on. It would be great if you could share user feedback!

I think you mentioned that a UA could post a warning if VR was requested on an AR device. That's a very good idea!
> One additional risk to mislabeled VR vs. AR content is the advent of video passthrough headsets like the Varjo XR-1. These are headsets that really can either block out the world completely (VR) or show it behind app content (AR) based on the app's preference. If web content is clean about its intent when creating a session, UAs would then know whether to enable video composition under the site's content on such headsets by whether the site chose `"immersive-vr"` or `"immersive-ar"`. If we accrue a set of AR web content mislabeled as `"immersive-vr"`, the UA will then dutifully block out the world, even though the app would prefer to have it blended in.
I also mentioned these device types in immersive-web/webxr#786
Fixed by #2
As per recent discussions on the public-immersive-web-wg list (see also immersive-web/webxr#786), I think it would be helpful to add some clarifications to the environmentBlendMode section of the spec.
I'd suggest emphasizing that this is a read-only value that's set by the user agent based on the session mode and hardware properties, and that applications cannot change it or request a specific blend mode.
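A minimal sketch of what that read-only behavior would look like to a page (assuming the eventual `requestSession()` shape of the API):

```js
// Hypothetical illustration: the page can only observe the blend mode the UA
// picked for the granted session; there is no option or setter to choose one.
async function logBlendMode() {
  const session = await navigator.xr.requestSession('immersive-vr');
  // Chosen by the UA from the session mode and the display hardware.
  console.log(session.environmentBlendMode); // e.g. 'opaque', or 'additive' on see-through optics
}
```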
This note is a bit confusing now that the spec doesn't include `immersive-ar`. How about including something similar to this instead?

> Sessions of type `inline` and `immersive-vr` MUST use the blending mode most effective for making the user's environment not visible. This is opaque mode when possible, but a headset with an optical see-through display MAY choose additive if the hardware is not capable of opaque mode. Applications can use the reported additive blend mode to adjust color choices if necessary to help support this fallback VR mode.
>
> The blend mode of alpha-blend is reserved for future extensions such as augmented reality on a device using a passthrough camera; this MUST NOT be used for `inline` and `immersive-vr` sessions.