mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

Proposal: Let's make EffectComposer "Stereo Aware" to simplify VR post effects #8146

Open bhouston opened 8 years ago

bhouston commented 8 years ago

So I was trying my hand at stereo effects in ThreeJS, and the first issue I ran into is that they are somewhat incompatible with EffectComposer. The issue is that the "effects" in the examples/js/effects directory cannot actually be put into the EffectComposer. This was initially a little confusing to me. You can put examples/js/passes into the EffectComposer though (maybe it should have been called PassComposer?), but anyhow....

I think the solution is relatively straightforward. We could convert the examples/js/effects into Passes and push them through the EffectComposer pipeline. I actually did this in my private branch of ThreeJS last fall, but I am unsure if it is the best way; I did it in an expedient way... I'd like to get this back into ThreeJS proper now if you guys want it....

Here is how I did it:

  1. I modified EffectComposer to optionally accept a StereoCamera as the main camera.
  2. If the camera passed into EffectComposer is not a StereoCamera, it behaves as it currently does, so it is backwards compatible.
  3. If you pass in a StereoCamera, things change, but only for passes that are "stereoAware". This is a flag on each pass that defaults to false if not set. When a pass is stereoAware, the EffectComposer calls it twice, once for each camera of the stereo rig, with the appropriate viewport settings (each call renders to only half of the render target).

That is it. It is a very straightforward but powerful change.

A pass like RenderPass is stereo aware (stereoAware = true) since it needs to generate a different image for each camera. Other passes, like a non-adaptive ToneMap, are not stereo aware, as they operate on each pixel in a simple manner. The general rule is that any Pass that samples a search area, such as FXAA, SMAA, Bokeh, or SSAO, should be stereo aware, because otherwise it would blend pixels across the central barrier between the two eyes.
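To make this concrete, user code under this proposal might look roughly like the sketch below. None of this API exists in three.js today: the stereoAware flag and an EffectComposer that takes a StereoCamera are hypothetical, while StereoCamera, RenderPass, ShaderPass, FXAAShader and CopyShader are the real examples/js classes.

    // Hypothetical user-side code under this proposal.
    var stereoCamera = new THREE.StereoCamera();

    var composer = new THREE.EffectComposer( renderer );
    composer.camera = stereoCamera; // hypothetical: composer drives the per-eye cameras

    var renderPass = new THREE.RenderPass( scene, camera );
    renderPass.stereoAware = true;   // rendered once per eye, into half of the target

    var fxaaPass = new THREE.ShaderPass( THREE.FXAAShader );
    fxaaPass.stereoAware = true;     // search-area pass: must not blend across the barrier

    var toneMapPass = new THREE.ShaderPass( THREE.CopyShader );
    toneMapPass.stereoAware = false; // purely per-pixel: one call over the full target

    composer.addPass( renderPass );
    composer.addPass( fxaaPass );
    composer.addPass( toneMapPass );

    // Per frame:
    stereoCamera.update( camera );   // existing THREE.StereoCamera API
    composer.render( delta );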

Right now I make the assumption that StereoCamera splits the viewport into two even side-by-side renders. This seems to make sense to me, but maybe it isn't a valid assumption.

I am sure there are ways to further extend this design, but it does achieve what is minimally necessary: unifying Effects and Passes to make it easier to do high quality VR with ThreeJS.

I think this is a generally decent design and it is really simple to implement. I'd also suggest implementing a few new passes for this workflow, such as Lenticular (interleave) and Anaglyph, which just perform pixel re-arrangement. These could take the side-by-side renderings and turn them into new forms for output, so all stereo effects could use the same pipeline -- thus we'd have simplicity and efficiency of code.
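For instance, a side-by-side-to-anaglyph pass could be little more than a ShaderPass around a shader like this rough sketch. It is not an existing three.js shader; it assumes the upstream passes have left a side-by-side stereo image in tDiffuse.

    // Illustrative shader: reads a side-by-side stereo image from tDiffuse
    // and recombines it into a single red/cyan anaglyph frame.
    var SideBySideAnaglyphShader = {
      uniforms: {
        tDiffuse: { value: null }
      },
      vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '  vUv = uv;',
        '  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );',
        '}'
      ].join( '\n' ),
      fragmentShader: [
        'uniform sampler2D tDiffuse;',
        'varying vec2 vUv;',
        'void main() {',
        '  vec4 left  = texture2D( tDiffuse, vec2( vUv.x * 0.5,       vUv.y ) );',
        '  vec4 right = texture2D( tDiffuse, vec2( vUv.x * 0.5 + 0.5, vUv.y ) );',
        '  gl_FragColor = vec4( left.r, right.g, right.b, 1.0 );',
        '}'
      ].join( '\n' )
    };

    // Used as a final, non-stereoAware pass:
    // composer.addPass( new THREE.ShaderPass( SideBySideAnaglyphShader ) );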

This change would mostly gut the existing examples/js/effects in terms of rendering; the main thing we would need to keep is the StereoCamera setup code that is specific to each device.

mrdoob commented 8 years ago

Not super sure about the proposed design (not a fan of EffectComposer). If you have a link to a working test, I can try to give it a go at solving the design.

bhouston commented 8 years ago

Argh. I was trying some Google Cardboard stereo and it is just brutally slow on my Nexus 5 even without using any EffectComposer. So theoretically it is a good idea to do stereo in a unified fashion in EffectComposer, but in practice it may not be useful. But I'll post some code....

bhouston commented 8 years ago

The inner loop of the EffectComposer becomes something like this (this is modified from our custom Three.JS so it may not compile):

      if ( pass.vrAware && camera.type === 'StereoCamera' ) {

        // Stereo-aware pass: render once per eye, each into half of the target.
        this.renderer.enableScissorTest( true );

        for ( var j = 1; j >= 0; j -- ) {

          // Pick the right (j === 1) or left (j === 0) camera of the stereo rig,
          // using a separate variable so `camera` itself is not clobbered.
          var eyeCamera = ( j === 1 ) ? camera.cameraR : camera.cameraL;

          // Set viewport and scissor to this eye's half of the render target.
          var viewportRect = THREE.VRUtils.setupVRViewportAndSissor( this.renderer, j );

          pass.render( this.renderer, this.writeBuffer, this.readBuffer, delta, maskActive, eyeCamera, viewportRect );
        }

        THREE.VRUtils.resetViewportAndSissor( this.renderer, this.writeBuffer );
        this.renderer.enableScissorTest( false );

      } else {

        // Non-stereo-aware pass: render once over the full target as before.
        pass.render( this.renderer, this.writeBuffer, this.readBuffer, delta, maskActive, camera );
      }

Notice that I pass both the camera and the viewportRect into render(); that is because sometimes, for things like background images, you need to get them to fit into the render region. Also, having the render camera available in all passes is a godsend for other reasons, such as getting the nearClip and farClip distances for SSAO/SAO/DOF - you need to keep them in sync between your depth pass and the pass that reads that depth buffer.
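For example, a depth-reading pass could use the camera argument roughly like this. This is only a sketch against the extended render() signature above; the pass and uniform names are made up for illustration.

    // Hypothetical depth-aware pass using the camera handed in by the composer,
    // so its unprojection uniforms always match the camera used for the depth pass.
    function DepthAwarePass( ssaoMaterial ) {
      this.material = ssaoMaterial; // ShaderMaterial with cameraNear/cameraFar uniforms
      this.vrAware = true;          // run once per eye (the "stereoAware" flag above)
    }

    DepthAwarePass.prototype.render = function ( renderer, writeBuffer, readBuffer, delta, maskActive, camera, viewportRect ) {

      // Keep near/far in sync with whatever camera filled the depth buffer.
      this.material.uniforms.cameraNear.value = camera.near;
      this.material.uniforms.cameraFar.value = camera.far;

      // ... render the full-screen quad into writeBuffer for this eye's viewport ...
    };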

MasterJames commented 8 years ago

Typo? Sissor vs Scissor

bhouston commented 8 years ago

Typo? Sissor vs Scissor

Heh. Apparently that is what I named the other function in my code, so it actually works. But yeah, it is spelled wrong, but I think the idea is sort of clear?

possan commented 8 years ago

Any new ideas on this?

I would really like to add some post processing to one of my VR hacks... and if we aren't going to use the EffectComposer, what would we use instead to add multiple render passes to the output?

bhouston commented 8 years ago

I have thought about doing this, but we have had such a hard time getting VR rendering fast enough that adding complex passes to it just seems unnecessary at this time.

michaelybecker commented 7 years ago

Checking in from the future: in light of this thread, is out-of-the-box postprocessing for VR still essentially a no-go? I know @takahirox was tweeting some cool stuff a few months back... Are there plans for any of that to make it into the official build?

sjlynch commented 6 years ago

I think this is an important feature for three.js. At the moment, it is difficult to do something like the OutlinePass in VR. In an app where learning is important, being able to highlight and then display information about what a user is looking at or interacting with would be really useful, and it wouldn't slow the rendering much since it's just a simple outline shader.

Has there been any progress lately?

LAGENCECREE commented 6 years ago

@wizgrav made awesome work here: https://wizgrav.github.io/aframe-effects/ - maybe it is portable to plain three.js? Last version, r90? (with renderer.vr)

wizgrav commented 6 years ago

aframe-effects supports being used without A-Frame, that is, with just three.js; check the docs and the example.

wizgrav commented 6 years ago

A good addition for the shaders would be a cameraViewport vec4 uniform containing the normalized viewport. This is more or less what I implemented in aframe-effects, along with a textureVR function that just bounds the texture lookups based on that uniform so they don't spill across eyes.
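Roughly like this (an illustration of the idea, not the actual aframe-effects code):

    // Illustrative GLSL chunk: clamp texture lookups to the current eye's
    // normalized viewport so search-area effects don't sample the other eye.
    var textureVRChunk = [
      'uniform vec4 cameraViewport; // x, y, width, height in 0..1 of the full target',
      '',
      'vec4 textureVR( sampler2D tex, vec2 uv ) {',
      '  vec2 lo = cameraViewport.xy;',
      '  vec2 hi = cameraViewport.xy + cameraViewport.zw;',
      '  return texture2D( tex, clamp( uv, lo, hi ) );',
      '}'
    ].join( '\n' );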

danrossi commented 1 year ago

I need this to add this distortion pass to make a new WebXR view for iPhone. The polyfill is broken with three.js, so I have to use StereoEffect with it.

https://github.com/ycw/three-lens-distortion