mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

Project Feedback and Questions #1017

Closed scottgarner closed 12 years ago

scottgarner commented 12 years ago

Hey all,

I'm finishing up the much-delayed new incarnation of Mission Control:

http://superfad.com/missioncontrol/video/

I have a lot of little loose ends to take care of, but there are some bigger questions I was hoping to get some help with, particularly around compositing.

• Layered RenderPasses

I have both the room/main screen and the video panel on separate render passes. This works fine unless I want to do something like apply an effect to the whole room, but not the screen.

Basically, what I want to accomplish is fake DOF by blurring the main RenderPass, but not the video screen. Is this possible? I tried something where I rendered the whole room to a texture, but it didn't look very good. Sandwiching ShaderPasses between RenderPasses doesn't seem to work, either.

• Overlaid Effects

I'd like to throw some lighting effects over the whole render, basically just static plates that I'd normally just layer on top in After Effects in multiply mode. Is there a straightforward way to do something like this?

• Floor Reflections

I'd like some hint of glowing light from the wall and the screen reflected on the floor. What's a good way to fake this?

Of course I'd also like any other feedback on how I might tighten up the presentation, etc.

Best,

Scott

alteredq commented 12 years ago

Hmmm, interesting. I think we'll need to improve the postprocessing chain to be able to do the effects you want.

For the fake DOF you could use masking (with MaskPass), but it would need to be inverted: effects later in the chain would apply only where nothing is rendered, instead of the current way, where effects apply only where something is rendered. See the dotted head here:

http://mrdoob.github.com/three.js/examples/webgl_postprocessing.html

I'll look into this. I intended to do inverted masking some day; now you've found a use case ;)

For image overlays, I guess you could make a ShaderPass that takes extra texture(s) as input and does the proper mix. Something like the blend shader, just with a simple multiplication instead of a mix:

https://github.com/mrdoob/three.js/blob/master/examples/js/ShaderExtras.js#L1094

Maybe we could have shader(s) for all the standard blending modes:

http://blog.blackpawn.com/post/13087451005/photoshop-blend-modes
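For illustration, the per-channel math of a few standard modes could be sketched like this (function names are just hypothetical; a real blend ShaderPass would evaluate the same formulas per fragment in GLSL):

```javascript
// Channel values are assumed to be in [0, 1], as in a shader.
function multiplyBlend( base, blend ) {
  return base * blend;
}

function screenBlend( base, blend ) {
  return 1 - ( 1 - base ) * ( 1 - blend );
}

function overlayBlend( base, blend ) {
  // Multiply in the shadows, screen in the highlights.
  return base < 0.5
    ? 2 * base * blend
    : 1 - 2 * ( 1 - base ) * ( 1 - blend );
}
```

For the static lighting plates described above, multiply is the one you'd want.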

For floor reflections, I don't know yet. I still haven't figured out a nicer, more generic/automatic way of doing these things.

The usual trick is to place a mirrored copy of the stuff you want to "reflect" below the real thing and overlay it with a semi-transparent plane:

http://mrdoob.github.com/three.js/examples/webgl_geometry_text.html
http://mrdoob.github.com/three.js/examples/canvas_materials_video.html

alteredq commented 12 years ago

And inverse masking is in:

https://github.com/alteredq/three.js/commit/1ca518d8787b8c066a2523d7783cf18a547ca154

I updated the postprocessing example to show how to use it to blur the background.

renegademaster88 commented 12 years ago

I'd like to agree with Jungalero: it would be great to be able to sandwich effect passes between render passes in the future! I tried something similar and it also didn't work!

I think the Composer has enormous potential if it were made a bit more flexible.

I read an interesting article about renderer pipelining that suggested putting the composition phase onto a separate thread. Is there any reason I couldn't do this using a web worker?

http://www.slideshare.net/ozlael/hable-john-uncharted2-hdr-lighting

Look at slides 233 and beyond.

alteredq commented 12 years ago

I think sandwiching doesn't work because the z-buffer gets destroyed by the postprocessing passes. These things are surprisingly tricky. One day...

Meanwhile, render-to-texture is probably the way to handle such things, using EffectComposer the way you'd use Photoshop.

> I read an interesting article about renderer pipe-lining that suggested putting the Composition phase onto a separate thread. Is there any reason I couldn't do this using a web worker?

Web workers are currently quite limited. You can't access the DOM and you can't render anything there; you can just pass JSON or typed arrays in and out. They are basically good just for numerical computations, string manipulations, and so on.
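As a data-only sketch (payload names are hypothetical), this is the kind of thing that can cross the worker boundary:

```javascript
// Structured-cloneable data only: plain objects, strings, numbers,
// and typed arrays. DOM nodes or WebGL objects in the payload would throw.
var payload = {
  frame: 42,
  positions: new Float32Array([ 0.0, 1.5, -2.25 ])
};

// In the page (hypothetical): worker.postMessage( payload );
// In the worker: self.onmessage = function ( e ) { /* crunch numbers */ };
```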

If I remember correctly, though, there were some plans to make it possible to do more WebGL things in workers.

> http://www.slideshare.net/ozlael/hable-john-uncharted2-hdr-lighting

Nice presentation, thanks for the link. John Hable is great; it's partly thanks to his proselytizing that we have the gamma-correct rendering option.

Filmic tonemapping would also be nice; as far as I understood, though, you need HDR with 16-bit color to avoid horrible artifacts, which would make everything much more bandwidth-heavy. It could still be worth a try. The RGBM encoding from the presentation looks very interesting.

SSAO is in the works ;).

scottgarner commented 12 years ago

Thanks for the tips, alteredq.

I got a bit closer by rendering the background to one texture and the video panel to another. I want to just put the video panel on top now, but the texture doesn't have any alpha, even though my setClearColorHex call should make it clear.

I messed with looking for a key color (in this case just white), but that doesn't help me when the video fades in. Same problem with your inverse-masking trick: I can blur the background, but when I look through the semi-transparent video as it fades up, I see a non-blurred background. Make sense?

Is there a way I can just get a TexturePass with an alpha?

-S

scottgarner commented 12 years ago

Here's a version with the bad color key glitch:

http://superfad.com/missioncontrol/video/

alteredq commented 12 years ago

If you want to use alpha with EffectComposer, you need to supply your own render target that uses the RGBA format (the one created by default uses just RGB).

Also, if you handle resizing, don't forget to supply a new RGBA render target to effectComposer.reset.
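A minimal sketch of the idea (the THREE constants here are mocked with placeholder values so the fragment stands alone; the commented-out lines show the hypothetical real usage in a page):

```javascript
// Mock constants for illustration only; in a real page these come from three.js.
var THREE = { LinearFilter: "LinearFilter", RGBAFormat: "RGBAFormat" };

// The important part: ask for RGBA instead of the default RGB, so the
// clear color's alpha channel survives into the composer's render target.
var pars = {
  minFilter: THREE.LinearFilter,
  magFilter: THREE.LinearFilter,
  format: THREE.RGBAFormat
};

// Hypothetical real usage (not runnable without a WebGL context):
// var renderTarget = new THREE.WebGLRenderTarget( width, height, pars );
// var composer = new THREE.EffectComposer( renderer, renderTarget );
// On resize, hand the composer a fresh RGBA target:
// composer.reset( new THREE.WebGLRenderTarget( newWidth, newHeight, pars ) );
```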

scottgarner commented 12 years ago

Thanks for the quick response.

That is so close to working, but it's setting alpha not only for the background of my texture but for the texture itself. In other words, it hides both the background and the thing I'm rendering.

I'm sure I'm doing something wrong. You can see it here set to 50% alpha: it ghosts both the texture and the background.

http://superfad.com/missioncontrol/video/

-S

alteredq commented 12 years ago

I'm not sure I understand the difference between what's expected and what's rendered.

Here is how it looks for me (zoomed out):

[screenshot: zoomed out]

And here it is zoomed in:

[screenshot: zoomed in]

Can you show me what's wrong?

scottgarner commented 12 years ago

How strange! That's what I want to achieve, minus the 50% alpha, of course. This is what I see in Chrome 16.0.012.75:

[screenshot: zoomed out]

And this:

[screenshot: zoomed in]

alteredq commented 12 years ago

So it seems like one of those unfortunate cases where different systems give different results for the same code :S.

Which OS / GPU do you have?

I'm on Windows 7 with Nvidia Quadro 2000M GPU. I get the same result both for ANGLE and OpenGL rendering backends (in latest stable Chrome).

In Firefox, after clicking, the video doesn't show at all, and there is a warning in the JS console:

WebGL: DrawElements: bound vertex attribute buffers do not have sufficient size for given indices from the bound element array

This kind of points to there being a bug somewhere. It would be good to address this first; maybe the other issue is related.

scottgarner commented 12 years ago

I'm on an iMac running Lion (10.7.2) with an AMD Radeon HD 6970M 1024 MB.

Strange, I get the same results in Firefox 9.0.1 as Chrome. No error, just weird transparency.

I'm not really sure what to try next.

Thanks for all of your help on this. Tomorrow is my last day at Süperfad, so I'm eager to get it done before I leave.

-S

alteredq commented 12 years ago

I tried on my older notebook with an ATI Radeon 3650 GPU (also Windows 7) and got the same results as on my Nvidia (OK in Chrome, doesn't work in Firefox).

So it seems more likely that it's something about Mac vs. Windows than about Nvidia vs. ATI.

Also, I'm getting an exception about superwall.screenUniforms[0] being undefined, and a failure to load this file: http://superfad.com/missioncontrol/video/textures/background.jpg

But that's also in Chrome, where it otherwise works.

Hard to tell what's wrong; it could be some lower-layer bug or some difference in spec interpretation. Maybe some other Mac user can chime in.

These troubles are usually very hard to figure out: you need to painstakingly isolate which single thing makes the difference on different systems, and even then there is often no solution except to file a bug report somewhere.

scottgarner commented 12 years ago

It works fine on a Mac Pro, so it must be the card in the iMac plus Mac browsers.

I'll just work on a simplified solution without quite so much pizzazz. Thanks again for your help.

alteredq commented 12 years ago

Just a wild guess: maybe the "feather" shader is messing with the transparency of the whole video, accidentally setting not just the borders but everything to transparent.

Try no feather, no fades, just straight rendering there, to see if the video layer is still transparent.

scottgarner commented 12 years ago

Not a bad guess, but I got the same results. I tested a few more machines in the office and it just seems to be an issue on the iMacs. I'll keep experimenting and let you know if I come up with anything.

Thanks again.

renegademaster88 commented 12 years ago

I have to say, it's going to look really nice when it does work!

scottgarner commented 12 years ago

alteredq,

I finally ran into the Firefox error you mentioned. The crazy thing is that it only happens when I view the site on a remote server, not a local one. The line causing it seems to be:

superwall.renderer.render( superwall.scene, superwall.camera, superwall.UVRenderTexture, true );

I do this in superevents.js to render a UV color pass so I know where the user is clicking on the panel.

The UVRenderTexture is created in superwall.js like this:

var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBFormat };
superwall.UVRenderTexture = new THREE.WebGLRenderTarget( superwall.renderWidth, superwall.renderHeight, pars );

Any ideas? It's so weird to me that the exact same code in the exact same browser works at localhost but not on a server.

alteredq commented 12 years ago

This could be some asynchronicity issue. Things take a different amount of time on localhost versus a remote server, causing a different order of execution, so something that is assumed to be ready isn't always ready; it only is when the timing happens to be right.

scottgarner commented 12 years ago

That sounds correct, but I'm a little stumped.

The whole point is to render the scene to a target with an override material (a UV color map) so I can getContext and look at the pixels. I wrote this before all of your awesome postprocessing stuff, so I'm sure I could rework it with an EffectComposer, but I'm not sure I understand the distinction between RenderPass and SavePass. Can I use one or the other with an override material and end up with something whose pixels I can examine?

alteredq commented 12 years ago

RenderPass renders scene into EffectComposer's render target.

SavePass copies a current state of the EffectComposer's render target into another render target. It's basically a "fork" in the postprocessing chain.

So far it's been used just for the motion blur effect, to be able to blend the new frame with an untouched old frame:

applyEffects( blend( oldFrame, newFrame ) )

vs

blend ( applyEffects( oldFrame ), applyEffects( newFrame ) )
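As a toy illustration of why that distinction matters (stand-in functions, not the real passes): with any nonlinear effect the two orderings give different results, which is why SavePass has to keep an untouched copy of the old frame.

```javascript
// Stand-ins: a nonlinear "effect" and a 50/50 blend, on a single channel value.
function applyEffects( x ) { return x * x; }
function blend( a, b ) { return 0.5 * ( a + b ); }

var oldFrame = 0.2;
var newFrame = 0.8;

// Effect applied once, after blending:
var effectAfterBlend = applyEffects( blend( oldFrame, newFrame ) ); // 0.25

// Blend of two already-processed frames, which SavePass makes possible:
var blendOfEffects = blend( applyEffects( oldFrame ), applyEffects( newFrame ) ); // ~0.34
```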

For readPixels any render target should do; just be aware that reading pixels must happen immediately after rendering into it (as readPixels uses the "current" framebuffer), or otherwise you need to set the render target explicitly before reading (renderer.setRenderTarget( myRenderTarget )).

With EffectComposer, you also need to be aware that there are two render targets (renderTarget1 and renderTarget2) because of double buffering, so if one doesn't work, try the other one.
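A toy model of that double buffering (hypothetical names, just to show the ping-pong): each pass reads from one target and writes to the other, then the two swap, so which target holds the final image depends on how many passes ran.

```javascript
function Composer() {
  this.readBuffer  = "renderTarget1";
  this.writeBuffer = "renderTarget2";
}

// After each pass the roles reverse, so the freshly written target
// becomes the read buffer for the next pass.
Composer.prototype.swapBuffers = function () {
  var tmp = this.readBuffer;
  this.readBuffer = this.writeBuffer;
  this.writeBuffer = tmp;
};

var composer = new Composer();
composer.swapBuffers(); // after one pass, renderTarget2 is the one to read
```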

scottgarner commented 12 years ago

Would there be any advantage to using a completely separate renderer for this, or is that problematic?

alteredq commented 12 years ago

No, a separate renderer would just cause trouble. WebGL contexts cannot share GL data.