There is at least one patent on HBAO: http://www.google.com/patents/US8878849 I am unsure what NVIDIA's stance is on people re-implementing it in non-licensed contexts like Three.js. I know that NVIDIA does tend to enforce its patents.
For temporal anti-aliasing, I have heard that NVIDIA's TXAA is overhyped for its results compared to the time it takes. I am not sure that is the case because I haven't done tests; that is just what random people on the internet have stated. There is a whole field of TAA algorithms. I understand the one documented here from CryEngine is one of the better ones: http://www.crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf
I've looked into this recently and haven't found out what algorithm Star Wars Battlefront or Fallout 4 uses for its TAA, but the results are good and I believe it isn't NVIDIA's TXAA.
I asked around and the state-of-the-art is TAA that UE4 and other modern game engines use, but it is designed for a deferred pipeline with post processing:
https://de45xmedrsdbp.cloudfront.net/Resources/files/TemporalAA_small-59732822.pdf
BTW I still believe that we should integrate post processing into the core WebGLRenderer... different discussion I know.
I was considering some sort of renderer.setEffect() API...
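As a purely speculative sketch of what that could look like — neither setEffect() nor the TAAEffect class below exist in three.js, they are assumptions for illustration:

```js
// Hypothetical usage sketch; setEffect() and THREE.TAAEffect are assumptions,
// not an existing three.js API.
var renderer = new THREE.WebGLRenderer();

// The renderer would own the effect and run it as part of renderer.render().
renderer.setEffect( new THREE.TAAEffect( { sampleLevel: 2 } ) );

renderer.render( scene, camera );

// Passing null would restore the plain forward path.
renderer.setEffect( null );
```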
Yes, nice examples. TAA is a clear winner. Some kind of compositing effect with a depth mask and edge detection would be a nice simple start, I imagined. It's amazing for me to think one could spend years focused on anti-aliasing alone! The key thing I think is going to be important is more automated ways to dumb down enhancements, reflections, etc. based on the frame rate achieved. At least it would be the default, and you could set a priority for high quality renders vs. frame rate. Example: increase shadow map resolution based on distance from camera and fov.
Some kind of compositing effect with a depth mask and edge detection would be a nice simple start, I imagined.
It is just so costly with WebGL 1.0 because we cannot do true multiple outputs per render pass. This limits scalability pretty severely: complex scenes with animated characters are basically three times slower than if multiple outputs were possible, since the whole scene (skinning included) has to be drawn once per output buffer instead of once for all of them.
I guess we can have two modes: one that renders out the passes manually if the WEBGL_draw_buffers extension isn't present, and one that does it efficiently if it is.
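A minimal sketch of that capability check, assuming access to the renderer's raw context; renderGBufferMRT and renderGBufferMultiPass are hypothetical helpers standing in for the two modes:

```js
// Pick between a single MRT pass and a manual multi-pass fallback.
var gl = renderer.getContext();
var drawBuffersExt = gl.getExtension( 'WEBGL_draw_buffers' );

if ( drawBuffersExt !== null ) {

	// One G-buffer pass writing color, normal and depth at once.
	renderGBufferMRT( drawBuffersExt );

} else {

	// Fallback: re-render the scene once per target with override materials.
	renderGBufferMultiPass();

}
```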
The key thing I think is going to be important is more automated ways to dumb down enhancements, reflections, etc. based on the frame rate achieved.
Yeah, but that is pretty hard to do, because the frame rate achieved depends on the current scene complexity, and that is always changing in a game context.
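For what it's worth, one rough way to approximate it is to react to measured frame time rather than predict scene cost. The sketch below is an assumption, not an existing three.js feature, and only adjusts the pixel ratio as an example knob:

```js
// Naive adaptive-quality sketch: measure frame time and back off when the
// 60 fps budget is exceeded. Assumes an existing THREE.WebGLRenderer `renderer`.
var targetFrameTime = 1000 / 60;
var lastTime = performance.now();

function adjustQuality() {

	var now = performance.now();
	var frameTime = now - lastTime;
	lastTime = now;

	if ( frameTime > targetFrameTime * 1.25 ) {

		// Too slow: lower the render resolution, the cheapest quality knob.
		renderer.setPixelRatio( Math.max( 0.5, renderer.getPixelRatio() - 0.25 ) );

	} else if ( frameTime < targetFrameTime * 0.75 ) {

		renderer.setPixelRatio( Math.min( window.devicePixelRatio, renderer.getPixelRatio() + 0.25 ) );

	}

}
```

In practice it would also need hysteresis to avoid oscillating between settings, which is part of why this is hard to do well.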
I'd prefer just to be able to do TAA and screen space specular reflections first. :)
I would love to see screen space specular reflections via ray marching, those effects are so sexy.
Sure. :)
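For readers unfamiliar with the technique: the core of screen space reflections is a ray march against the depth buffer. The toy sketch below is illustrative only; in practice this loop lives in a fragment shader, and readDepth is an assumed callback:

```js
// March a reflected ray through screen space until it dips behind the depth
// buffer; everything here is plain JS standing in for shader code.
function marchScreenSpaceRay( origin, direction, readDepth, maxSteps, stepSize ) {

	var x = origin.x, y = origin.y, z = origin.z;

	for ( var i = 0; i < maxSteps; i ++ ) {

		x += direction.x * stepSize;
		y += direction.y * stepSize;
		z += direction.z * stepSize;

		// Hit: the ray depth now exceeds the depth stored at this pixel.
		if ( z > readDepth( x, y ) ) return { x: x, y: y };

	}

	return null; // No hit; fall back to an environment map.

}
```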
@mrdoob I think that these advanced game engines have semi-fixed, fairly complex post processing pipelines where passes talk to each other using a ton of intermediate buffers. It is also as if each different post processing pipeline structure is a class with options; that class knows how to structure the effects it needs. Thus it is higher level than just setEffect, because setEffect is fairly low level and not aware of the higher level structure of a complex post effect pipeline.
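Something along these lines, purely as a sketch; DeferredPipeline, GBufferPass, SSRPass and TAAPass are made-up names for illustration, not existing three.js classes:

```js
// Hypothetical higher-level pipeline class that owns its passes and the
// intermediate buffers they share.
function DeferredPipeline( renderer, options ) {

	this.renderer = renderer;

	// Intermediate buffers shared between passes.
	this.gBuffer = new THREE.WebGLRenderTarget( options.width, options.height );
	this.history = new THREE.WebGLRenderTarget( options.width, options.height );

	// The pipeline, not the user, decides which passes it needs and in what order.
	this.passes = [
		new GBufferPass( this.gBuffer ),
		new SSRPass( this.gBuffer ),
		new TAAPass( this.history )
	];

}

DeferredPipeline.prototype.render = function ( scene, camera ) {

	for ( var i = 0; i < this.passes.length; i ++ ) {

		this.passes[ i ].render( this.renderer, scene, camera );

	}

};
```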
@bhouston Have you seen any implementations of PCSS (percentage-closer soft shadows)? Out of the 3 effects that @MasterJames linked to (thanks for these btw! :-) ), PCSS was in my mind the most feasible since, according to the PDF paper, it requires no pre-processing, no added geometry and no post-processing, and is scene-complexity independent. They are the best looking shadows I have seen yet (even in other recent AAA games I can pick out the industry-standard shadow map filtering artifacts right away, especially on main character movement through a scene).
PCSS sounds too good to be true! Maybe I'm missing something and there is more magic going on under the hood, but I think this type of shadow would finally cure three.js of its shadowing woes. I would like to hear your thoughts (and others' thoughts too).
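For reference, the reason PCSS needs no pre-processing is that it works directly on an ordinary shadow map in three steps: a blocker search, a penumbra estimate, and a variable-size PCF filter. The sketch below shows the penumbra estimate from the linked NVIDIA paper as plain JS arithmetic; in practice it runs in the shadow fragment shader, and lightSize is a tunable parameter:

```js
// Step 2 of PCSS: estimate the penumbra width from similar triangles between
// the light, the average blocker depth and the receiver depth.
function estimatePenumbraWidth( zReceiver, zAvgBlocker, lightSize ) {

	return ( zReceiver - zAvgBlocker ) * lightSize / zAvgBlocker;

}

// Step 1 (blocker search) averages occluder depths around the shadow-map
// sample; step 3 runs ordinary PCF with a kernel scaled by this width.
```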
I guess we can have two modes: one that renders out the passes manually if the WEBGL_draw_buffers extension isn't present, and one that does it efficiently if it is.
Or three: WEBGL_draw_buffers > float textures > multi-pass
@erichlof I also cannot find any patents on PCSS, at least not under the name "Randima Fernando", so I think we are okay to proceed. Seriously, if it is good and easy, we should do it. :)
More details on TAA under a different name (SMAA 1TX) along with code samples: http://www.iryoku.com/smaa/
Full source code to SMAA on github with a permissive license:
Delightful find.
@benaadams I think that you are right: "Or three: WEBGL_draw_buffers > float textures > multi-pass". If we integrate this into a request model (as we previously discussed) and combine it with @mrdoob's suggested setEffect(), it should be really nice to use.
I guess this is officially a feature inquiry/request, but I just thought to share useful information.
This video highlights some newer technologies found in Creed (for your consideration). https://www.youtube.com/watch?v=04URvixusZM
More here... http://www.geforce.com/hardware/technology
TXAA: http://www.geforce.com/hardware/technology/txaa/technology
HBAO+: http://www.geforce.com/hardware/technology/hbao-plus
PCSS paper: http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
PCSS video: https://www.youtube.com/watch?v=QW6Tm_mfOmw
PCSS comparison: http://international.download.nvidia.com/geforce-com/international/comparisons/grand-theft-auto-v/grand-theft-auto-v-soft-shadows-interactive-comparison-1-nvidia-pcss-vs-softest.html
I guess the question is, "Is THREE.js already using or able to use any of these ideas?".