Closed: bhouston closed this issue 5 years ago.
Thanks for the ping, and it's cool to see someone put it into WebGL. Now I want to see it side-by-side with other SSAOs.
On Wed, Apr 22, 2015 at 7:56 PM, Ben Houston notifications@github.com wrote:
This is really neat and better than any SSAO technique I have read about yet:
http://graphics.cs.williams.edu/papers/SAOHPG12/McGuire12SAO-talk.pdf
Sample code:
https://gist.github.com/fisch0920/6770311
/ping @erich666 https://github.com/erich666 as I think you were thanked in the presentation for your contributions. :)
Looks very interesting
related issue
This is what I have so far:
The original scene without SAO is here: https://clara.io/view/a558dca2-8c2f-432c-ab34-9135c3066010/webgl
Source code: https://gist.github.com/bhouston/1dc2a760783314b95bd9
Wow!
@bhouston How is your work going? I am working on transforming positions from clip space to camera space, which is a little bit tricky. I am trying to figure out the reconstructCS functions from the HPG 2012 paper, and I have tried to use the inverse projection matrix to perform the transformation, but it doesn't work.
I am curious about your randomPatternRotationAngle; there are a lot of hardcoded variables. Currently I am trying to use the 'xor' and 'and' operators as the paper mentions. My result is still incorrect, but I think it is much closer.
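For what it's worth, the clip-space-to-camera-space reconstruction can be sanity-checked outside the shader. Here is a minimal Python sketch (not the shader code; the function names, the standard GL perspective projection, and the test values are my own) that projects a camera-space point to NDC and then recovers it from the NDC xy plus the known camera-space depth:

```python
import math

def project(p, fovy, aspect, near, far):
    """Project a camera-space point with a standard GL perspective
    projection, returning NDC coordinates (after the divide by w = -z)."""
    x, y, z = p
    f = 1.0 / math.tan(fovy / 2.0)
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    ndc_z = (((far + near) / (near - far)) * z
             + (2.0 * far * near) / (near - far)) / -z
    return (ndc_x, ndc_y, ndc_z)

def unproject(ndc_xy, camera_depth, fovy, aspect):
    """Recover camera-space xy from NDC xy plus the known (positive)
    camera-space depth -- the closed form of the reconstruction."""
    f = 1.0 / math.tan(fovy / 2.0)
    x = ndc_xy[0] * camera_depth * aspect / f  # camera_depth = -z
    y = ndc_xy[1] * camera_depth / f
    return (x, y, -camera_depth)

# Round trip: a point at z = -5 projects and then reconstructs.
p = (1.0, 2.0, -5.0)
ndc = project(p, math.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
print(unproject((ndc[0], ndc[1]), 5.0, math.radians(60.0), 16.0 / 9.0))
```

If the shader's inverse-projection path disagrees with a round trip like this, the bug is usually a missing perspective divide or a depth-sign mix-up.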
I had previously made up my own rand function, but it wasn't giving great results. I switched to this random function and it works great:
float rand(vec2 co){
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
source: http://stackoverflow.com/questions/12964279/whats-the-origin-of-this-glsl-rand-one-liner
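Out of curiosity, here is a quick Python port of that one-liner (purely illustrative; the real thing runs per-fragment in GLSL, and Python doubles won't reproduce float32 values exactly), just to see that it stays in [0, 1) and varies with the input coordinate:

```python
import math

def fract(x):
    """GLSL fract(): fractional part, x - floor(x)."""
    return x - math.floor(x)

def glsl_rand(x, y):
    """Python port of the canonical GLSL one-liner rand(vec2 co)."""
    return fract(math.sin(x * 12.9898 + y * 78.233) * 43758.5453)

# A few samples at different "screen" coordinates.
samples = [glsl_rand(i * 0.13, i * 0.29) for i in range(5)]
print(samples)
```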
I believe we are both supposed to be doing 2 blurring passes after the first AO calculation. I haven't done that yet.
The current invProj-matrix-based reconstruction is only correct for perspective projection matrices. If the projection is orthographic, the first term must not be scaled by the depth. Thus instead of:
return vec3( (clipPos * projInfo.xy + projInfo.zw) * cameraDepth, cameraDepth );
One needs for the case of orthographic projection matrices:
return vec3( (clipPos * projInfo.xy + projInfo.zw), cameraDepth );
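The difference can be sketched in Python (the projInfo packing and names follow the snippet above; this is an illustration, not the actual shader):

```python
def reconstruct_cs_position(clip_xy, camera_depth, proj_info, perspective=True):
    """Sketch of the two reconstruction variants.

    proj_info packs (scale_x, scale_y, offset_x, offset_y). For a
    perspective projection the recovered xy must be scaled by the
    camera-space depth; for an orthographic one it must not be.
    """
    sx, sy, ox, oy = proj_info
    x = clip_xy[0] * sx + ox
    y = clip_xy[1] * sy + oy
    if perspective:
        x *= camera_depth
        y *= camera_depth
    return (x, y, camera_depth)

# The same screen position at twice the depth maps to a scaled camera-space
# xy under perspective, but to the same xy under an orthographic projection.
print(reconstruct_cs_position((0.5, 0.5), 2.0, (1.0, 1.0, 0.0, 0.0), True))
print(reconstruct_cs_position((0.5, 0.5), 2.0, (1.0, 1.0, 0.0, 0.0), False))
```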
Turns out the rand function I linked to above has issues on certain mobile GPUs, and this version is recommended instead:
http://byteblacksmith.com/improvements-to-the-canonical-one-liner-glsl-rand-for-opengl-es-2-0/
We should likely create a library of functions to be used in post processing effects to avoid re-inventing the wheel. :)
For line:
https://gist.github.com/daoshengmu/da66727d5f7124cfd172#file-sao-glsl-L39
You can likely just replace it with something simpler:
cross(dFdx(viewPos), dFdy(viewPos))
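The idea behind that one-liner: dFdx/dFdy give the view-space position differences between neighboring pixels, and their cross product is a normal that is constant across each triangle. A Python sketch with finite differences standing in for the derivative instructions (the helper names are mine):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(p, p_right, p_down):
    """cross(dFdx(viewPos), dFdy(viewPos)), with the derivatives
    approximated by the positions of the neighboring pixels."""
    return cross(sub(p_right, p), sub(p_down, p))

# Three positions on the plane z = 0: the result is (0, 0, 1), and it
# would be identical for any pixel on that plane -- a "face" normal.
print(face_normal((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```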
One issue I am running into is that I can see the mesh structure and it is very distracting. Here is an SAO only render:
BTW, where in the code does the radius of the sample disk vary with z-distance? I was thinking it should do that.
I am pretty sure this part of your code @daoshengmu is incorrect:
float ao = (1.0 - sum) / float(NUM_SAMPLES);
ao = 1.0-clamp(pow(ao, 1.0 + 100.0), 0.0, 1.0);
I just tried it in my version and it produces incorrect results. The issue is that (1.0 - sum) / NUM_SAMPLES doesn't make sense; you need to normalize sum first.
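Numerically the difference is easy to see. A Python sketch (the NUM_SAMPLES value here is arbitrary) comparing the posted expression with one that normalizes sum first:

```python
NUM_SAMPLES = 64  # arbitrary sample count, for illustration only

def ao_posted(occlusion_sum):
    # (1.0 - sum) / NUM_SAMPLES -- the expression from the snippet above
    return (1.0 - occlusion_sum) / NUM_SAMPLES

def ao_normalized(occlusion_sum):
    # normalize sum to [0, 1] first, then invert
    return 1.0 - occlusion_sum / NUM_SAMPLES

# Fully occluded (sum == NUM_SAMPLES) should give ao = 0 and fully open
# (sum == 0) should give ao = 1. Only the normalized form does that; the
# posted form goes negative when occluded and stays near zero when open.
print(ao_posted(64.0), ao_posted(0.0))
print(ao_normalized(64.0), ao_normalized(0.0))
```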
BTW for debugging purposes I up the samples to ridiculous levels and I see artifacts like this in my version - which is unfortunate:
I ran this by Morgan McGuire (a creator of SAO), who replied: "All of the code is open source in G3D (http://g3d.sf.net), and the self-shadowing artifacts are because they're probably using vertex normals instead of face normals. If the implementation used face normals OR increased the bias term, that artifact would not occur. All of this is discussed in the original paper, which isn't linked in the thread."
@bhouston Thanks for your reply. I think you could try adding a z-bias value to help solve the artifacts. @erich666 What do face normals mean? In McGuire's demo, he uses the screen-space normal, which is generated by cross(dFdy(C), dFdx(C)). I think it is similar to a tangent-space normal.
In the G3D demo, C is the reconstructed view(?) space position, and cross(dFdy(C), dFdx(C)) is the view-space normal (computed from the geometry). The resulting normal will be constant within each triangle (hence "face normals") and consistent with the contents of the z-buffer.
If you instead use interpolated vertex normals (as you do for Phong lighting, for example), the normal will vary within each triangle and make it look as if the triangle were curved. This is inconsistent with the geometry as stored in the z-buffer, and will therefore lead to SSAO artifacts.
Both of your shaders use the same normal calculation as the G3D demo, so I don't think this is the problem. When I look closely at the 90-sample image from the paper, I can see similar artifacts on the front car.
What if you adjust tDepth by the bumpmap texture height when generating it?
Acting as hole in the wall, I'm posting Morgan's response: "The code in G3D uses interpolated vertex normals and then biases the AO. This gives smooth surfaces and high quality at silhouettes, at the expense of missing some occlusion and having a scene-specific constant.
Sponza is the worst case--the ceiling arches are really low polygon and get darkness in the corners, but if you bias it enough to hide that, then you lose all of the fine detail on the columns. Possible take-home: use approximately uniform tessellation on curves."
I'm betting if you wrote him questions directly, he'd respond (who doesn't like to see their work get implemented?). As a bonus, I could stop being the bearer of messages.
Thanks @erich666! Sorry for the slow reply; I was at a conference yesterday. I've been working from Morgan's reference source as well. I find the bias parameter super finicky and scene-dependent. I am also convinced there is a real bug in my code that is adding some serious view dependence.
But the results now look like this, which, apart from a few areas that seem too dark, is fairly decent:
More from Morgan, about the latest test tractor image. Boy, so many subtleties...
That looks like there are a few problems:
@bhouston Thanks for your notice. I corrected these lines and found the result is better:
float ao = 1.0 - (sum / float(NUM_SAMPLES));
ao = 1.0-clamp(pow(ao, 1.0 + 100.0), 0.0, 1.0);
I am still working on tuning parameters. I have no idea why the dark side is not a smooth gradient. I wonder whether it is worth trying an orthographic projection matrix in place of the perspective one when computing the depth map; in my past experience, an orthographic matrix generates more precise depth values for shadow mapping and some post effects.
This is really impressive! Has there been any progress since the last update in May? I'd love to try this out if the latest code could be put somewhere.
Any news on this?
Attempt on a PR for this here: #8605
Implemented via #11458