OmarShehata / webgl-outlines

Implementation of a post process outline shader in ThreeJS & PlayCanvas.
MIT License

InstancedMesh support #16

Open agviegas opened 1 year ago

agviegas commented 1 year ago

Hey, fantastic work here!

Just so you know: I tried adding a simple InstancedMesh to the scene of your example, and it looks like the image below.

const box = new THREE.BoxGeometry();
const material = new THREE.MeshLambertMaterial({ color: "purple" });
const mesh = new THREE.InstancedMesh(box, material, 2);
const position = new THREE.Matrix4();
mesh.setMatrixAt(0, position);
position.setPosition(2, 0, 0);
mesh.setMatrixAt(1, position);
addSurfaceIdAttributeToMesh(mesh);
scene.add(mesh);

[screenshot]

OmarShehata commented 1 year ago

@agviegas thank you for the bug report! It should work if you use "Outlines V1".

I think the issue with outlines V2 is that the code that computes the surface IDs doesn't take into account instancing.
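As a hypothetical sketch of that fix (illustrative names, not the repo's actual code): if each surface gets an integer ID, the IDs could be offset by the instance index so that two instances of the same geometry never share an ID:

```javascript
// Hypothetical sketch: offset each instance's surface IDs by the instance
// index so instances of the same geometry get distinct IDs. The function
// name and the scheme are illustrative, not the repo's implementation.
function instanceSurfaceId(surfaceId, instanceIndex, maxSurfaceIdPerMesh) {
  return surfaceId + instanceIndex * maxSurfaceIdPerMesh;
}

// Two instances of the same box never collide:
const a = instanceSurfaceId(3, 0, 100); // instance 0, surface 3 -> 3
const b = instanceSurfaceId(3, 1, 100); // instance 1, surface 3 -> 103
```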

agviegas commented 1 year ago

Awesome, thanks a lot for the reply! šŸ™‚ The main limitation of V1 is parallel faces, right? If I understood correctly, since normal vectors carry no information about position in the scene, it's hard to distinguish between two faces that lie in planes that are parallel but not coplanar. Here are some ideas I came across; sorry in advance if they're not relevant or if you've already been through them. TL;DR: have you thought about using planes instead of only normals?

Have you considered using the plane of the face instead of just the normal vector? That way you would have information not only about the orientation of the face but also about its position, and the artifacts of V1 would go away. Additionally, you wouldn't need to precompute face IDs.

The equation of a plane looks like this:

$ax + by + cz = d$

Which you can easily get from a vector $\vec n = (a, b, c)$ and a point $p = (x_0, y_0, z_0)$:

$a(xāˆ’x_0)+b(yāˆ’y_0)+c(zāˆ’z_0)=0$

The vector $\vec n$ is the normal vector of that face, and the point $p$ should be easily available for each texel of the fragment shader.
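Expanding the point–normal form above recovers $d$ directly as a dot product:

$ax + by + cz = ax_0 + by_0 + cz_0$, i.e. $d = \vec n \cdot p$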

Finally, to render this "plane pass" as a color in a post-processing pass, we need to reduce it from 4 components to 3 (r, g, b). The $d$ in the plane equation is the signed distance of the plane to the origin, and $\vec n = (a, b, c)$ is a unit vector, so multiplying $\vec n$ by $d$ reduces the plane to 3 components (usable as RGB).

To normalize that new vector, we could take the $d$ of the plane farthest from the origin and divide all the vectors by it. That way we would end up with a pass similar to your normal pass, but one that distinguishes between parallel non-coplanar faces.
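A minimal JavaScript sketch of this encoding (illustrative names, not the repo's code): with a unit normal $\vec n$ and a surface point $p$, the signed distance is $d = \vec n \cdot p$, and $\vec n \, d$ packs the plane into three components:

```javascript
// Illustrative sketch of the proposed plane encoding; names are hypothetical.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Signed distance to the origin of the plane through p with unit normal n.
const planeDistance = (n, p) => dot(n, p);

// Pack the plane into 3 components by scaling the unit normal by d.
const encodePlane = (n, p) => n.map((c) => c * planeDistance(n, p));

// Two coplanar points produce the same encoding; parallel but
// non-coplanar planes produce different ones:
encodePlane([0, 1, 0], [5, 2, 7]); // -> [0, 2, 0]
encodePlane([0, 1, 0], [1, 2, 3]); // -> [0, 2, 0]  (same plane y = 2)
encodePlane([0, 1, 0], [1, 4, 3]); // -> [0, 4, 0]  (parallel plane y = 4)
```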

Again, thanks a lot for the fantastic work. Cheers!

OmarShehata commented 1 year ago

@agviegas I had not heard of this approach before! It sounds intriguing, and not too difficult to implement. I'd love to see it in action. I don't know when I'll get a chance to give it a shot, but I'm happy to point you in the right direction.

This would be really exciting if it works as well as it sounds, because it would fix a lot of these artifacts "for free", without having to worry about the additional buffer or manually tweaking geometry in Blender, etc.

agviegas commented 1 year ago

@OmarShehata thanks a lot for the pointers! I tried it out and the results improved significantly, but they are not perfect yet (see screenshots below: before - after). I'll keep you updated with further findings!

[screenshots: before / after]

agviegas commented 1 year ago

@OmarShehata I've had the chance to experiment a bit more. Now I am separating them into two different render targets and using both to compute the edges (this can probably be optimized; I'll run further tests). I've also rewritten the fragment shader: the depth buffer was no longer needed, which simplified it quite a lot. Since the plane distance pass acts as a "smarter" depth buffer, I could also remove all the bias and multiplier factors from the fragment shader.

This is how it looks now. I think there's still room for improvement. I've started to learn WebGL recently, so please forgive any blunders! šŸ™‚

uniform sampler2D sceneColorBuffer;
uniform sampler2D surfaceBuffer;       // per-texel surface normals
uniform sampler2D planeDistanceBuffer; // per-texel plane distance, encoded as n * d
uniform vec4 screenSize;               // zw = 1 / resolution
uniform vec3 outlineColor;
uniform int width;                     // outline width in texels
uniform float tolerance;               // edge-detection threshold

varying vec2 vUv;

vec3 getValue(sampler2D buffer, int x, int y) {
    return texture2D(buffer, vUv + screenSize.zw * vec2(x, y)).rgb;
}

// Maps the angle between two unit normals into [0, 1]:
// identical -> 0, perpendicular -> 0.5, opposite -> 1.
float normalDiff(vec3 normal1, vec3 normal2) {
    return ((dot(normal1, normal2) - 1.) * -1.) / 2.;
}

void main() {
    vec4 sceneColor = texture2D(sceneColorBuffer, vUv);

    vec3 normal = getValue(surfaceBuffer, 0, 0);
    vec3 distance = getValue(planeDistanceBuffer, 0, 0);

    // Sample the normal buffer around the current texel.
    vec3 normalTop = getValue(surfaceBuffer, 0, width);
    vec3 normalBottom = getValue(surfaceBuffer, 0, -width);
    vec3 normalRight = getValue(surfaceBuffer, width, 0);
    vec3 normalLeft = getValue(surfaceBuffer, -width, 0);

    vec3 normalTopRight = getValue(surfaceBuffer, width, width);
    vec3 normalTopLeft = getValue(surfaceBuffer, -width, width);
    vec3 normalBottomRight = getValue(surfaceBuffer, width, -width);
    vec3 normalBottomLeft = getValue(surfaceBuffer, -width, -width);

    // Sample the plane-distance buffer around the current texel.
    vec3 distanceTop = getValue(planeDistanceBuffer, 0, width);
    vec3 distanceBottom = getValue(planeDistanceBuffer, 0, -width);
    vec3 distanceRight = getValue(planeDistanceBuffer, width, 0);
    vec3 distanceLeft = getValue(planeDistanceBuffer, -width, 0);

    vec3 distanceTopRight = getValue(planeDistanceBuffer, width, width);
    vec3 distanceTopLeft = getValue(planeDistanceBuffer, -width, width);
    vec3 distanceBottomRight = getValue(planeDistanceBuffer, width, -width);
    vec3 distanceBottomLeft = getValue(planeDistanceBuffer, -width, -width);

    // Accumulate normal differences against the 8 neighbors.
    float depthDiff = 0.0;
    depthDiff += normalDiff(normal, normalTop);
    depthDiff += normalDiff(normal, normalBottom);
    depthDiff += normalDiff(normal, normalLeft);
    depthDiff += normalDiff(normal, normalRight);
    depthDiff += normalDiff(normal, normalTopRight);
    depthDiff += normalDiff(normal, normalTopLeft);
    depthDiff += normalDiff(normal, normalBottomRight);
    depthDiff += normalDiff(normal, normalBottomLeft);

    // Accumulate plane-distance discontinuities against the 8 neighbors.
    depthDiff += step(0.001, abs((distance - distanceTop).x));
    depthDiff += step(0.001, abs((distance - distanceBottom).x));
    depthDiff += step(0.001, abs((distance - distanceLeft).x));
    depthDiff += step(0.001, abs((distance - distanceRight).x));
    depthDiff += step(0.001, abs((distance - distanceTopRight).x));
    depthDiff += step(0.001, abs((distance - distanceTopLeft).x));
    depthDiff += step(0.001, abs((distance - distanceBottomRight).x));
    depthDiff += step(0.001, abs((distance - distanceBottomLeft).x));

    float outline = step(tolerance, depthDiff);

    // Mask out the background: texels where the normal is zero in all
    // three components belong to no surface, so they get no outline.
    float background = 1.0;
    vec3 absNormal = abs(normal);
    background *= step(absNormal.x, 0.);
    background *= step(absNormal.y, 0.);
    background *= step(absNormal.z, 0.);
    background = (background - 1.) * -1.;
    outline *= background;

    vec4 color = vec4(outlineColor, 1.);
    gl_FragColor = mix(sceneColor, color, outline);
}
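The shader's normalDiff can be sanity-checked outside the GPU; a direct JavaScript port shows the intended mapping (identical normals → 0, perpendicular → 0.5, opposite → 1):

```javascript
// JavaScript port of the shader's normalDiff, for sanity checking only.
const dot3 = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Maps the angle between two unit normals into [0, 1]: (1 - cos theta) / 2.
const normalDiff = (n1, n2) => ((dot3(n1, n2) - 1) * -1) / 2;

normalDiff([0, 0, 1], [0, 0, 1]);  // -> 0    (same orientation)
normalDiff([0, 0, 1], [1, 0, 0]);  // -> 0.5  (perpendicular)
normalDiff([0, 0, 1], [0, 0, -1]); // -> 1    (opposite)
```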

With a white outline color, width 1, and tolerance 2, it looks like this:

[screenshot]

agviegas commented 1 year ago

Hey, I just cleaned things up, managed to bring both the normal and the plane distance into a single ShaderMaterial (so only one extra render is needed, as before), and improved the precision of the outlines. Of course, there's still room for improvement, but I think this looks much better now šŸ™‚

[screenshots]

OmarShehata commented 1 year ago

Looks amazing!! Would you like to open a PR, or make your own fork that I can link to from mine? (I don't want to take credit for your work & idea!) It would be cool to have an article explaining the technique and the insight behind it. I'm also curious, if you were to share it on Twitter or other graphics communities, whether this is a known technique or something no one has tried on the web before.

agviegas commented 1 year ago

Thanks a lot! I can make a PR. Would you rather make this an improvement of V1, or call it V3? šŸ™‚ I couldn't have done it without your previous discoveries. Regarding the article: I'm aware you already wrote two articles on Medium about V1 and V2. If you'd like to co-publish another one there (as a natural sequel to the other two), I'm up for it. Otherwise, I'm open to other ideas.

OmarShehata commented 1 year ago

I can make a PR. Would you rather make this an improvement of V1, or call this V3?

I think it might be easiest at this point to make it as a copy of the folder, to keep it clean/easy to read the source code, kind of like the "vertex welder" example:

https://github.com/OmarShehata/webgl-outlines/tree/main/vertex-welder

Out of curiosity, do you have a debug rendering of the "plane distance" buffer? It would be interesting to compare it to the depth buffer that was used before. I think you're right that it is essentially acting as a depth buffer, and maybe one reason it works better is that normal depth buffers don't encode distance linearly. This might make a difference in scenes with far-away geometry like mountains (I'm curious what it would look like on scenes like this: https://twitter.com/ianmaclarty/status/1499495014082441218)

agviegas commented 1 year ago

I think it might be easiest at this point to make it as a copy of the folder

Got it! I can do a PR like that. I have also prepared it to make it compatible with instanced meshes.

do you have a debug rendering of the "plane distance" buffer?

[screenshot: plane distance buffer]

If I understood this correctly, the plane distance guarantees that all the pixels in the same plane have the same color (since it measures the minimum signed distance of the plane to the origin), while the depth buffer only measures the distance of each pixel to the camera, so you can find many tones within the same plane (making it harder to distinguish between surfaces).
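A small numeric illustration of that distinction (hypothetical numbers, not the repo's code): two points on the same plane share one plane distance, but have different depths from the camera:

```javascript
// Illustration: points on the plane y = 2, which has unit normal (0, 1, 0).
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a, b) => a.map((c, i) => c - b[i]);
const len = (v) => Math.hypot(v[0], v[1], v[2]);

const n = [0, 1, 0];
const p1 = [0, 2, 0];
const p2 = [10, 2, 0];

// Same plane distance for every point on the plane:
dot(n, p1); // -> 2
dot(n, p2); // -> 2

// ...but different depths from a camera at (0, 2, 5):
const camera = [0, 2, 5];
len(sub(p1, camera)); // -> 5
len(sub(p2, camera)); // -> ~11.18
```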

The current implementation is quite primitive, because the plane distance is computed relative to the origin of the scene. This could be improved by computing the plane distance relative to the camera, making it more robust when the camera is far from the origin.

Regarding far-away distances, I would also like to see how it behaves. It is likely that when the plane distance exceeds the value that a pixel can store, the current implementation will stop working, but I'm sure we can find solutions (e.g. maybe big distances could be handled by the depth buffer and short distances by the plane distance?).
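The camera-relative variant suggested above could look like this (a hypothetical sketch, not the thread's actual code): measuring the plane distance from the camera position keeps the stored values small even when the scene sits far from the origin:

```javascript
// Hypothetical sketch: plane distance measured from the camera position
// instead of the scene origin. Names are illustrative.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a, b) => a.map((c, i) => c - b[i]);

const planeDistToOrigin = (n, p) => dot(n, p);
const planeDistToCamera = (n, p, cam) => dot(n, sub(p, cam));

// A floor at y = 10001 viewed by a camera hovering at y = 10000:
const n = [0, 1, 0];
const p = [3, 10001, -2];
const cam = [0, 10000, 5];

planeDistToOrigin(n, p);      // -> 10001 (may exceed buffer precision)
planeDistToCamera(n, p, cam); // -> 1     (stays small near the camera)
```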

agviegas commented 1 year ago

More discoveries. I added the mesh colors to the computation, and the results got a lot better, even for coplanar faces šŸ™‚ This is how it looks now for buildings. Curtain walls are still a bit challenging, but I'm happy with the progress so far.

[screenshots]

OmarShehata commented 1 year ago

(as it measures the minimum signed distance of the plane to the origin), while the depth buffer only measures the distance of each pixel to the camera, so you can find many tones within the same plane (making it harder to distinguish between them).

Ah, this is a very subtle but important distinction! I think this helps a lot in removing unwanted lines inside the same surface (similar to the benefit you get from the surface ID color, but solved in a different way).

The screenshots look great, can't wait to play with this in a live demo!

RodrigoHamuy commented 7 months ago

This looks amazing!! Brilliant work @agviegas and @OmarShehata!! Do you know when this could be released? :grimacing: Thanks!!

agviegas commented 7 months ago

Thanks @RodrigoHamuy! I never had the time to make a PR, but you can check out the code here (everything is inside the postproduction object). These weeks are crazy, so I don't think I'll have time to do it myself, but if you want to do it, I'll be happy to answer any questions you have :)

christiandimitri commented 3 weeks ago

More discoveries. I added the mesh colors to the computation and the results got a lot better, even for coplanar faces šŸ™‚ this is how it looks like now for buildings. Curtain walls are a bit challenging yet, but I'm happy with the progress so far.


Hi @agviegas, thanks for pushing this repo forward to fit engine fragments. I am actually developing a tool on top of yours, and I am missing the outline implementation to achieve the prototype's goal. Last week I came across @OmarShehata's article on better outlines with post-processing, and reading the issues led me to this thread. Could you please share the source code of how you integrated it with the Object3D and fragment meshes loaded from the fragment loader?

That would really help me. Cheers!

agviegas commented 2 weeks ago

Sure! You can find the source here, and you can see it in action here. I'm applying multiple effects (you can see them here, and specifically here). Let me know if you have any questions!

christiandimitri commented 2 weeks ago

@agviegas thanks man šŸ«¶