Closed WestLangley closed 12 years ago
We had it like this before, but then I read from game developers that it's better to do normalization only in fragment shaders, so I dropped the normalizations from vertex shaders.
> The only exception would be if it is known in advance that the incoming normals are of unit length
This should be guaranteed; we do normalization when creating normals.
> and the `normalMatrix` is a pure rotation matrix.
This I'm not sure about -- is this about not having non-uniform scaling?
I think the game development community in general tries to avoid non-uniform scaling as much as possible because it complicates things. I guess a lot of our code could be based on the assumption that there are no non-uniform scaling transformations.
> This I'm not sure about -- is this about not having non-uniform scaling?
Yes.
> I guess a lot of our code could be based on the assumption that there are no non-uniform scaling transformations.
It is. I am in the process of fixing it.
> I read from game developers that it's better to do normalization just in fragment shaders.
Perhaps they were assuming the `normalMatrix` is a pure rotation matrix? Can you recollect the reference?
> Perhaps they were assuming the `normalMatrix` is a pure rotation matrix? Can you recollect the reference?
I'm trying to find it ...
So far I have found just this (which seems to concur with you):
http://www.lighthouse3d.com/tutorials/glsl-tutorial/normalization-issues/
Though I feel it was also mentioned somewhere else. It was persuasive enough at the time that I made the change.
Not sure if it was "let's just wing it because it looks good enough and it's faster" or "make sure you don't break assumptions and you will be good".
Game developers often make these kinds of trade-offs and sacrifice 100% correctness for 100% use-case coverage if it makes something a few percent faster for the majority of use cases.
BTW while at it, I found this, which I think we should use ;)
http://www.iquilezles.org/www/articles/normals/normals.htm
(I was already bothered that we don't scale normal contributions by triangle areas, and this is just such an elegant solution.)
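For reference, iq's trick can be sketched in a few lines of plain JavaScript (a hypothetical standalone helper, not the actual three.js code): since the cross product of two triangle edges has length equal to twice the triangle's area, accumulating the *unnormalized* cross products gives area weighting for free.

```javascript
// Sketch under assumed flat-array layout: positions = [x0,y0,z0, x1,y1,z1, ...],
// indices = triples of vertex indices per triangle.
function computeAreaWeightedNormals(positions, indices) {
  const normals = new Float32Array(positions.length);
  for (let i = 0; i < indices.length; i += 3) {
    const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
    // Two edges of the triangle, both rooted at vertex a.
    const e1x = positions[b] - positions[a];
    const e1y = positions[b + 1] - positions[a + 1];
    const e1z = positions[b + 2] - positions[a + 2];
    const e2x = positions[c] - positions[a];
    const e2y = positions[c + 1] - positions[a + 1];
    const e2z = positions[c + 2] - positions[a + 2];
    // Unnormalized face normal = e1 x e2; its length is 2 * triangle area,
    // so simply summing it weights each face's contribution by area.
    const nx = e1y * e2z - e1z * e2y;
    const ny = e1z * e2x - e1x * e2z;
    const nz = e1x * e2y - e1y * e2x;
    for (const v of [a, b, c]) {
      normals[v] += nx;
      normals[v + 1] += ny;
      normals[v + 2] += nz;
    }
  }
  // One normalization per vertex at the very end.
  for (let v = 0; v < normals.length; v += 3) {
    const len = Math.hypot(normals[v], normals[v + 1], normals[v + 2]) || 1;
    normals[v] /= len;
    normals[v + 1] /= len;
    normals[v + 2] /= len;
  }
  return normals;
}
```

The point is that no per-face normalize, area computation, or angle computation is needed at all; the accumulation loop does the weighting implicitly.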
I admit that making this change may not make a discernible visual difference. That is one reason not to do it.
But if it were me, and there was not a significant performance penalty, I'd go for correctness.
Regarding the iquilezles article, it is elegant. It's also different, as the vertex normals are effectively weighted. He says it gives "better quality vertex normals". I wonder what "better" means?
I was able to find this commit:
https://github.com/alteredq/three.js/commit/370d20b4074e201cfb7f1294135ddc408c9f92d5
And I remembered this fixed some horrible artefacts on some computers. So it's possible this was also a workaround for some buggy drivers.
I don't know anymore; it's probably worth re-testing with and without normalization in the vertex shader to see the effect on artefacts and performance (I'd guess the difference would be very small for both).
> Regarding the iquilezles article, it is elegant. It's also different, as the vertex normals are effectively weighted. He says it gives "better quality vertex normals". I wonder what "better" means?
That's what intrigues me ;)
When I learned you could weight normals by area, I was curious how it would look, but I was too lazy to implement it to see the difference. Now, with this method, it should be easy to try.
(Again, it was in the context of game developers discussing normals, saying "why do you bother about 100% correctness if you don't even weight vertex normals by triangle areas?")
Hmmm. There is a compelling argument for weighting a vertex normal by inverse area, too -- as a vertex normal should look most like the face normals to which it is closest.
Oh, this is a rabbit hole:
http://webserver1.tricity.wsu.edu/cs/boblewis/pdfs/2003_vertnorm_tvc.pdf
http://www.bytehazard.com/code/vertnorm.html
http://meshlabstuff.blogspot.sk/2009/04/on-computation-of-vertex-normals.html
http://www.polycount.com/forum/showthread.php?t=85809
Seems like there are many possible ways of computing vertex normals, and which is best depends on the nature of the model.
But weighting by area can bring noticeable improvements over not doing any weighting, and with iq's method it should also be fast (it's also the default method in MeshLab).
So I wanted to try and then I found this:
https://github.com/alteredq/three.js/blob/dev/src/core/BufferGeometry.js#L165
I had already reinvented iq's method for `BufferGeometry` and then completely forgot about it ;).
I tried it now also with the regular `Geometry`, and while not perfect, it does seem to help quite a lot, at least for some models.
This is `TextGeometry` with unweighted vertex normals:
And this is `TextGeometry` with area-weighted vertex normals:
For comparison, here is how it looks when front and side normals are computed separately, with the front normals being flat-shaded face normals:
(this is what's used in the text example currently, to get around ugly-looking vertex normals when the whole geometry is considered together)
This thread took a left-hand turn somewhere. :-)
The area-weighted vertex normals do look nice.
OK, so I'll commit the area-weighting change. We'll see how it works for more use cases; we can revert if there are problems.
Sorry for the thread derail ;)
For normalization in the vertex shader, we can try it, and if there is no noticeable slowdown, keep it.
> For normalization in the vertex shader, we can try it, and if there is no noticeable slowdown, keep it.
OK, sounds good. I think the shader list in my original post is accurate.
So I checked, and it seems the impact of normalization in vertex shaders on performance is negligible:
ANGLE:

| demo | current | normalized |
| --- | --- | --- |
| skin girl | ~134 fps | ~133 fps |
| bumpmap skin | ~127 fps | ~127 fps |
| pointlights2 | ~41 fps | ~41 fps |
| geo large | ~134 fps | ~134 fps |
| perf static | 62-65 fps | 63-65 fps |
| ninja | ~188 fps | ~188 fps |

OpenGL:

| demo | current | normalized |
| --- | --- | --- |
| skin girl | ~137 fps | ~135 fps |
| bumpmap skin | ~116 fps | ~114 fps |
| pointlights2 | ~58 fps | ~58 fps |
| geo large | ~147 fps | ~147 fps |
| perf static | 67-72 fps | 68-73 fps |
| ninja | ~208 fps | ~204 fps |
I guess we can keep them.
Thanks, @alteredq.
Maybe it would also be good to do these tests on Firefox for Android. What looks negligible on a "high end" GPU may not be so on a low-end one.
I'm definitely in favour of this change; just a thought for future performance tests.
> Maybe it would also be good to do these tests on Firefox for Android. What looks negligible on a "high end" GPU may not be so on a low-end one.
Good point. Though performance on Android is so bad at the moment that it kind of doesn't make sense to cripple desktop use cases for it.
I wonder how we'll deal with this. The performance gap between mobile and desktop will be massive for a long time to come. Making something run well enough on mobile means severely cutting down possibilities on the desktop.
Maybe we should have "high" and "low" quality shaders.
No no, I meant that it would be good to see how it affects mobile. For example, it would be good to know if, say, this change went from 10 fps to 5 fps on mobile. It doesn't matter that it doesn't reach 25 fps.
That's kinda what I meant - I'm not very motivated to spend much time on mobile if it's too slow in any case ;)
Testing alone is a pain; testing on multiple platforms on a single computer is worse; testing on multiple devices is worse still.
It doesn't make much sense to optimize the experience for HW + SW that is too underpowered. Usable WebGL on mobile will come eventually, but by then I expect the performance profile will be different (e.g. Firefox on a Galaxy Nexus vs. Chrome on a Nexus 4 could be completely different experiences).
If a vertex normal is transformed by the `normalMatrix` in the vertex shader, the vertex normal needs to be normalized before passing it as a `varying` to the fragment shader. The only exception would be if it is known in advance that the incoming normals are of unit length, and the `normalMatrix` is a pure rotation matrix.

The reason this is important is that the `normalMatrix` can change the normal vector's length -- it always gets the direction correct, but the length is arbitrary. (In fact, it is worse than that -- each normal vector of a primitive can end up having a different length.)
If the normal vectors are not re-normalized, the interpolation of the normal vectors across the primitive's face in the fragment shader will result in incorrect weightings.
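A quick numeric illustration of that weighting problem (hypothetical numbers, not from the post): when two varyings of unequal length are linearly interpolated by the rasterizer, the result is biased toward the longer one.

```javascript
// Linear interpolation and normalization of 3-vectors.
function lerp3(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t, a[2] + (b[2] - a[2]) * t];
}
function normalize3(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// Two vertex normals pointing +z and +x; suppose the second came out of the
// normalMatrix transform with length 3.
const nA = [0, 0, 1];
const nB = [3, 0, 0];

// Halfway across the face the rasterizer interpolates the raw varyings,
// so the direction is skewed toward nB:
const mid = normalize3(lerp3(nA, nB, 0.5));

// With unit-length inputs the midpoint direction is the expected 45 degrees:
const midUnit = normalize3(lerp3(nA, normalize3(nB), 0.5));
```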
So, in the vertex shader, we need to always do something like this:
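The code snippet appears to have been lost from this post; in a three.js vertex shader (where `normal`, `position`, `normalMatrix`, `modelViewMatrix`, and `projectionMatrix` are built-ins) the step in question typically looks something like this sketch:

```glsl
varying vec3 vNormal;

void main() {
    // transform the normal and re-normalize before passing it on,
    // since normalMatrix can change the vector's length
    vNormal = normalize( normalMatrix * normal );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```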
Then, in the fragment shader, `vNormal` needs to be normalized again.

Although we have been consistent in (2), normalizing in the fragment shader, we have not been consistent in (1), normalizing in the vertex shader.
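The fragment-shader side might look like the following (again a sketch, not the exact committed code):

```glsl
varying vec3 vNormal;

void main() {
    // the interpolated varying is no longer unit length,
    // so normalize it again before any lighting math
    vec3 normal = normalize( vNormal );
    // ... lighting calculations using `normal` ...
}
```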
For example, the `MeshPhongMaterial` shader does not normalize properly in the vertex shader. Neither does the `MeshNormalMaterial` shader, the Normal-Map Shader, or the Simple-Skin shader. There may be other cases in the examples directory, but I haven't hunted them all down.