github / webgl-loader

Automatically exported from code.google.com/p/webgl-loader

Normal prediction #2

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Normals are often predictable from the positions.

1) Initialize per-vertex normals to zero.
2) For each triangle:
   a) compute the cross product of its two edge vectors
   b) add that cross product to each of the triangle's three vertex normals

This will give you a per-vertex area-weighted normal. Not sure if it needs to 
be normalized at this point (think about what happens when it is interpolated 
per-fragment).
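
Here's a minimal sketch of that accumulation in JavaScript (function and
parameter names are illustrative only, not the actual walt.js code):

// Area-weighted vertex normals: accumulate each face's cross product
// into the normals of its three vertices.
// positions: Float32Array of xyz triples.
// indices:   Uint16Array/Uint32Array of vertex indices, 3 per triangle.
function accumulateNormals(positions, indices) {
  const normals = new Float32Array(positions.length);  // zero-initialized
  for (let i = 0; i < indices.length; i += 3) {
    const a = 3 * indices[i], b = 3 * indices[i + 1], c = 3 * indices[i + 2];
    // Edge vectors: e1 = B - A, e2 = C - A.
    const e1x = positions[b] - positions[a];
    const e1y = positions[b + 1] - positions[a + 1];
    const e1z = positions[b + 2] - positions[a + 2];
    const e2x = positions[c] - positions[a];
    const e2y = positions[c + 1] - positions[a + 1];
    const e2z = positions[c + 2] - positions[a + 2];
    // Cross product e1 x e2; its length is twice the triangle's area,
    // so summing it unnormalized gives the area weighting for free.
    const nx = e1y * e2z - e1z * e2y;
    const ny = e1z * e2x - e1x * e2z;
    const nz = e1x * e2y - e1y * e2x;
    for (const v of [a, b, c]) {
      normals[v] += nx;
      normals[v + 1] += ny;
      normals[v + 2] += nz;
    }
  }
  return normals;  // unnormalized; renormalize per-fragment in the shader
}

Since the interpolated normal has to be renormalized per-fragment anyway,
skipping normalization here mostly changes how neighboring normals are
weighted during interpolation.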

Quick enhancement: instead of initializing normals to 0, initialize them to 
some delta from the prediction. Actually, maybe this has to be done as a 
post-normalization fixup...

This technique will make existing compression slower, so it should be 
optional. However, it is also orthogonal to how you decode the rest of the 
mesh. Obviously, the benefit is that you spend fewer bits encoding normals.

Original issue reported on code.google.com by wonchun on 13 Aug 2011 at 3:02

GoogleCodeExporter commented 9 years ago
r66 started this by adding this demo:

http://webgl-loader.googlecode.com/svn/trunk/samples/walt.html

Client-side normal computation:

http://code.google.com/p/webgl-loader/source/browse/trunk/samples/walt.js

Original comment by wonchun on 15 Oct 2011 at 6:31

GoogleCodeExporter commented 9 years ago
I noticed Walt has some weird artefacts. They are mostly noticeable at the back 
of the head, though they are also elsewhere.

Seems like seams between buffers (coming from large geometry split into 
multiple chunks), as if normals were computed just per chunk, without taking 
into account vertices in other chunks.

Original comment by postfil...@gmail.com on 27 Oct 2011 at 3:45

GoogleCodeExporter commented 9 years ago
Yeah, this is not really folded into the main library, and is just the demo for 
Walt right now, which has no model normals (at least in the version I grabbed 
from mr.doob).

But what you say is basically what's going on. Actually, texture seams would 
also do it. I'm not sure how efficient it would be to do any better than this 
(finding matching positions without depending on matching indices). What I was 
planning on doing was supporting this with refinement code. This could mean 
either or both of:

 1. sending per-vertex normal deltas from the predicted normals
 2. encoding object-space normal maps as per-texel deltas from the interpolated, predicted normals

The first would be useful for dealing with parts that are grossly off, like the 
buffer/texcoord seams (also, various configurations of misbehaved, weird-area, 
anisotropic triangles). Instead of putting all the pressure on the just-in-time 
prediction code to be right, I can think of the deltas as a precision 
optimization for a refinement pass. Even when things are working right, I think 
you can still see some faceting.
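
For option 1, the decode side might look something like this sketch (all 
names are hypothetical; the delta encoding itself is out of scope here):

// Refine predicted per-vertex normals with small decoded deltas.
// predicted: Float32Array of accumulated (unnormalized) predictions.
// deltas:    Float32Array of decoded corrections, same xyz layout.
function refineNormals(predicted, deltas) {
  const refined = new Float32Array(predicted.length);
  for (let i = 0; i < predicted.length; i += 3) {
    const x = predicted[i], y = predicted[i + 1], z = predicted[i + 2];
    // Normalize the prediction first, so deltas correct a unit vector
    // (the "post-normalization fixup" idea from the original report).
    const len = Math.sqrt(x * x + y * y + z * z) || 1;
    refined[i]     = x / len + deltas[i];
    refined[i + 1] = y / len + deltas[i + 1];
    refined[i + 2] = z / len + deltas[i + 2];
  }
  return refined;
}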

The second, I think, is the really interesting bit. There is a problem in how 
to encode normal maps over HTTP. PNG is expensive, JPEG has blocking artifacts, 
and neither really has enough precision for high-end visualization (you want 
9-10 bits). However, we can encode deltas to be much smaller in magnitude. So, 
the per-vertex normals take care of the coarse encoding, and the normal maps 
take care of the fine encoding (maybe even use 16-bit textures!). Also, I think 
the delta normal maps mipmap better using the default generator.
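
To make the coarse/fine split concrete, reconstruction might look roughly 
like this (pure sketch; in practice this would live in the fragment shader, 
and the bias/scale encoding of the delta map is an assumption):

// Combine a coarse interpolated vertex normal with a fine per-texel
// delta sampled from an object-space delta normal map.
// coarse: interpolated normal, approximately unit length.
// texel:  delta map sample with channels in [0, 1], biased around 0.5.
// scale:  maximum delta magnitude; small because the prediction is close,
//         which is what buys back precision from 8- or 16-bit channels.
function applyNormalDelta(coarse, texel, scale) {
  const nx = coarse[0] + scale * (texel[0] - 0.5);
  const ny = coarse[1] + scale * (texel[1] - 0.5);
  const nz = coarse[2] + scale * (texel[2] - 0.5);
  const len = Math.sqrt(nx * nx + ny * ny + nz * nz) || 1;
  return [nx / len, ny / len, nz / len];
}

Because the deltas span a narrow range, each texture bit covers a finer 
angular step than it would in a raw normal map, which is where the extra 
effective precision comes from.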

Of course, this means you have to be using object-space normal maps rather than 
tangent-space normal maps, and that will certainly affect asset authorship. It 
is nice not to have to send a per-vertex coordinate frame. There are cheap ways 
to do it (e.g. send a quaternion) but some of those also affect the asset 
pipeline.

Original comment by wonchun on 27 Oct 2011 at 4:24

GoogleCodeExporter commented 9 years ago
Implemented in r99.

Original comment by wonchun on 18 Aug 2012 at 11:26