Tallefer / webgl-loader

Automatically exported from code.google.com/p/webgl-loader

Handle models with > 65535 vertices #8

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Currently the limit is actually even lower, as it seems new vertices can be created in 
the vertex optimization stage? (I had it fail on a model with 50K original vertices.)

This would be very nice; with such great compression it's very tempting to use 
larger models. On a decent GPU, WebGL can easily handle models with at least 
several hundred thousand triangles, so it's a pity to be limited by 16-bit indices.

Original issue reported on code.google.com by postfil...@gmail.com on 13 Aug 2011 at 2:08

GoogleCodeExporter commented 9 years ago
What's happening is that modelling-oriented formats don't use the same kind of 
indices as rendering-oriented formats; there's a separate index per attribute, as 
opposed to per vertex. For example, if you have a crease, then positions might 
get shared, but normals or texcoords might not. Part of the conversion is to 
flatten the indices, which can involve making duplicates.
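
Roughly, the flattening looks something like the following sketch (invented names, 
not the actual compressor code):

```js
// Sketch of the index flattening step. Each OBJ face corner is a
// [posIdx, uvIdx, normIdx] triple; a rendering vertex is created for every
// distinct triple, so a crease that splits normals also duplicates the
// shared position.
function flattenIndices(faces) {          // faces: array of 3-corner triples
  var vertexForTriple = {};               // "p/t/n" -> flattened vertex index
  var vertices = [];                      // unique [posIdx, uvIdx, normIdx]
  var indices = [];                       // per-vertex index buffer
  faces.forEach(function (face) {
    face.forEach(function (corner) {
      var key = corner.join('/');
      if (!(key in vertexForTriple)) {
        vertexForTriple[key] = vertices.length;
        vertices.push(corner);
      }
      indices.push(vertexForTriple[key]);
    });
  });
  return { vertices: vertices, indices: indices };
}
```

If every triple is distinct, the flattened vertex count is three times the triangle 
count, which is how a 50K-vertex OBJ can blow past 65535.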

In any case, I think after I do material batching, the easiest thing to do will 
be to simply emit multiple draw batches. I'll do this first.
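
The batch-splitting idea would be something like this sketch (again invented names, 
not actual code):

```js
// Sketch: split one big flattened mesh into draw batches that each stay
// under the 16-bit index limit, remapping global vertex indices to small
// per-batch indices.
function splitIntoBatches(indices, maxVerts) {
  maxVerts = maxVerts || 65536;
  var batches = [];
  var remap = {}, batchVerts = [], batchIndices = [];
  for (var i = 0; i < indices.length; i += 3) {
    var tri = [indices[i], indices[i + 1], indices[i + 2]];
    // Count how many new vertices this triangle would add to the batch.
    var missing = {};
    tri.forEach(function (v) { if (!(v in remap)) missing[v] = true; });
    if (batchVerts.length + Object.keys(missing).length > maxVerts) {
      batches.push({ vertexIds: batchVerts, indices: batchIndices });
      remap = {}; batchVerts = []; batchIndices = [];
    }
    tri.forEach(function (v) {
      if (!(v in remap)) {
        remap[v] = batchVerts.length;
        batchVerts.push(v);               // original vertex to copy into batch
      }
      batchIndices.push(remap[v]);        // 16-bit-safe local index
    });
  }
  if (batchIndices.length) {
    batches.push({ vertexIds: batchVerts, indices: batchIndices });
  }
  return batches;
}
```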

With a vertex cache hit rate of ~0.6, this should ideally work for models of 
~100k triangles, although that is pretty much a best-case scenario. It is worth 
looking at the triangle optimization algorithm to see if it is doing something 
silly. One defect I already know about is that the index deltas aren't using the 
full UTF-8 range, since the encoder isn't handling the surrogate pair range (like 
"length" is supposed to).

Would you mind sending me the offending OBJ (attachment or link, or off-thread 
e-mail)? That would definitely help me debug things.

Original comment by wonchun on 13 Aug 2011 at 8:14

GoogleCodeExporter commented 9 years ago
Here is an example OBJ with 50k vertices / 100k triangles that doesn't pass the 
vertex count CHECK. 

It's a decimated Lucy, originally from the Stanford 3D scan repository:

http://graphics.stanford.edu/data/3Dscanrep/

(I had to add dummy UVs, as the original model doesn't have texture coordinates 
while the compressor requires them.)

Original comment by postfil...@gmail.com on 14 Aug 2011 at 2:35

Attachments:

GoogleCodeExporter commented 9 years ago
Hm, this model has zero vertex sharing, for two reasons.

1) The normal indices are per face. "s off" is a bad sign. This alone is enough to 
eliminate all vertex sharing, since it means that if a position shows up in a 
different triangle, it won't share the normal, so it has to get duplicated.

I wonder if it makes sense to support a non-indexed triangle mode, in case this 
was intentional?

2) The texcoords are based on the position within the triangle. So, even if the 
normal issue were dealt with, there would still be a problem. For example, 
position 3/ shows up as the third vertex of the first triangle as 3/3/. It 
shows up again as the second vertex of the second triangle as 3/2/, which also 
prevents sharing.
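
A quick way to check this for any OBJ is to count distinct index triples among the 
face corners, e.g. with a throwaway script like this (illustrative only, not project 
code):

```js
// Throwaway diagnostic: measure vertex sharing in an OBJ by counting
// distinct v/vt/vn triples among all face corners.
function measureSharing(objText) {
  var corners = 0;
  var unique = {};
  objText.split('\n').forEach(function (line) {
    if (line.lastIndexOf('f ', 0) !== 0) return;        // face lines only
    line.trim().split(/\s+/).slice(1).forEach(function (triple) {
      corners++;
      unique[triple] = true;
    });
  });
  var uniqueVerts = Object.keys(unique).length;
  return { corners: corners, uniqueVertices: uniqueVerts,
           sharing: corners ? 1 - uniqueVerts / corners : 0 };
}
```

For this export, every corner triple is distinct, i.e. sharing is exactly zero.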

Anyway, I should make it so that this can support OBJ files without texcoords, and 
maybe automatically smooth normals? Meanwhile, maybe you can use different 
settings to decimate Lucy and see if you have more success.

Original comment by wonchun on 14 Aug 2011 at 4:19

GoogleCodeExporter commented 9 years ago
I guess Lucy is different because it originally came from a 3D scan, so you have 
just points and everything else is then generated by automated processes.

Though such brute-force generated models are maybe the best candidates for good 
compression ;).

Here is how we handle this particular model with the current three.js JSON 
pipeline: only vertices and faces are in the file, normals are generated per face 
from the geometry upon loading, and texture coordinates are not used at all.
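
The per-face normal generation is basically a cross product, roughly like this 
(illustrative only, not the actual three.js code):

```js
// Sketch: flat (per-face) normal from three positions, as a loader might
// compute when the file carries no normals.
function faceNormal(a, b, c) {            // a, b, c: [x, y, z]
  var u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  var v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  var n = [u[1] * v[2] - u[2] * v[1],     // cross(u, v)
           u[2] * v[0] - u[0] * v[2],
           u[0] * v[1] - u[1] * v[0]];
  var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]) || 1;
  return [n[0] / len, n[1] / len, n[2] / len];
}
```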

We do vertex duplication all the time (all normals going to the WebGL layer are 
per face-use of a vertex, with no sharing except within quads; it's not optimal for 
performance, but it imposes no constraints on the model), and if indices overflow 
16 bits, we split into multiple buffers.

The UTF8 format is actually a bit too smart for us: it tries to solve the whole 
path from model to WebGL, down to the actual GL buffers. But that would make the 
model completely invisible to our API, so we first need to stuff the data into our 
own data structures and then generate the buffers ourselves.

We also do things like automatic computation of normals (if the user wants it), 
though this is not ideal. For flat shading it's OK, but for smooth shading you 
generally want to preserve whatever normals the artist created in the 3D modelling 
application; you can't reconstruct hand-crafted normals automatically.
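
The automatic smoothing amounts to averaging face normals over each shared 
position, roughly like this sketch (again illustrative, and it reconstructs *a* 
smoothing, not the artist's original normals):

```js
// Sketch: smooth vertex normals by accumulating unnormalized face normals
// at every shared position, then normalizing the sums.
function smoothNormals(positions, indices) {   // positions: [[x, y, z], ...]
  var accum = positions.map(function () { return [0, 0, 0]; });
  for (var i = 0; i < indices.length; i += 3) {
    var a = positions[indices[i]],
        b = positions[indices[i + 1]],
        c = positions[indices[i + 2]];
    // Unnormalized face normal cross(b - a, c - a); its length is
    // proportional to the face area, so larger faces weigh more.
    var u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
    var v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
    var n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]];
    [indices[i], indices[i + 1], indices[i + 2]].forEach(function (vi) {
      accum[vi][0] += n[0]; accum[vi][1] += n[1]; accum[vi][2] += n[2];
    });
  }
  return accum.map(function (n) {
    var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]) || 1;
    return [n[0] / len, n[1] / len, n[2] / len];
  });
}
```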

Original comment by postfil...@gmail.com on 14 Aug 2011 at 4:04

GoogleCodeExporter commented 9 years ago
I forgot to update this issue. Last week, I put up a demo in r35:

http://webgl-loader.googlecode.com/svn/trunk/samples/happy/happy.html

Some competitive analysis of an alternative: 
http://code.google.com/p/webgl-loader/wiki/HappyBuddha

Original comment by wonchun on 15 Sep 2011 at 1:38

GoogleCodeExporter commented 9 years ago
This issue was closed by revision r41.

Original comment by wonchun on 20 Sep 2011 at 3:18