FractalFir opened 1 year ago
64205140eb5edcb302b1848acdbb5d3367a5ab24 implements most of the `unify_index_data` function.
79373dd5db6ac677a1d09fca540ff9831e88ef2d implements saving `SharedIndexSegment`s. This reduces the file size to 238 KB.
This is only tangential to the issue, but are the indices currently bit-packed to the required number of bits?
If that's the case, a good technique for reducing the index size more (especially with large models) is to use a history buffer of previously encountered indices and referencing them when possible. The main caveat is that this relies on "good" ordering of indices to work well. Though, optimally models should use this anyway (see meshoptimizer's vertex cache/vertex fetch optimizations). Also encoding performance might suffer a little from this.
I haven't tested this extensively with a lot of models, but on the Stanford XYZ RGB dragon model (~7.2M triangles) about 78% of indices fall within the last 16 ones encountered. This results in about 60% size savings for this specific model if my math is correct.
I could take a look at this at some point.
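A minimal sketch of the history-buffer scheme described above (the `Encoded` enum, the FIFO eviction policy, and the 16-entry window are my assumptions for illustration, not code from this repository):

```rust
/// Result of encoding one index: either a small back-reference into the
/// history buffer, or the full index stored verbatim.
#[derive(Debug, PartialEq)]
enum Encoded {
    /// Position in the history buffer (0 = most recently seen index).
    /// With a 16-entry window this fits in 4 bits.
    BackRef(u8),
    /// Index not found in the history; stored at full width.
    Literal(u32),
}

/// Encode a stream of indices against a FIFO of the last 16 seen indices.
fn encode(indices: &[u32]) -> Vec<Encoded> {
    let mut history: Vec<u32> = Vec::with_capacity(16);
    let mut out = Vec::with_capacity(indices.len());
    for &idx in indices {
        // Scan from the most recently pushed entry backwards, so that
        // BackRef(0) always means "the index we just saw".
        if let Some(pos) = history.iter().rev().position(|&h| h == idx) {
            out.push(Encoded::BackRef(pos as u8));
        } else {
            out.push(Encoded::Literal(idx));
        }
        // Keep only the 16 most recent occurrences.
        history.push(idx);
        if history.len() > 16 {
            history.remove(0);
        }
    }
    out
}
```

The size win comes from how often `BackRef` (a few bits) replaces `Literal` (a full index); on a cache-optimized mesh the hit rate should be high, which matches the ~78% figure quoted above.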
af67f29966953364a0f79b6be9e48bfb336472f6 adds full support for `SharedIndexSegment`s. It has cut the file size roughly in half. I have also noticed that the normal triangle array is now broken (it is `None` after reading). Currently under investigation.
f36cc01befa3b24ea2f0165307832ba135282121 fixed the bug with normals and added additional tests to make catching this kind of error easier.
Index segments currently make up most of the mesh size, and different approaches to reducing their size have yielded insufficient results. The previous attempts were small, universal improvements that apply when saving all kinds of meshes. Some meshes can, however, be compressed further by increasing the size of one segment in order to shrink another. This is where a `SharedIndexSegment` comes in.

A `SharedIndexSegment` is a segment storing indices which are identical but would normally end up duplicated across completely different segments. A bitmask at the beginning of the segment signals which segments the indices belong to. On its own this does not help much, since a large range of indices being shared across segments is very rare. This is where the cost of increasing the size of some other segments comes in.

Let us imagine this hypothetical scenario. We have a set of vertices:
`[va, vb, vc, vd]` and UVs: `[ua, ub, uc, ud]`, combined into two triangles with vertex indices `[0, 1, 2, 3, 1, 2]` and UV indices `[1, 2, 3, 0, 1, 3]`.

There are 5 unique combinations of vertex and UV indices: `[(0,1), (1,2), (2,3), (3,0), (1,1)]`. If we change the vertex array to look like this: `[va, vb, vc, vd, vb]` and the UV array to look like this: `[ub, uc, ud, ua, ub]`, then each unique combination of vertex and UV data can be represented with a single index: `[0, 1, 2, 3, 4, 2]`. This has its downsides: the vertex and UV arrays grow, since some entries (here `vb` and `ub`) get duplicated. Building the remapped arrays is the job of the `unify_index_data` function. This has the potential to drastically reduce the size of some meshes.
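The remapping in the example above could be sketched like this (a hypothetical stand-in for the actual `unify_index_data`; the real function's name is from this issue, but its signature and return type here are my assumptions):

```rust
use std::collections::HashMap;

/// Collapse parallel vertex/UV index arrays into one unified index array.
/// Returns the list of unique (vertex, uv) index pairs, in order of first
/// occurrence, plus the unified index for each original triangle corner.
fn unify_index_data(vertex_idx: &[u32], uv_idx: &[u32]) -> (Vec<(u32, u32)>, Vec<u32>) {
    let mut seen: HashMap<(u32, u32), u32> = HashMap::new();
    let mut combos: Vec<(u32, u32)> = Vec::new();
    let mut unified: Vec<u32> = Vec::with_capacity(vertex_idx.len());
    for (&v, &u) in vertex_idx.iter().zip(uv_idx) {
        // A new (vertex, uv) pair gets the next free slot; a repeated pair
        // reuses the slot assigned at its first occurrence.
        let next = combos.len() as u32;
        let id = *seen.entry((v, u)).or_insert_with(|| {
            combos.push((v, u));
            next
        });
        unified.push(id);
    }
    (combos, unified)
}
```

Running this on the example's index arrays yields the 5 unique pairs and the single index array `[0, 1, 2, 3, 4, 2]` from the text; the attribute arrays would then be rebuilt by looking each pair up in the original vertex/UV lists.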
What is needed for this to work? The `unify_index_data` function.
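For illustration, the segment layout with the leading bitmask described earlier might look roughly like this (field names and widths are assumptions, not the crate's actual definition):

```rust
/// Hypothetical sketch of a shared index segment: one run of indices plus a
/// bitmask recording which per-attribute index segments it replaces.
struct SharedIndexSegment {
    /// Bit n set means attribute segment n (e.g. 0 = vertex, 1 = uv,
    /// 2 = normal) reads its indices from this shared segment.
    segment_mask: u16,
    /// The indices shared by all segments flagged in `segment_mask`.
    indices: Vec<u32>,
}

impl SharedIndexSegment {
    /// True if attribute segment `segment` should use these indices.
    fn applies_to(&self, segment: u8) -> bool {
        self.segment_mask & (1u16 << segment) != 0
    }
}
```

On load, each attribute segment flagged in the mask would be reconstructed from the same index run, which is where the roughly-halved file size reported above comes from.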