KhronosGroup / glTF-Blender-IO

Blender glTF 2.0 importer and exporter
https://docs.blender.org/manual/en/latest/addons/import_export/scene_gltf2.html
Apache License 2.0

Encode Morph Targets with Sparse Accessors #1346

Closed. Christopher-Hayes closed this issue 1 year ago.

Christopher-Hayes commented 3 years ago

Sparse Accessors and Blender

Currently, gltf-pipeline and fbx2gltf do not support sparse accessors at all (links are to the feature requests). It does look like glTF-Blender-IO can import models with sparse accessors. However, unless I'm mistaken, Blender is not capable of encoding sparse accessors into a model that did not previously have them?

No Available Tools Can Encode Sparse Accessors

I receive my models as FBXs and convert them to glTF. Morph targets double the size of the model. The morph targets in my models simply interpolate scalings/translations at fixed intervals along a linear curve. Sparse accessors would seem hugely advantageous here, but I have yet to find a glTF exporter or optimizer capable of producing them.

Feature Request

An option to encode morph targets using sparse accessors to reduce model size.

donmccurdy commented 3 years ago

Are you able to share an example (glTF or .blend) of a model you are hoping this feature would optimize?

scurest commented 3 years ago

Try this branch: https://github.com/scurest/glTF-Blender-IO/tree/sparse-morph

Christopher-Hayes commented 3 years ago

@donmccurdy I could email it to you if you want. But since originally filing this ticket, I've realized I might have misunderstood how sparse accessors work. For my use case I don't have any animations at all; it's 100% morph targets. So there wouldn't be any room for optimization between animation keyframes.

I also did a breakdown of how the morph targets impact the file size, and I'm now realizing that a 2x increase in size on a model that heavily uses blend shapes actually isn't that unexpected. Below is a random mesh selected from the model; this mesh morphs along the X axis and the Z axis:

Position: 1158 × 12 B (VEC3) = 14 KB
Normal: 1158 × 12 B (VEC3) = 14 KB
Texcoord_0: 1158 × 8 B (VEC2) = 9 KB
Indices: 2964 × 2 B (scalar) = 6 KB
Morph target (base positions): 1158 × 12 B (VEC3) = 14 KB
Morph target (positions morphed along the X axis): 1158 × 12 B (VEC3) = 14 KB
Morph target (positions morphed along the Z axis): 1158 × 12 B (VEC3) = 14 KB

Total Mesh Size: 71 KB

The meshes vary, but in general the breakdown above shows that morph targets accounted for 42 KB of 71 KB (about 60%) on this particular mesh.

@scurest Thank you for your work on this feature. I tried your branch, but I didn't notice any difference in file size. I searched the GLB for the "sparse" keyword to see whether sparse accessors were used anywhere, but didn't find any. The exported model doesn't have any animations, so I'm now thinking sparse accessors are probably more for animations and not so much for morph targets by themselves.

scurest commented 3 years ago

In order to use a sparse accessor, most of the accessor items must be zero. For morph positions, that means most of the vertices in the shapekey need to be unmoved from their Basis position.
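To make the tradeoff concrete, here is a rough back-of-the-envelope sketch (illustrative data and sizes, not exporter code): a dense VEC3 float accessor costs 12 bytes per vertex, while a sparse one costs an index plus a value per displaced vertex only.

```python
import numpy as np

# Hypothetical morph-target deltas for a 1000-vertex mesh: only two
# vertices move away from the Basis position, the rest stay at zero.
deltas = np.zeros((1000, 3), dtype=np.float32)
deltas[10] = [0.5, 0.0, 0.0]
deltas[42] = [0.0, 0.1, 0.2]

# A sparse accessor stores only the indices and values of nonzero rows.
nonzero = np.where(np.any(deltas != 0, axis=1))[0]

dense_size = deltas.nbytes                    # 1000 vertices * 12 B = 12000 B
sparse_size = nonzero.size * (2 + 12)         # uint16 index + VEC3 float value
print(nonzero.size, dense_size, sparse_size)  # 2 12000 28
```

When most vertices move, `nonzero` approaches the full vertex count and the per-vertex index overhead makes sparse strictly worse, which is why it only pays off for localized shape keys.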

donmccurdy commented 3 years ago

At one point I had implemented sparse accessors for shape key animation samplers in THREE.GLTFExporter (for https://threejs.org/). For various reasons, with the models I had available to test it did not seem to make much of a difference, so I didn't end up merging that change.

As @scurest mentions, a model where only a few vertices move during morphing would get some benefit from sparse accessors in the morph vertex data. If that's not the case in your model, you could also try https://github.com/zeux/meshoptimizer. It has two relevant features (MeshOpt compression and accessor quantization) either of which could probably reduce the size of your morph targets further. Such optimizations would require the application loading the model to support the glTF EXT_meshopt_compression and KHR_mesh_quantization extensions, respectively.
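For a rough sense of what quantization alone can save (a sketch with made-up data, not the meshoptimizer implementation): KHR_mesh_quantization allows positions to be stored as normalized 16-bit integers instead of 32-bit floats, halving the attribute size.

```python
import numpy as np

# 1158 vertices of float32 positions, matching the mesh breakdown
# earlier in the thread (values here are random placeholders).
rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, (1158, 3)).astype(np.float32)

# Quantize into int16 over the mesh's extent (illustrative scheme,
# not the exact one meshoptimizer uses).
scale = float(np.abs(positions).max())
quantized = np.round(positions / scale * 32767.0).astype(np.int16)

print(positions.nbytes, quantized.nbytes)  # 13896 vs 6948 bytes

# Round-trip error is bounded by half a quantization step.
max_error = float(np.abs(quantized.astype(np.float32) / 32767.0 * scale - positions).max())
```

For normalized-scale meshes the reconstruction error stays around one part in 32767 of the extent, which is far below what morph-target displacements typically need.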

fire commented 3 years ago

What is the status of this for Blender 3.0?

Is https://github.com/scurest/glTF-Blender-IO/tree/sparse-morph ready to merge?

julienduroure commented 3 years ago

Hello, no plans for this at the moment. (And it seems that @scurest deleted this branch.)

I will try to focus on bugs; any feature that can be handled by external tools will come later.

donmccurdy commented 3 years ago

I'm tracking addition of this feature for glTF-Transform in https://github.com/donmccurdy/glTF-Transform/issues/351. There is another Blender addon that runs glTF-Transform post-processing optimizations on Blender glTF exports without additional manual work, so once the feature lands there it may be a good solution.

I should also note that morph targets and Draco compression are not really compatible. If you're using Draco compression to optimize other parts of your asset, this fix will probably not help you unfortunately. Meshopt compression might be a better option for such models; you can already test that by using gltfpack or gltf-transform on exported GLB models. You'll typically want to use both meshopt and gzip for best results.
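The gzip step matters because buffer data, even after packing, remains highly redundant. A quick sanity check with Python's standard library (synthetic bytes standing in for a vertex buffer, not a real GLB):

```python
import gzip

# Synthetic stand-in for attribute data: a 256-byte pattern repeated
# 64 times, i.e. 16 KiB of structured, repetitive binary content.
payload = bytes(range(256)) * 64
compressed = gzip.compress(payload, compresslevel=9)

# gzip collapses the repetition to a small fraction of the input size.
print(len(payload), len(compressed))
```

Real vertex streams are less repetitive than this toy payload, but quantized and meshopt-encoded buffers are deliberately arranged to compress well under deflate, which is why the meshopt-plus-gzip combination works.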

Christopher-Hayes commented 3 years ago

Thanks for the update @donmccurdy! I'll leave this issue open in case it helps others, since you're planning to implement the feature.

On my side, I actually ended up doing exactly as you recommend. MeshOpt worked perfectly for a morphing object that needed Draco-like compression. As part of a PlayCanvas WebGL project, almost all assets were gzipped, so that solution worked out perfectly for me. I saw 2.5x decreases in 3D asset sizes (before PlayCanvas even supported KHR_mesh_quantization). I quickly realized I might be too focused on optimizing the model when the real bottleneck is the textures.

I put more info on this process in a PlayCanvas post for anyone curious: https://forum.playcanvas.com/t/tricks-to-decrease-morph-target-sizes/18628/9?u=chris I also created a sample PlayCanvas project using MeshOpt: https://playcanvas.com/project/779762/overview/load-glb-model-with-meshopt

lyuma commented 1 year ago

I ran into this issue today. glTF files exported from Blender are extremely large when they contain blend shapes.

Blender glTF export: 16.630 MB bin + 155 KB gltf

This model is very low poly and has no reason to be large other than blend shapes: 8595 tris, and one material, plus a few primitive debug meshes.

By contrast, FBX and FBX2glTF conversions are able to sparsely encode blend shapes:

FBX: 656 KB fbx
FBX2glTF conversion (sparse accessors): 855 KB bin + 178 KB gltf

blendshape_size_issue.zip

Here is the blend file: mesh_parent_test_2a7_rotated.blend.zip

(Model is permissively licensed and available from https://booth.pm/ja/items/2019040 "2A-7-4 / XXXX Coolk")

Given this, I think sparse accessors are important to implement, or failing that, automatic compression of glTF exports (such as zip). I was unable to find the branch by scurest, but I suspect it would not be too difficult to implement. If it's not desired by default, perhaps it could be an export checkbox to reduce blend shape size.

donmccurdy commented 1 year ago

As additional context: I tried comparing the Blender→glTF and the Blender→FBX→glTF versions; they have roughly the same vertex counts and are otherwise similar. The difference does seem to be entirely in the use of sparse accessors. If I run the Blender→glTF version through...

`gltf-transform sparse in.glb out.glb`

... then the Blender export is reduced to the same size as the FBX2glTF export. The combination of sparse accessors and Meshopt compression seems to be ideal here, reducing the size further to about 470 KB.

I think it would be OK for sparse accessors to be enabled by default for blend shapes.
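For anyone implementing this, the target structure is the accessor's `sparse` property from the glTF 2.0 spec. A minimal sketch of the JSON shape (the componentType constants are from the spec; the counts and bufferView indices here are illustrative only):

```python
# Minimal sketch of a sparse accessor in glTF JSON, per the glTF 2.0 spec.
# bufferView indices and counts are illustrative placeholders.
FLOAT, UNSIGNED_SHORT = 5126, 5123  # glTF componentType constants

sparse_accessor = {
    "componentType": FLOAT,
    "count": 1158,               # logical element count (all vertices)
    "type": "VEC3",
    "sparse": {
        "count": 2,              # number of displaced vertices
        "indices": {"bufferView": 5, "componentType": UNSIGNED_SHORT},
        "values": {"bufferView": 6},
    },
}
```

Elements not listed in `sparse.indices` are implicitly zero when the accessor has no `bufferView`, which is exactly what makes mostly-unmoved morph targets cheap to store.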

scurest commented 1 year ago

The patch just looks like this (not tested extensively):

Patch:

```diff
diff --git a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
index 71fe2970..d5c847f4 100644
--- a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
+++ b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
@@ -45,7 +45,23 @@ def gather_primitive_attributes(blender_primitive, export_settings):
     return attributes


-def array_to_accessor(array, component_type, data_type, include_max_and_min=False):
+def array_to_accessor(
+    array,
+    component_type,
+    data_type,
+    include_max_and_min=False,
+    try_sparse=False,
+):
+    buffer_view = None
+    sparse = None
+
+    if try_sparse:
+        sparse = __try_sparse_accessor(array)
+    if not sparse:
+        buffer_view = gltf2_io_binary_data.BinaryData(
+            array.tobytes(),
+            gltf2_io_constants.BufferViewTarget.ARRAY_BUFFER,
+        )
     amax = None
     amin = None

@@ -54,7 +70,7 @@ def array_to_accessor(array, component_type, data_type, include_max_and_min=Fals
         amin = np.amin(array, axis=0).tolist()

     return gltf2_io.Accessor(
-        buffer_view=gltf2_io_binary_data.BinaryData(array.tobytes(), gltf2_io_constants.BufferViewTarget.ARRAY_BUFFER),
+        buffer_view=buffer_view,
         byte_offset=None,
         component_type=component_type,
         count=len(array),
@@ -64,10 +80,79 @@ def array_to_accessor(array, component_type, data_type, include_max_and_min=Fals
         min=amin,
         name=None,
         normalized=None,
-        sparse=None,
+        sparse=sparse,
         type=data_type,
     )

+
+def __try_sparse_accessor(array):
+    """
+    Returns an AccessorSparse for array, or None if
+    writing a dense accessor would be better.
+    """
+    # Find indices of non-zero elements
+    nonzero_indices = np.where(np.any(array, axis=1))[0]
+
+    # For all-zero arrays, omitting sparse entirely is legal but poorly
+    # supported, so force nonzero_indices to be nonempty.
+    if len(nonzero_indices) == 0:
+        nonzero_indices = np.array([0])
+
+    # How big of indices do we need?
+    if nonzero_indices[-1] <= 255:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedByte
+    elif nonzero_indices[-1] <= 65535:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedShort
+    else:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedInt
+
+    # Cast indices to appropriate type (if needed)
+    nonzero_indices = nonzero_indices.astype(
+        gltf2_io_constants.ComponentType.to_numpy_dtype(indices_type),
+        copy=False,
+    )
+
+    # Calculate size if we don't use sparse
+    one_elem_size = len(array[:1].tobytes())
+    dense_size = len(array) * one_elem_size
+
+    # Calculate approximate size if we do use sparse
+    indices_size = (
+        len(nonzero_indices[:1].tobytes()) *
+        len(nonzero_indices)
+    )
+    values_size = len(nonzero_indices) * one_elem_size
+    json_increase = 170  # sparse makes the JSON about this much bigger
+    penalty = 64  # further penalty avoids sparse in marginal cases
+    sparse_size = indices_size + values_size + json_increase + penalty
+
+    if sparse_size >= dense_size:
+        return None
+
+    return gltf2_io.AccessorSparse(
+        count=len(nonzero_indices),
+        extensions=None,
+        extras=None,
+        indices=gltf2_io.AccessorSparseIndices(
+            buffer_view=gltf2_io_binary_data.BinaryData(
+                nonzero_indices.tobytes()
+            ),
+            byte_offset=None,
+            component_type=indices_type,
+            extensions=None,
+            extras=None,
+        ),
+        values=gltf2_io.AccessorSparseValues(
+            buffer_view=gltf2_io_binary_data.BinaryData(
+                array[nonzero_indices].tobytes()
+            ),
+            byte_offset=None,
+            extensions=None,
+            extras=None,
+        ),
+    )
+
+
 def __gather_skins(blender_primitive, export_settings):
     attributes = {}

diff --git a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
index f2f5ae61..04e823dd 100644
--- a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
+++ b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
@@ -216,6 +216,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
             component_type=gltf2_io_constants.ComponentType.Float,
             data_type=gltf2_io_constants.DataType.Vec3,
             include_max_and_min=True,
+            try_sparse=True,
         )

     if export_settings['gltf_normals'] \
@@ -227,6 +228,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
             internal_target_normal,
             component_type=gltf2_io_constants.ComponentType.Float,
             data_type=gltf2_io_constants.DataType.Vec3,
+            try_sparse=True,
         )

     if export_settings['gltf_tangents'] \
@@ -237,6 +239,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
             internal_target_tangent,
             component_type=gltf2_io_constants.ComponentType.Float,
             data_type=gltf2_io_constants.DataType.Vec3,
+            try_sparse=True,
         )
         targets.append(target)
         morph_index += 1
```