Are you able to share an example (glTF or .blend) of a model you are hoping this feature would optimize?
Try this branch: https://github.com/scurest/glTF-Blender-IO/tree/sparse-morph
@donmccurdy I could email it to you if you want. But, since originally making this ticket I'm realizing I might've misunderstood how Sparse Accessors worked. For my use-case, I don't have any animations at all. It's 100% morph targets. So, there wouldn't be any room for optimization between animation key frames.
I also did a breakdown of how the morph targets impacted the file size, and I'm now realizing that a 2x increase in model size on a model that heavily uses blend shapes actually isn't that unexpected. Below is a random mesh selected from the model, this mesh morphs along the X axis and the Z axis:
Position: 1158 × 12 B (VEC3) = 14 KB
Normal: 1158 × 12 B (VEC3) = 14 KB
Texcoord_0: 1158 × 8 B (VEC2) = 9 KB
Indices: 2964 × 2 B (SCALAR) = 6 KB
Morph target (base positions): 1158 × 12 B (VEC3) = 14 KB
Morph target (positions morphed along the X axis): 1158 × 12 B (VEC3) = 14 KB
Morph target (positions morphed along the Z axis): 1158 × 12 B (VEC3) = 14 KB
Total mesh size: ~85 KB
The meshes vary, but in general the above shows that the morph targets accounted for 42 KB of roughly 85 KB (about half) on this particular mesh.
@scurest Thank you for your work on this feature, I did try your branch and I didn't notice any differences in the file size. I searched in the GLB for the "sparse" keyword to see if sparse accessors were used anywhere, but didn't find any. The exported model doesn't have any animations, so I'm now realizing sparse accessors are probably more for animations and not so much morph targets by themselves.
In order to use a sparse accessor, most of the accessor items must be zero. For morph positions, that means most of the vertices in the shapekey need to be unmoved from their Basis position.
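To make that break-even concrete, here is a rough back-of-the-envelope check (a sketch only, not code from glTF-Blender-IO; the helper name and the 4-byte index assumption are illustrative). A sparse accessor stores one index plus one VEC3 per displaced vertex, so it only saves space when relatively few vertices move:

```python
# Rough dense-vs-sparse size estimate for one morph target's POSITION deltas.
# Illustrative only; not code from the exporter.

def estimate_sizes(deltas, index_size=4, component_size=4):
    """deltas: one (dx, dy, dz) tuple per vertex. Returns (dense_bytes, sparse_bytes)."""
    vec3_size = 3 * component_size                      # 12 B per float32 VEC3
    dense = len(deltas) * vec3_size                     # every vertex stored
    moved = sum(1 for d in deltas if any(c != 0.0 for c in d))
    sparse = moved * (index_size + vec3_size)           # indices array + values array
    return dense, sparse

# e.g. 1158 vertices where only 100 are actually displaced by the shape key:
deltas = [(0.0, 0.0, 0.0)] * 1058 + [(0.1, 0.0, 0.2)] * 100
print(estimate_sizes(deltas))   # (13896, 1600)
```

If most vertices move, the sparse form is actually larger (1158 × 16 B ≈ 18.5 KB versus 14 KB dense), which would be consistent with the sparse branch producing no size change for a mesh whose shape keys displace nearly every vertex.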
At one point I had implemented sparse accessors for shape key animation samplers in THREE.GLTFExporter (for https://threejs.org/). For various reasons, and with the models I had available to test, it did not seem to make much of a difference, so I didn't end up merging that change.
As @scurest mentions, a model where only a few vertices move during morphing would get some benefit from sparse accessors in the morph vertex data. If that's not the case in your model, you could also try https://github.com/zeux/meshoptimizer. It has two relevant features (MeshOpt compression and accessor quantization), either of which could probably reduce the size of your morph targets further. Such optimizations would require the application loading the model to support the glTF EXT_meshopt_compression and KHR_mesh_quantization extensions, respectively.
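For a sense of scale (a general property of the extension, not a measurement from this model): KHR_mesh_quantization allows positions and morph deltas to be stored as 16-bit normalized integers, and normals as 8- or 16-bit, instead of 32-bit floats, so quantization alone can roughly halve the vertex and morph-target data even before any meshopt or gzip entropy coding.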
What is the status of this for Blender 3.0?
Is https://github.com/scurest/glTF-Blender-IO/tree/sparse-morph ready to merge?
Hello, no plans for this for now. (And it seems that @scurest deleted this branch.)
I will try to focus on bugs; any feature that can be done by external tools will come later.
I'm tracking addition of this feature for glTF-Transform in https://github.com/donmccurdy/glTF-Transform/issues/351. There is another Blender addon that runs glTF-Transform post-processing optimizations on Blender glTF exports without additional manual work, so once the feature lands there it may be a good solution.
I should also note that morph targets and Draco compression are not really compatible. If you're using Draco compression to optimize other parts of your asset, this fix will unfortunately probably not help you. Meshopt compression might be a better option for such models; you can already test that by using gltfpack or gltf-transform on exported GLB models. You'll typically want to use both meshopt and gzip for best results.
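For example (gltfpack's flags at the time of writing, so they may differ by version), `gltfpack -i exported.glb -o packed.glb -cc` quantizes the mesh and applies EXT_meshopt_compression with the higher-compression setting; the output is intended to be served with gzip or Brotli so transport compression does the final entropy coding.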
Thanks for the update @donmccurdy! I'll leave this issue open in case it helps others and since you're planning to implement the feature.
On my side, I actually ended up doing exactly as you recommend. MeshOpt worked perfectly for a morphing object that needed Draco-like compression. As part of a PlayCanvas WebGL project, almost all assets were already gzipped, so that solution worked out perfectly for me. I saw 2.5x decreases in 3D asset sizes (before PlayCanvas even supported KHR_mesh_quantization). I quickly realized I might be too focused on optimizing the model, when the real bottleneck is the textures.
I put more info on this process in a PlayCanvas forum post for anyone curious: https://forum.playcanvas.com/t/tricks-to-decrease-morph-target-sizes/18628/9?u=chris
I also created a sample PlayCanvas project using MeshOpt: https://playcanvas.com/project/779762/overview/load-glb-model-with-meshopt
I ran into this issue today. glTF files exported from Blender are extremely large when they contain blend shapes.
Blender glTF export: 16.630 MB .bin + 155 KB .gltf
This model is very low poly and has no reason to be large other than the blend shapes: 8595 tris and one material, plus a few primitive debug meshes.
By contrast, FBX and FBX2glTF conversions are able to sparsely encode blend shapes:
FBX: 656 KB .fbx
FBX2glTF conversion (sparse accessors): 855 KB .bin + 178 KB .gltf
Here is the blend file: mesh_parent_test_2a7_rotated.blend.zip
(Model is permissively licensed and available from https://booth.pm/ja/items/2019040 "2A-7-4 / XXXX Coolk")
Given this, I think sparse accessors are important to implement, or failing that, automatic compression of glTF exports (such as zip). I was unable to find the branch by scurest, but I suspect it would not be too difficult to implement. If it's not desired by default, perhaps it could be exposed as an export checkbox to reduce blend shape size.
As additional context – I tried comparing the Blender→glTF and the Blender→FBX→glTF versions; they have roughly the same vertex counts and are otherwise similar. The difference does seem to be entirely in the use of sparse accessors. If I run the Blender→glTF version through...
gltf-transform sparse in.glb out.glb
... then the Blender export is reduced to the same size as the FBX2glTF export. The combination of sparse accessors and Meshopt compression seems to be ideal here, reducing the size further to about 470 KB.
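If it helps to reproduce that from the command line, recent gltf-transform releases expose both steps directly, e.g. `gltf-transform sparse model.glb tmp.glb` followed by `gltf-transform meshopt tmp.glb model.packed.glb` (command names assume a current CLI and may vary by version).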
I think it would be OK for sparse accessors to be enabled by default for blend shapes.
The patch just looks like this (not tested extensively)
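As a purely illustrative sketch of the general idea (a hypothetical helper using NumPy, not the patch itself): encoding a morph target sparsely amounts to collecting the indices of the vertices whose delta is non-zero and writing only those indices and values, which the accessor then declares in its `sparse` property.

```python
import numpy as np

# Hypothetical sketch of sparse morph-target encoding; not the actual patch.
# `deltas` is an (N, 3) float32 array of per-vertex position offsets for one shape key.

def to_sparse(deltas: np.ndarray):
    """Return (indices, values) covering only the vertices that actually move."""
    moved = np.any(deltas != 0.0, axis=1)              # mask of displaced vertices
    indices = np.nonzero(moved)[0].astype(np.uint32)   # sparse.indices data
    values = deltas[moved].astype(np.float32)          # sparse.values data
    return indices, values

# In the exported glTF these arrays would back an accessor shaped roughly like:
#   { "count": N, "type": "VEC3", "componentType": 5126,
#     "sparse": { "count": len(indices),
#                 "indices": { "bufferView": ..., "componentType": 5125 },
#                 "values":  { "bufferView": ... } } }

deltas = np.zeros((1158, 3), dtype=np.float32)
deltas[500:600] = [0.1, 0.0, 0.2]                      # pretend 100 vertices move
indices, values = to_sparse(deltas)
print(indices.size, indices.nbytes + values.nbytes)    # 100 displaced vertices, 1600 bytes
```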
Sparse Accessors and Blender
Currently, gltf-pipeline and fbx2gltf do not support sparse accessors at all (links are to the feature requests). It does look like glTF-Blender-IO can import models with sparse accessors. However, unless I'm mistaken, Blender is not capable of encoding a model with sparse accessors if it did not previously have them?

No Available Tools Can Encode Sparse Accessors
I receive my models as FBXs and convert them to glTF. Morph targets double the size of the model. The morph targets in my models simply interpolate scaling/translations at fixed intervals along a linear curve. It would seem that sparse accessors would be hugely advantageous, but I have yet to find a glTF exporter/optimizer capable of this.

Feature Request
Option to encode morph targets using sparse accessors to reduce model size.