Open hybridherbst opened 2 years ago
Related issue —
The Draco library deals only with vertex streams, and doesn't know anything about the blend shapes. glTF-Transform requests a sequential encoding, rather than higher-compression edgebreaker, when encoding primitives that have morph targets, to ensure that the vertex count and order does not change (which would break the morph targets). Unfortunately, there seems to be an unexpected case where the vertex count changes despite that. I would prefer if choosing a sequential encoding guaranteed the Draco library would not change the vertex count or order.
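For context, here is a minimal sketch of how an encoder wrapper might select the sequential method for primitives with morph targets. The enum and method names come from the Draco Emscripten bindings; the helper function itself is hypothetical, not glTF-Transform's actual code:

```javascript
// Hypothetical helper: pick the Draco encoding method for a primitive.
// Edgebreaker compresses better but may reorder vertices; sequential is
// requested for morph-target meshes so the vertex stream stays intact.
function chooseEncodingMethod(encoderModule, encoder, hasMorphTargets) {
  const method = hasMorphTargets
    ? encoderModule.MESH_SEQUENTIAL_ENCODING
    : encoderModule.MESH_EDGEBREAKER_ENCODING;
  encoder.SetEncodingMethod(method);
  return method;
}
```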
I'm not familiar with glTF-Transform, but sequential encoding should not change the number of points in the encoded `draco::Mesh`. The number of points should correspond to the number of vertices in the glTF. If there is a discrepancy, it may be caused by the conversion from / to glTF.
I'm afraid that sequential encoding sometimes really is changing the number of vertices. There's no conversion: I'm passing the vertices in and counting the number of vertices out. It seems to be a rare issue, affecting only certain inputs. I'll see if I can make a minimal reproducible example in the next few weeks.
Don, can you point me to the code where it is used? I double-checked the compression code: the decoder decodes the encoded number of points from the bitstream and sets it directly on the output mesh, and the encoded number of points also comes straight from the input geometry. Basically, what I'm trying to say is that I don't think the points/vertices get lost during the encoding process itself. It most likely happens somewhere in the surrounding code, which could still be in Draco, but I'm just not sure where (that's why I would like to see how it is being used).
I think the issue may be here: https://github.com/google/draco/blob/e4e34b0c63d631ec5eefa3c41bfd5dab868b82de/src/draco/javascript/emscripten/encoder_webidl_wrapper.cc#L253-L256
The encoder wrapper deduplicates input attribute values which can reduce the number of vertices. We can probably add an option to turn off this deduplication which should hopefully resolve this issue.
@donmccurdy: There is actually an existing API that can be used to turn off the deduplication. Instead of using `encoderModule.Encoder`, you can use `encoderModule.ExpertEncoder`. The expert encoder has a method `EncodeToDracoBuffer()` with a `deduplicate_values` argument that can be set to false, which should hopefully solve the problem.

Note that you would have to change the way you set quantization: `ExpertEncoder` uses attribute ids to specify quantization parameters rather than attribute types.
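Putting that suggestion together, here is a hedged sketch of the `ExpertEncoder` path. Method names follow the Draco JS (Emscripten) bindings; `mesh`, `positionAttId`, and the quantization value of 14 bits are placeholder assumptions:

```javascript
// Sketch: encode with ExpertEncoder, skipping value deduplication so the
// vertex count and order are preserved for morph targets.
function encodeWithoutDeduplication(encoderModule, mesh, positionAttId) {
  const encoder = new encoderModule.ExpertEncoder(mesh);
  // ExpertEncoder is keyed by attribute *id*, not attribute type.
  encoder.SetAttributeQuantization(positionAttId, 14);
  // Sequential encoding, as glTF-Transform already requests for morph targets.
  encoder.SetEncodingMethod(encoderModule.MESH_SEQUENTIAL_ENCODING);
  const outBuffer = new encoderModule.DracoInt8Array();
  // First argument is deduplicate_values: false turns deduplication off.
  const byteLength = encoder.EncodeToDracoBuffer(false, outBuffer);
  return { outBuffer, byteLength };
}
```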
> The encoder wrapper deduplicates input attribute values which can reduce the number of vertices. We can probably add an option to turn off this deduplication which should hopefully resolve this issue.
Thanks — Yeah, I'd love to be able to turn that off, or to have it off by default with the sequential encoding. For me (and I suspect most glTF-Transform users) the main benefit of the sequential encoding is being able to get the same vertices out, in the same order. When the vertex stream changes it can become incompatible with other resources in the scene, including (but not limited to) morph targets.
Are there any examples of the expert encoder API? If it's just a matter of switching to attribute IDs then I think that's workable too.
I don't think we have a JavaScript example with `ExpertEncoder`. The main difference is that you provide an attribute id instead of an attribute type when setting per-attribute options (you can see the declaration here and compare it to the declaration of the regular `Encoder` above). To be more specific, the differences are:

- Quantization is set per attribute id rather than per attribute type.
- Deduplication is controlled by the `deduplicate_values` argument of the `EncodeToDracoBuffer()` method.

There shouldn't really be any more gotchas. We may add the option to skip deduplication to the basic `Encoder` as well, but I don't have any ETA for that.
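To illustrate the attribute-id vs. attribute-type difference, a small side-by-side sketch (`POSITION` is the Draco attribute-type enum; `posAttId` is a hypothetical attribute id obtained when the attribute was added to the mesh):

```javascript
// Sketch of the two quantization APIs side by side.
function setPositionQuantization(encoderModule, encoder, expert, posAttId) {
  // Regular Encoder: keyed by attribute type.
  encoder.SetAttributeQuantization(encoderModule.POSITION, 14);
  // ExpertEncoder: keyed by attribute id.
  expert.SetAttributeQuantization(posAttId, 14);
}
```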
@ondys when using the ExpertEncoder API with the "deduplicate_values" option disabled, it still appears necessary to switch to the sequential encoder to prevent the vertex count from changing. Is that expected, or should this option have the same effect on edgebreaker encoding?
The combination of sequential encoding + deduplicate_values=false appears to work as intended, thanks!
Possibly related issue:
I believe this issue can be closed.
It seems that in some cases, Draco encoding does not account for a mesh's blendshapes, which may move vertices away from each other; the vertices are merged nonetheless, breaking the resulting file.
The attached file, which uses animated blendshapes: SimpleBlendshapeAnimation.zip

It breaks when run through `gltf-transform draco` or `gltf-pack`, both of which use this Draco project for the job. `gltf-transform draco`, for example, reports the following error:

error: Compression reduced vertex count unexpectedly, corrupting morph targets. Applying the "weld" function before compression may resolve the issue.
My understanding is that in this model, compression should never reduce the vertex count, since the blendshape animation moves the vertices apart.
More information including more logs and testing with gltf-pack and gltf-transform trying to get to the root of this issue here:
There's also another sample model in the above issue that exhibits a similar (the same?) problem.
I'm happy to help further to get to the bottom of this issue. We'd love to be able to pipe "everything" through Draco, but currently we need to manually verify that no errors have occurred, or (in some cases) that no wrong merges have happened silently and broken the files.