google / draco

Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
https://google.github.io/draco/
Apache License 2.0

Encoding merges vertices and breaks blendshapes in the process #929

Open hybridherbst opened 1 year ago

hybridherbst commented 1 year ago

It seems that in some cases, Draco encoding does not respect blendshapes that a mesh may have and that may move vertices away from each other: the vertices are merged anyway, which breaks the resulting file.

The attached file, which uses animated blendshapes (SimpleBlendshapeAnimation.zip), breaks when run through gltf-transform draco or gltfpack, both of which use this Draco project for compression.

gltf-transform draco, for example, reports the following error: `Compression reduced vertex count unexpectedly, corrupting morph targets. Applying the "weld" function before compression may resolve the issue.`

My understanding is that in this model, compression should never reduce the vertex count, since the blendshape animation moves the vertices apart.

More information, including additional logs and testing with gltfpack and gltf-transform to get to the root of this issue, is available here:

There's also another sample model in the above issue that exhibits a similar (the same?) problem.

I'm happy to help further to get to the bottom of this issue. We'd love to be able to pipe "everything" through Draco, but currently we have to manually verify that no errors occurred and that no silent, incorrect merges have broken the files.

donmccurdy commented 1 year ago

Related issue —

The Draco library deals only with vertex streams and doesn't know anything about the blend shapes. glTF-Transform requests a sequential encoding, rather than the higher-compression edgebreaker encoding, when encoding primitives that have morph targets, to ensure that the vertex count and order do not change (which would break the morph targets). Unfortunately, there seems to be an unexpected case where the vertex count changes despite that. I would prefer it if choosing a sequential encoding guaranteed that the Draco library would not change the vertex count or order.
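For context, here is a minimal sketch of how the sequential method can be requested with the basic encoder, assuming Draco's emscripten encoder API (the module returned by `createEncoderModule()`); this is not the actual glTF-Transform encoder.ts code:

```ts
// Sketch only (not glTF-Transform's code). `encoderModule` is the object returned by
// Draco's createEncoderModule(); `mesh` is an encoderModule.Mesh built with MeshBuilder.
function encodeSequential(encoderModule: any, mesh: any): Int8Array {
  const encoder = new encoderModule.Encoder();

  // Sequential encoding is requested instead of edgebreaker so that the vertex
  // count and order are expected to survive the round trip.
  encoder.SetEncodingMethod(encoderModule.MESH_SEQUENTIAL_ENCODING);
  encoder.SetAttributeQuantization(encoderModule.POSITION, 14);

  const encoded = new encoderModule.DracoInt8Array();
  const byteLength = encoder.EncodeMeshToDracoBuffer(mesh, encoded);

  // Copy the encoded bytes out of the emscripten heap, then free the wrappers.
  const bytes = new Int8Array(byteLength);
  for (let i = 0; i < byteLength; i++) bytes[i] = encoded.GetValue(i);
  encoderModule.destroy(encoded);
  encoderModule.destroy(encoder);
  return bytes;
}
```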

ondys commented 1 year ago

I'm not familiar with glTF-Transform, but sequential encoding should not change the number of points in the encoded draco::Mesh. The number of points should correspond to the number of vertices in the glTF. If there is a discrepancy, it may be caused by the conversion from / to glTF.

donmccurdy commented 1 year ago

I'm afraid that sequential encoding is (sometimes) really changing the number of vertices. There's no conversion, I'm passing the vertices in and counting the number of vertices out. It seems to be a rare issue, affecting only certain inputs. I'll see if I can make a minimal reproducible example in the next few weeks.

ondys commented 1 year ago

Don, can you point me to the code where it is used? I double-checked the compression code: the decoder reads the encoded number of points from the bitstream and sets it directly on the output mesh, and the encoded number of points comes straight from the input geometry. Basically what I'm trying to say is that I don't think the points/vertices get lost during the encoding process itself. It most likely happens somewhere in the surrounding code, which can still be in Draco, but I'm just not sure where (that's why I would like to see how it is being used).

donmccurdy commented 1 year ago

Sure! This would be the section:

https://github.com/donmccurdy/glTF-Transform/blob/6ed4eea43bd0459272ead716d1290be8bfeaa2cc/packages/extensions/src/khr-draco-mesh-compression/encoder.ts#L130-L140

ondys commented 1 year ago

I think the issue may be here: https://github.com/google/draco/blob/e4e34b0c63d631ec5eefa3c41bfd5dab868b82de/src/draco/javascript/emscripten/encoder_webidl_wrapper.cc#L253-L256

The encoder wrapper deduplicates input attribute values, which can reduce the number of vertices. We can probably add an option to turn off this deduplication, which should hopefully resolve this issue.

ondys commented 1 year ago

@donmccurdy: There is actually an existing API that can be used to turn off the deduplication. Instead of using encoderModule.Encoder you can use encoderModule.ExpertEncoder. The expert encoder has a method EncodeToDracoBuffer() with a deduplicate_values argument that can be set to false, which should hopefully solve the problem. Note that you would have to change the way you set quantization: ExpertEncoder uses attribute ids to specify quantization parameters rather than attribute types.
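A minimal sketch of that workaround, again assuming the emscripten encoder API (the attribute id here is whatever the MeshBuilder returned when the POSITION attribute was added):

```ts
// Sketch only. `encoderModule` is the emscripten encoder module, `mesh` an
// encoderModule.Mesh, and `positionAttId` the id returned by MeshBuilder when
// the POSITION attribute was added.
function encodeWithoutDeduplication(encoderModule: any, mesh: any, positionAttId: number): Int8Array {
  // Unlike the basic Encoder, ExpertEncoder is constructed with the geometry.
  const encoder = new encoderModule.ExpertEncoder(mesh);

  encoder.SetEncodingMethod(encoderModule.MESH_SEQUENTIAL_ENCODING);
  // Quantization is set per attribute id, not per attribute type.
  encoder.SetAttributeQuantization(positionAttId, 14);

  const encoded = new encoderModule.DracoInt8Array();
  // deduplicate_values = false: attribute values are not merged, so the vertex
  // count is expected to stay unchanged.
  const byteLength = encoder.EncodeToDracoBuffer(false, encoded);

  const bytes = new Int8Array(byteLength);
  for (let i = 0; i < byteLength; i++) bytes[i] = encoded.GetValue(i);
  encoderModule.destroy(encoded);
  encoderModule.destroy(encoder);
  return bytes;
}
```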

donmccurdy commented 1 year ago

> The encoder wrapper deduplicates input attribute values, which can reduce the number of vertices. We can probably add an option to turn off this deduplication, which should hopefully resolve this issue.

Thanks — Yeah, I'd love to be able to turn that off, or to have it off by default with the sequential encoding. For me (and I suspect most glTF-Transform users) the main benefit of the sequential encoding is being able to get the same vertices out, in the same order. When the vertex stream changes it can become incompatible with other resources in the scene, including (but not limited to) morph targets.

Are there any examples of the expert encoder API? If it's just a matter of switching to attribute IDs then I think that's workable too.

ondys commented 1 year ago

I don't think we have a JavaScript example with ExpertEncoder. The main difference is that you provide an attribute_id instead of an attribute_type when setting per-attribute options (you can see the declaration here and compare it to the declaration of the regular Encoder above). To be more specific, the differences are:

  1. ExpertEncoder is constructed for a given mesh / point cloud. When you create a new instance you must pass the geometry into the constructor (as opposed to the regular Encoder, where the geometry is passed to the EncodeMeshToDracoBuffer() / EncodePointCloudToDracoBuffer() methods).
  2. As mentioned above, all attribute options are referenced using attribute ids.
  3. Point 2 has one implication: if you, let's say, have two TEX_COORD attributes, you need to set the quantization twice (because each attribute has a different id), as shown in the sketch after this list. With the basic Encoder you would just set the quantization once for all attributes with the TEX_COORD type.
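To illustrate point 3, a short sketch assuming the same emscripten module as above; the attribute ids are hypothetical placeholders for whatever MeshBuilder returned:

```ts
// Sketch of point 3 only. `encoderModule` is the emscripten encoder module and
// `mesh` an encoderModule.Mesh with two TEX_COORD attributes; the ids below are
// hypothetical placeholders for the ids returned when those attributes were added.
function compareQuantizationCalls(encoderModule: any, mesh: any): void {
  const TEX_COORD_0_ID = 1; // hypothetical
  const TEX_COORD_1_ID = 2; // hypothetical

  // ExpertEncoder: quantization must be set once per attribute id.
  const expert = new encoderModule.ExpertEncoder(mesh);
  expert.SetAttributeQuantization(TEX_COORD_0_ID, 12);
  expert.SetAttributeQuantization(TEX_COORD_1_ID, 12);

  // Basic Encoder: one call applies to every attribute of the TEX_COORD type.
  const basic = new encoderModule.Encoder();
  basic.SetAttributeQuantization(encoderModule.TEX_COORD, 12);
}
```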

There shouldn't really be any more gotchas. We may add the option to skip deduplication to the basic Encoder as well, but I don't have an ETA for that.

donmccurdy commented 1 year ago

@ondys when using the ExpertEncoder API with the "deduplicate_values" option disabled, it still appears necessary to switch to the sequential encoder to prevent the vertex count from changing. Is that expected, or should this option have the same effect on edgebreaker encoding?

The combination of sequential encoding + deduplicate_values=false appears to work as intended, thanks!

donmccurdy commented 1 year ago

Possibly related issue:

donmccurdy commented 4 months ago

I believe this issue can be closed.