If the input model is already quantized with the same settings with
respect to position encoding, the computed quantization parameters
result in a no-op transform (offset 0, scale 1). However, repeated
invocations of gltfpack would each add one extra node with a no-op
transform. With this change, if we can afford to attach the mesh
directly, we do so.
Note that in the future, we could also merge node transforms via the
hierarchy, but that should be a separate optional optimization as it can
interfere with application processing logic.