jkuhlmann / cgltf

Single-file glTF 2.0 loader and writer written in C99
MIT License

Double-precision transform hierarchy #228

Open · zeux opened this issue 10 months ago

zeux commented 10 months ago

Currently cgltf uses the cgltf_float typedef for all floating-point numbers that are part of the scene definition, including materials, transforms, and vertex attributes.
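
For reference, the typedef in question is just an alias for a 32-bit float (shown here from memory; check cgltf.h for the exact line):

```c
/* cgltf.h */
typedef float cgltf_float;
```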

Unfortunately, single-precision floating-point numbers may not be enough to represent node translations. This is a prominent problem when glTF is used for 3D Tiles, as these files may use geospatial coordinate systems that place the origin at the center of the Earth and the object on the surface of the Earth, which is Pretty Far Away from the origin.
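
To make the magnitude of the problem concrete, here is a small standalone demonstration (not cgltf code): at geocentric magnitudes around the WGS84 equatorial radius, adjacent 32-bit floats are about half a metre apart, so sub-metre translations are simply not representable.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* A node translation roughly on the Earth's surface in a geocentric
       (ECEF-style) coordinate system, in metres. */
    double x  = 6378137.0;      /* ~WGS84 equatorial radius */
    float  xf = (float)x;       /* exactly representable, conveniently */

    /* Spacing between adjacent floats at this magnitude: about 0.5 m. */
    printf("ulp at %.0f m: %g m\n",
           x, (double)nextafterf(xf, INFINITY) - (double)xf);

    /* Moving the node by 10 cm is lost entirely in single precision. */
    float moved = xf + 0.1f;
    printf("0.1 m offset survives? %s\n", moved != xf ? "yes" : "no");
    return 0;
}
```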

This can be worked around by carefully placing nodes at locations that happen to be exactly representable as 32-bit floats, but that requires special data preparation that doesn't seem to be commonplace. Meanwhile, Web glTF loaders typically handle large transforms without issues because they default to 64-bit floating-point math, so you only need to do camera-relative rendering to avoid GPU precision issues.
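
For context, camera-relative rendering amounts to subtracting the camera position from the translation in double precision before narrowing to float for the GPU, so the large magnitudes cancel first. A minimal sketch, assuming the loader exposed double-precision world matrices (the struct and function names here are hypothetical, not cgltf API):

```c
#include <stddef.h>

/* Hypothetical: a 4x4 world matrix accumulated in double precision,
   column-major as in glTF. */
typedef struct { double m[16]; } world_xform_d;

/* Produce a float matrix for the GPU with the translation expressed
   relative to the camera, so the subtraction happens in doubles. */
static void to_camera_relative(const world_xform_d* world,
                               const double cam_pos[3],
                               float out[16])
{
    for (size_t i = 0; i < 16; ++i)
        out[i] = (float)world->m[i];

    /* Translation lives in the last column of a column-major matrix. */
    out[12] = (float)(world->m[12] - cam_pos[0]);
    out[13] = (float)(world->m[13] - cam_pos[1]);
    out[14] = (float)(world->m[14] - cam_pos[2]);
}
```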

Ideally it should be possible to configure cgltf to use 64-bit floats for node transforms. Crucially, just redefining cgltf_float today wouldn't work because, among other things, it is used to extract accessor data (e.g. cgltf_accessor_unpack_floats), which should keep using 32-bit floats for performance and memory reasons.
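
One possible shape for this, purely as a sketch rather than an API proposal: a compile-time switch that widens only the transform fields, while accessor unpacking keeps writing plain 32-bit floats. The names CGLTF_DOUBLE_TRANSFORMS, cgltf_transform_float, and sketch_node below are all hypothetical.

```c
/* Hypothetical compile-time switch; none of these names exist in cgltf today. */
#ifdef CGLTF_DOUBLE_TRANSFORMS
typedef double cgltf_transform_float;
#else
typedef float cgltf_transform_float;
#endif

/* Node transform fields would use the widenable type... */
typedef struct sketch_node {
    cgltf_transform_float translation[3];
    cgltf_transform_float rotation[4];
    cgltf_transform_float scale[3];
    cgltf_transform_float matrix[16];
} sketch_node;

/* ...while accessor unpacking would keep taking a 32-bit cgltf_float
   output buffer, so vertex data stays compact. */
```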

It's not clear to me whether we need double precision for just node transforms, or for everything that isn't a vertex attribute. For example, materials have a handful of floating-point numbers for which precision should be inconsequential; does that matter?

I'm also unsure whether we should invent a new type, like cgltf_real, for this, or keep using cgltf_float and add cgltf_float32 in the couple of places where 32-bit floating-point numbers are required.
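
To make the naming question concrete, the two alternatives might look roughly like this (again a sketch; SKETCH_OPTION_A and both typedef layouts are hypothetical):

```c
/* Two hypothetical naming schemes; neither exists in cgltf today. */
#if defined(SKETCH_OPTION_A)
/* Option A: introduce cgltf_real for scene values and leave cgltf_float
   as the 32-bit type used for accessor unpacking. */
typedef double cgltf_real;
typedef float  cgltf_float;
#else
/* Option B: let cgltf_float widen to double when configured, and add
   cgltf_float32 for the few places that must stay 32-bit. */
typedef double cgltf_float;
typedef float  cgltf_float32;
#endif
```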