I have an OBJ file that is placed at a specific position on the earth, which means the coordinates in the file are large numbers. As far as I can see, obj2gltf uses 32-bit floating point numbers both when parsing the OBJ file and when writing the glTF file. This means the coordinates lose precision and the geometry ends up distorted.
As I understand it, the reason for using 32-bit floating point numbers is that WebGL doesn't support 64-bit floating point numbers?
The way I've solved this for now is to subtract an offset from all the coordinates in the file before converting it with obj2gltf, and then set a translation with that same offset on the glTF afterwards. This works, but I was wondering if you have any other solution, and whether this is something you could fix, or a limitation you can't work around because of WebGL?
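For reference, a minimal sketch of the pre-processing step described above. This is not part of obj2gltf; it assumes a simple OBJ where vertex positions are plain `v x y z` lines (no vertex colors or extra components), and the helper name `shift_obj_vertices` is my own:

```python
def shift_obj_vertices(obj_text, offset):
    """Subtract `offset` (x, y, z) from every vertex position line so the
    mesh sits near the origin before conversion with obj2gltf. The same
    offset is then applied back as the glTF node's `translation`, where
    it is stored as a single high-magnitude value instead of being baked
    into every 32-bit vertex coordinate."""
    out = []
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            # Shift the position in 64-bit floats, keep everything else as-is.
            x, y, z = (float(p) - o for p, o in zip(parts[1:4], offset))
            out.append(f"v {x} {y} {z}")
        else:
            out.append(line)
    return "\n".join(out)

obj = "v 6378137.5 0.25 0.125\nf 1 1 1"
shifted = shift_obj_vertices(obj, (6378137.0, 0.0, 0.0))
```

After conversion, the offset goes back into the glTF JSON by hand, e.g. `"translation": [6378137.0, 0.0, 0.0]` on the node that references the mesh.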
Here is the OBJ file: high-coordinates.obj.txt
Here it is directly converted to glTF: high-coordinates-distorted.gltf.txt
And here it is converted by subtracting coordinates and applying a translation: high-coordinates-proper.gltf.txt