petrbroz / svf-utils

Utilities for working with the SVF file format used by Autodesk Platform Services.
https://petrbroz.github.io/svf-utils/
MIT License

Center models that require double precision & store offset #47

Closed FreakTheMighty closed 2 years ago

FreakTheMighty commented 2 years ago

When using the --center option, it would be helpful to recover the offset. Perhaps the value could be stored inside the glTF using this Cesium extension.


Adding a bit more context:

Models that are very far from the origin suffer from rendering artifacts after being converted to glTF. The typical solution is to shift the large (double-precision) values in the model into a more reasonable single-precision floating-point range, then store that offset so the double-precision values can be recovered later.

Example of rendering artifacts when using Forge and forge-convert-utils to go IFC -> SVF -> GLB (see attached screenshot image-20220311-155844).
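The shift-then-store idea can be sketched as follows. This is a minimal illustration, not part of forge-convert-utils; the function name and return shape are made up:

```typescript
// Sketch: shift double-precision positions near the origin so they survive
// the cast to 32-bit floats, and keep the subtracted offset in doubles.
// (Illustrative only; not the converter's actual API.)
function recenter(positions: Float64Array): { positions: Float32Array; offset: [number, number, number] } {
  // Use the bounding-box center as the offset.
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < positions.length; i += 3) {
    for (let j = 0; j < 3; j++) {
      min[j] = Math.min(min[j], positions[i + j]);
      max[j] = Math.max(max[j], positions[i + j]);
    }
  }
  const offset: [number, number, number] = [
    (min[0] + max[0]) / 2,
    (min[1] + max[1]) / 2,
    (min[2] + max[2]) / 2,
  ];
  // Subtract in double precision, then narrow to float32.
  const out = new Float32Array(positions.length);
  for (let i = 0; i < positions.length; i += 3) {
    out[i] = positions[i] - offset[0];
    out[i + 1] = positions[i + 1] - offset[1];
    out[i + 2] = positions[i + 2] - offset[2];
  }
  return { positions: out, offset };
}
```

The recovered offset would then be written somewhere a consumer can find it, e.g. the proposed CESIUM_RTC extension.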

wallabyway commented 2 years ago

isn't the offset stored in the parent glTF node, when you add the --center option?

we could duplicate this same value, into the cesium extension...

```json
"extensions": {
    "CESIUM_RTC": {
        "center": [6378137.0, 0.0, 0.0]
    }
}
```

would that work?

FreakTheMighty commented 2 years ago

@wallabyway thanks for the feedback. I haven't checked the root node, but I would hope that the transform is not applied there, as very large transforms need to be handled in a special way by many real-time rendering engines. For example, in Unity, if we apply a 6378137.0-unit translation, the model will appear to jump erratically because the engine only supports single-precision floats.

Instead, we would like the model very close to the origin and have the offset stored elsewhere in case we need to perform a coordinate system mapping.

FreakTheMighty commented 2 years ago

@wallabyway , but it does look like you're right https://github.com/petrbroz/forge-convert-utils/blob/ab7057ba1ad465b725667873320fa2d9a745cbfd/src/gltf/writer.ts#L226-L242

Perhaps we need an additional option, as I can see how the current behavior could be valuable for different use cases. For large offsets, the offset needs to be applied either to the individual nodes' transforms or, in some cases, to the mesh data itself.

wallabyway commented 2 years ago

The vertices and the node matrix offset should be normalized, i.e. the 16-bit float precision in the vertices should be maximized around the origin, with offsets moved into the 32-bit float precision in the node matrix.

Is this not the case? Perhaps you have an example?

FreakTheMighty commented 2 years ago

@wallabyway I was with you right up until you said it should be moved into the 32-bit float node matrix. I think it should go into the extension to avoid having 16-bit-precision rendering engines apply the transform.

With regards to the normalization, I think it can be a bit complicated because the translation could happen anywhere in the hierarchy, all the way down to the vertices. The easier thing is to bake the transforms into the vertices and then normalize, but this means losing the relative transforms.

petrbroz commented 2 years ago

I think what @wallabyway is referring to is using a 3x3 matrix of floats representing rotation and scale, together with 3 doubles representing the offset. I'm not sure if Unity does that, though.

@FreakTheMighty do you have any sample SVF that would demonstrate this issue (glTF geometry being "wobbly" due to floating point precision issues in Unity)?

FreakTheMighty commented 2 years ago

@petrbroz let me work on getting you a good sample of that. You're right, that's a good place to start.

I feel like we're on the same page with regards to storing an offset with double precision, that is the CESIUM_RTC extension that was originally proposed.

Neither Unity nor any of the existing Unity glTF plugins that I'm aware of will do anything with that offset natively. So storing the offset leaves it up to the developer to handle it however they see fit.
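A consumer might read the stored offset back along these lines. This is a hypothetical sketch; only the `CESIUM_RTC` / `center` property names come from the extension, the rest is made up:

```typescript
// Minimal shape of a parsed glTF JSON carrying the CESIUM_RTC extension.
interface GltfJson {
  extensions?: { CESIUM_RTC?: { center: [number, number, number] } };
}

// Pull the double-precision offset out of the glTF, if present, so the
// application can decide how to apply it (camera-relative rendering, etc.).
function getRtcCenter(gltf: GltfJson): [number, number, number] | null {
  return gltf.extensions?.CESIUM_RTC?.center ?? null;
}
```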

wallabyway commented 2 years ago

Maybe what you are looking for is 3D-Tiles-Next ?

To get a very large offset at planet scale and high precision, create a tileset.json file, put in two glTF file references, and position each glTF with crazy large offsets.

Treat 3D-Tiles-Next as a 'sort of' scene graph (a JSON file) with scene node boxes that can point directly to a glTF URL. You don't need the HLOD feature of 3D-Tiles-Next.

Also, if you are interested in this area, we are discussing other formats besides CESIUM_RTC, like GeoPose, 3D-Tiles-Next, I3S, and a few others. You can join the public workspace gltfworkspace.slack.com and look for the channel #geospatial.

FreakTheMighty commented 2 years ago

Apologies for the delay. I have some screenshots to share, but I still need to get approval for the source files. I will post files as soon as I have approval.

| Source | Rendering | Viewer |
| --- | --- | --- |
| IFC (original) | image-20220311-155656 (screenshot) | BIM Vision |
| GLB, no centering | image-20220311-155844 (screenshot) | https://sandbox.babylonjs.com/ |
| GLB, centered | image-20220311-155744 (screenshot) | https://sandbox.babylonjs.com/ |

@wallabyway we use 3D Tiles (v1) elsewhere in our product, but for this tool it feels a little heavy-handed to require authoring a tileset.json, especially if this project isn't intending to support more features from the spec. With that said, it is unclear what the status of CESIUM_RTC is, as it was originally authored against glTF 1.0 four years ago. 🤷 Maybe the most "standard" and modern approach would be to spit out a tileset.json that uses the new 3DTILES_content_gltf extension.
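For reference, a minimal tileset.json along those lines might look like this. This is a sketch assuming the 3DTILES_content_gltf extension; the URI, bounding volume, and transform values are illustrative:

```json
{
  "asset": { "version": "1.0" },
  "extensionsUsed": ["3DTILES_content_gltf"],
  "extensionsRequired": ["3DTILES_content_gltf"],
  "geometricError": 100,
  "root": {
    "boundingVolume": { "box": [0, 0, 0, 50, 0, 0, 0, 50, 0, 0, 0, 50] },
    "geometricError": 0,
    "refine": "ADD",
    "transform": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 6378137.0, 0.0, 0.0, 1.0],
    "content": { "uri": "model.glb" }
  }
}
```

The large offset lives in the tile's column-major `transform` (stored as JSON numbers, i.e. doubles), while the GLB itself stays centered near the origin.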

FreakTheMighty commented 2 years ago

@petrbroz @wallabyway I've attached two zip files. The first contains a source IFC and two resulting GLBs after undergoing IFC -> SVF -> GLB conversion. One of the GLBs has been converted with the --center option, the other without. Note that both display floating-point precision artifacts, but without --center the GLB renders with new artifacts any time the camera moves.

Test_View_ifc_and_glb.zip svf_output.zip

| Source | Rendering | Viewer |
| --- | --- | --- |
| IFC (original) | (screenshot) | BIM Vision |
| GLB, no centering | (screenshot) | https://sandbox.babylonjs.com/ |
| GLB, centered | (screenshot) | https://sandbox.babylonjs.com/ |

FreakTheMighty commented 2 years ago

@petrbroz I want to note two challenges for offsetting models with these sorts of large distances.

  1. Large values can exist anywhere in the transformation hierarchy, all the way down to the individual vertices. I don't imagine this is "best practice", but I received a customer file that was very far from the origin where there were only identity transforms and each vertex had very large values. This makes determining an appropriate offset, and figuring out where it should be applied, a little less straightforward.
  2. I haven't run into this, but of course you could have a file where no single offset would produce good results for the whole file, for example a file that contains two sites that are very far apart.
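Regarding challenge 1, one way to sidestep the question of where the large values live is to measure everything in world space, i.e. after applying each node's matrix to its vertices. An illustrative sketch (types and names are made up):

```typescript
type Vec3 = [number, number, number];

// Apply a column-major 4x4 matrix (glTF convention) to a point.
function transformPoint(m: number[], p: Vec3): Vec3 {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8] * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9] * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
  ];
}

// World-space bounding-box center over all (node matrix, positions) pairs.
// Large values end up in the result whether they came from transforms or
// from the vertex data itself.
function worldCenter(meshes: { matrix: number[]; positions: Vec3[] }[]): Vec3 {
  const min: Vec3 = [Infinity, Infinity, Infinity];
  const max: Vec3 = [-Infinity, -Infinity, -Infinity];
  for (const { matrix, positions } of meshes) {
    for (const p of positions) {
      const w = transformPoint(matrix, p);
      for (let j = 0; j < 3; j++) {
        min[j] = Math.min(min[j], w[j]);
        max[j] = Math.max(max[j], w[j]);
      }
    }
  }
  return [(min[0] + max[0]) / 2, (min[1] + max[1]) / 2, (min[2] + max[2]) / 2];
}
```

Challenge 2 (two far-apart sites in one file) would still defeat any single offset; that case would need per-cluster offsets or a tiling scheme.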

wallabyway commented 2 years ago

Thanks @FreakTheMighty - the center.glb file looks 'good' in Three.js (see screenshot).

However, Babylon.js seems to have a problem... Hmmm 🤔 (@bghgary) Also, when I add MeshOpt and/or mesh quantization to the GLB (@zeux), it appears more like the Babylon.js version, i.e. the broken box in your screenshot above. Seems like a precision issue, but I'm not sure.

I'll need to dig further into why three.js works but Babylon.js doesn't, and also why gltfpack -noq still shows precision issues.

Validation Report Errors:

  1. Bad accessors from the report. I wonder what that is, @petrbroz?
  2. It says 'Generator: forge-convert-utils', yet forge-convert-utils doesn't save GLB. What tool did you use to convert the glTF into this GLB? It's not reporting itself.

(screenshot: Screen Shot 2022-04-08 at 12.21.00 PM)

FreakTheMighty commented 2 years ago

@wallabyway a colleague generated those GLBs, so I'll need to get back to you on how they were generated. Meanwhile, I've regenerated from the attached SVF and I'm attaching the original glTF as output from forge-convert.

0.zip centered-0.zip


With regards to three.js working while Babylon.js doesn't, I'm pretty curious to understand why. With that said, I think the glTF spec is pretty clear that the matrix in the node is stored as a float, not a double. I think this is confusing because technically JSON can store doubles, but the glTF spec specifies float.

https://github.com/KhronosGroup/glTF/blob/8e798b02d254cea97659a333cfcb20875b62bdd4/specification/2.0/schema/node.schema.json#L27-L36

I think this is pretty important to us though, because even if we put an offset in the transform, not all viewers will handle it correctly, and some will handle it very badly :D.

bghgary commented 2 years ago

I think the glTF spec is pretty clear, the matrix in the node is stored as a float, not double.

I'm not sure what you are reading that indicates this. The referenced schema says floating-point values which doesn't indicate single-precision or double-precision. I will see if I can determine what Babylon.js is doing.

FreakTheMighty commented 2 years ago

@bghgary perhaps I've misread or conflated those terms. In C#, the types float vs. double denote precision. Also, the spec does say that accessors are limited to single precision.

https://github.com/KhronosGroup/glTF/blob/8e798b02d254cea97659a333cfcb20875b62bdd4/specification/2.0/Specification.adoc#3622-accessor-data-types

The other reason I thought it was single precision is that I thought WebGL has a single-precision limitation. Issues like this one seem to point to the problem and workarounds: https://github.com/mapbox/mapbox-gl-js/issues/7268#issuecomment-420711536
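The single-precision limitation is easy to demonstrate from JavaScript/TypeScript with Math.fround, which rounds a double to the nearest 32-bit float. Near coordinates of ~6.4e6 (the WGS84 radius from the CESIUM_RTC example above), float32 can only represent steps of 0.5:

```typescript
// float32 has a 24-bit significand, so for values in [2^22, 2^23) the
// spacing between representable numbers is 0.5. Sub-meter detail in
// planet-scale coordinates is therefore lost.
const x = 6378137.1;              // a point 10 cm past 6378137 m
const asFloat32 = Math.fround(x); // rounds to 6378137.0 - the 10 cm are gone
const step = Math.fround(6378137.5) - asFloat32; // representable step: 0.5 m
```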

bghgary commented 2 years ago

It's very likely due to single-precision issues; Babylon.js will eventually use the matrix as floats in the shader. I'm just noting that floating-point doesn't imply single-precision. The accessor data types are for accessors; the matrix you are referring to is stored as JSON properties, which are not accessors. Maybe @lexaknyazev can help us with this part. @lexaknyazev Is there a restriction on what kind of float values are accepted in the matrix property of a node?

FreakTheMighty commented 2 years ago

https://github.com/KhronosGroup/glTF/issues/2140#issuecomment-1093975959

I think this answers the question about the spec pretty clearly, but it doesn't exactly answer how to deal with the problem at hand. I have limited control over how an engine handles these values.

javagl commented 2 years ago

I stumbled over this via the linked issue, and without knowing much of the specific context here (and sorry if this hint is already obsolete):

(I don't have the perfect place to link to for the latter - for now, it is just one side note in the spec.)

And this indeed raises the question of what a rendering engine will do when the translation contains a value that is "too large for a float (but not too large for a double)" - it boils down to some trickery when passing the data to the GPU, but that has to be sorted out by the rendering experts...

deltakosh commented 2 years ago

Hey team! I've been investigating this issue and I have a way to fix it.

When creating the Babylon Engine, we enable it with 32-bit precision matrices. With very large numbers like here, that is not enough. We picked 32 bits first for perf reasons, of course.

The solution is to use 64 bits. To do so, simply initialize the engine with this option:

```typescript
// Opt in to 64-bit (double) precision matrices when creating the engine.
this._engine = new Engine(this._canvas, antialias, {
    useHighPrecisionMatrix: true
});
```

The output: (screenshot attached)

deltakosh commented 2 years ago

By the way, I noticed that the GLB file has a lot of errors (see attached validation screenshot).

deltakosh commented 2 years ago

Update: We decided to switch the Sandbox to 64-bit precision by default ;)

Popov72 commented 2 years ago

Could be relevant: https://github.com/BabylonJS/Babylon.js/issues/4516#issuecomment-769069062

deltakosh commented 2 years ago

Sandbox update is live

FreakTheMighty commented 2 years ago

@petrbroz @wallabyway I think this ticket could be closed. It does seem like best practice is to put the offset into the root node, and that it's up to the viewer to handle it well.

petrbroz commented 2 years ago

Sorry to chime in this late, guys. It looks like a lot has been discussed here. If I understand it correctly, there's no need to address the double-precision issue in the converter, but we still need to figure out what the validation issues are. I will look into that next week.