Closed: sakrist closed this issue 6 years ago.
Hi @sakrist, thanks for the idea. For my understanding, you are suggesting just to make the schema more explicit? Not an actual change to the spec?
Could you also expand on:

> Usually not described properties are skipped.

What skips them?
> Hi @sakrist, thanks for the idea. For my understanding, you are suggesting just to make the schema more explicit? Not an actual change to the spec?

Yes. It's actually for schema compilers. Some of them skip empty properties, some create an empty object or dictionary.
Ah, I see. Sounds like a good change to make. If you have the bandwidth, a pull request would be deeply appreciated.
We should also test to make sure that https://github.com/AnalyticalGraphicsInc/wetzel still generates the correct reference doc.
If anyone is interested in getting started contributing to the glTF spec/schema, this is a nice beginner issue to start with.
I think we should require (or at least strongly suggest) that the root of any `extras` property be an object. This is how it's used in all implementations today.
> I think we should require (or at least strongly suggest) that the root extras property be an object.
+1. `extras` handling would be simpler in Importer/Exporter if we allow only an object. See https://github.com/mrdoob/three.js/issues/14343
Oh, a point that may have been unclear from my earlier comment: by "root" I meant the entire value of any `.extras` entry, as opposed to subproperties like `.extras.foo`. I was not referring to the `.extras` property of the root glTF object specifically. Subproperties like `.extras.foo=25` can certainly be primitive values.
In short, three.js exposes custom node data to users as an object. When `.extras` is an object, the mapping to the custom data we provide developers is intuitively 1:1. When `.extras` is not an object we either have to ignore it or make up an arbitrary property name for it, like `.gltfExtras=25`. Blender import is probably in the same situation here, where it needs to convert custom node data to a Python dictionary somehow.

My assumption is that no tools currently write `.extras` as a primitive value, and using an object is the de facto best practice anyway. It would simplify client implementations to remove the possibility of other values. If tightening the spec isn't safe, does it seem reasonable for clients like three.js and Blender to just ignore primitive `.extras` values?
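The "ignore non-object extras" behavior proposed above can be sketched as follows. This is a minimal illustration in Python, not actual three.js or Blender code; the function name `read_extras` is hypothetical.

```python
import json

def read_extras(node):
    """Return a node's extras only when it is a JSON object (a dict);
    ignore primitive or missing extras, as proposed in this thread."""
    extras = node.get("extras")
    return extras if isinstance(extras, dict) else None

# Example glTF fragment: one object-valued extras, one primitive, one absent.
gltf = json.loads('{"nodes": [{"extras": {"foo": 25}}, {"extras": 25}, {}]}')
print(read_extras(gltf["nodes"][0]))  # {'foo': 25}
print(read_extras(gltf["nodes"][1]))  # None (primitive extras ignored)
print(read_extras(gltf["nodes"][2]))  # None (no extras present)
```

An importer could then map the returned dict 1:1 onto its own custom-data container, which is the intuitive behavior the comment above describes.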
/cc @bghgary
> does it seem reasonable for clients like three.js and Blender to just ignore primitive .extras objects?

I think so, considering there is no specific requirement on how `extras` is to be used.
@bghgary I'm curious as to why it's not "safe" to change the schema for `extras` to specify that it's an object. Certainly there will be legacy versions of software that validate against the old schema but, as far as we know, none of those pieces of software do anything with primitive extras. Is there concern that a malicious payload would try to exploit a primitive extras object that would pass legacy validation somehow? Is there concern that legacy validation will somehow get paired with a newer parser that doesn't tolerate a primitive extras object? What's the actual snag here?
The concern is backwards compatibility. We don't know for sure if there are assets out there that use non-object extras. We unfortunately don't have telemetry for this. For the sake of argument, let's assume there are. Many of our products validate against the schema on load. If the schema changes and we update the schema on our products, then assets with non-object extras that used to load will no longer load. In general, we should consider all changes to the schema a breaking change and avoid doing it.
@bghgary OK, thanks. I will point out that the suggested path forward here will have much the same effect. If we have Blender, three.js, and other clients automatically ignore any primitive extras, those assets will still pass validation but the extras feature won't work for them. For some clients there is no reasonable way around this, other than to ignore, warn about, or even mutilate primitive extras. The validator would probably still be updated to flag primitives as a warning, on account of their not being supported by a large portion of the ecosystem. Future glTF applications built against the current schema could start taking advantage of the continued availability of primitive extras, compounding the problem by introducing new such assets into the ecosystem.

Not saying we have to go one way or the other, just trying to understand the exact difference in impact between a small "breaking" schema change and the possibility of broken primitive extras growing in number in the ecosystem.
Thanks all. Per this thread and WG discussion, we cannot safely require `.extras` to be an object in this version of glTF. However, it should be considered best practice to use an object anyway, and (because use of `.extras` is not mandated) it's fine for clients like Blender and three.js to ignore non-objects.
Leaving this issue open with the "breaking change" label, for consideration in a future version of glTF, but this won't be changed in the current version.
@donmccurdy Could we add a note to the spec about possible compatibility issues with three.js? Otherwise, I don't think that validation hint is justified.
This is not specific to three.js. In every engine and DCC tool I'm aware of with a mechanism for application-specific metadata, that data is stored in key/value pairs:
I feel pretty strongly that using an object for extras should be considered best practice. In general best practices are more likely to be followed if they're hinted at by the validator, than if they were just implementation notes. Could we include both?
> Could we include both?
Yes, that's the goal. The validation hint will be implemented in the validator's next update.
For `extension`, glTF has `object` type in `additionalProperties`, which is a representation of a dictionary. `extras` does not have any specification. I suggest adding something similar to the code below:

Motivation: Usually properties that are not described are skipped.