ziriax opened this issue 6 years ago (status: Open)
I believe one option would be to export the joint hierarchy and write `node.skin`, without including `node.mesh` or any mesh/material data. Assuming the skeletons match for each exported file, it seems reasonable to assume that the animations can be applied to the original skinned mesh. I say "reasonable" but not guaranteed, because there is nothing currently in the spec about references to external .gltf files, or about how a loader should handle a skin without a mesh. You may find it easier to do this export and combine the resulting glTF files in an offline processing step.
For example, in three.js this is supported, if and only if the joints are uniquely and consistently named.
Hm, per @lexaknyazev's note (https://github.com/KhronosGroup/glTF/issues/1403#issuecomment-422077297), omitting the mesh is not allowed by the schema (which is probably all the better for predictable behavior from existing engines). Instead, I suppose you could write a trivially small mesh into each file, but that feels much more like a hack.
I guess it's safer to say the spec doesn't have a clear mechanism for this, and merging single-animation files into a multi-animation glTF asset with an offline script seems like the least-fragile currently-available option.
One more note: That "trivially small mesh" would still need weights and joints data for an asset to pass validation.
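The offline merge step suggested above can be sketched in a few lines of Python. This is only a rough sketch under stated assumptions (the function name and merge strategy are my own, not from any spec or tool): it assumes plain JSON glTF, unique node names, and that each animation file carries its own buffers; it ignores GLB packing, sparse accessors, and byte alignment.

```python
def merge_animations(base: dict, anim: dict) -> dict:
    """Append every animation in `anim` to `base`, retargeting channels
    to the base file's nodes by matching node names.

    Hypothetical helper: assumes unique node names and self-contained
    animation buffers in `anim`.
    """
    name_to_index = {n["name"]: i for i, n in enumerate(base.get("nodes", []))}
    acc_off = len(base.get("accessors", []))
    bv_off = len(base.get("bufferViews", []))
    buf_off = len(base.get("buffers", []))

    # Copy the animation's keyframe data wholesale, shifting indices so
    # they point into the combined arrays.
    for bv in anim.get("bufferViews", []):
        bv = dict(bv)
        bv["buffer"] += buf_off
        base.setdefault("bufferViews", []).append(bv)
    for acc in anim.get("accessors", []):
        acc = dict(acc)
        if "bufferView" in acc:
            acc["bufferView"] += bv_off
        base.setdefault("accessors", []).append(acc)
    base.setdefault("buffers", []).extend(anim.get("buffers", []))

    for animation in anim.get("animations", []):
        merged = {"name": animation.get("name", ""), "samplers": [], "channels": []}
        for s in animation["samplers"]:
            merged["samplers"].append({
                "input": s["input"] + acc_off,
                "output": s["output"] + acc_off,
                "interpolation": s.get("interpolation", "LINEAR"),
            })
        for c in animation["channels"]:
            # Retarget by name: look up the equivalent node in the base file.
            src_name = anim["nodes"][c["target"]["node"]]["name"]
            merged["channels"].append({
                "sampler": c["sampler"],
                "target": {"node": name_to_index[src_name],
                           "path": c["target"]["path"]},
            })
        base.setdefault("animations", []).append(merged)
    return base
```

A real pipeline would additionally merge (or rewrite) buffer URIs and validate the result, but the index bookkeeping above is the core of the trick.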
Thanks for the feedback, it confirms my gut feeling :-)
Any actions left here?
It is fine to have a `skin` object with a joint hierarchy and animations but without a "target" mesh. Such an asset won't be rendered (since it won't contain any vertex data), but it will pass validation with "unused object" hints. Linking it with target meshes at runtime is obviously the engine's job.
At WonderMedia, the approach we aim for is to:

- export the rig to a full glTF file, where each node has the name of the corresponding Maya node;
- export each animation to a full glTF file, where each node has the name of the corresponding Maya node, but using a separate buffer for the mesh;
- make a tool that processes these files for game export. It performs re-indexing when it turns out the rig file was modified; each animation file will then use the same mesh buffer as the rig glTF file, or a single glTF file will be generated.

That is purely theoretical, so it will most likely not work ;-)

So our overall idea is to abuse glTF buffers for sharing the mesh data, while the node hierarchy is duplicated in each file, or we will just generate a single file from the separately exported files.
export each animation to a full GLTF file, where each node has the name of the corresponding Maya node. However, we will use a separate buffer for the mesh.
I think you could simplify this slightly, by omitting the meshes entirely from the animation files. But each animation will still need a buffer for the animation data, of course. Then only the node hierarchy is duplicated.
I guess you are right, but the advantage of keeping the mesh data in the animation file is that it can be viewed by any standard application. Another option would be to make our Maya2glTF exporter aware of the rig glTF, and have the anim glTF file refer to the same mesh buffer... Mmm, that way we can have our cake and eat it ;-)
Maybe for glTF 3, and for the sake of simplification, scene representation and animations could go in separate schemas?
Hi all. My 50 cents.
One more advantage of separating animations from a mesh (which seems not to have been mentioned here) is the possibility to share animations between different models.
Also, it is possible to have a model which consists of optional sub-models (parts): different human heads, different outfits, etc. They share a skeleton, and can be merged together (in game) in different combinations. So the scene actually consists of different "components": a few meshes, a few animations, and they could be combined almost arbitrarily. Placing everything into a single file won't give good results anyway: how would a model with multiple heads appear? On the other hand, the current glTF spec doesn't provide any way to store these parts separately for later assembly. Of course it is possible to extend the format or interpret the spec in one's own way, but this may lead to "fragmentation" of the format, making things incompatible between different glTF software.
In my opinion, it would be nice to have the spec say that an animation may be stored separately from a mesh, providing some attributes and omitting others.
Maybe for glTF 3, and for the sake of simplification, scene representation and animations could go in separate schemas?
The challenge here is that an animation must refer to specific nodes in the scene hierarchy. Node indices are not an ideal source of unique identifiers across files, so this (while possible) increases complexity.
... the current glTF spec doesn't provide any way to store these parts separately for later assembly.
There are three.js users doing this already — the glTF specification does not prevent it, and it requires no spec changes. The joint hierarchy is replicated in each animation file, but the mesh data is not, so it's a relatively small amount of metadata overhead. At runtime the animation is just retargeted to the (identical) joint hierarchy from the mesh file.
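Since this retargeting relies on the joint hierarchies being identical, it is worth checking that precondition before applying an animation. A small sketch of such a check, assuming plain JSON glTF with a `name` on every node (`joints_match` is a hypothetical helper, not part of any spec or engine):

```python
def joints_match(gltf_a: dict, gltf_b: dict) -> bool:
    """Check that two glTF documents contain structurally identical node
    hierarchies (same names, same parent/child topology), which is the
    precondition for retargeting animations by joint name.

    Hypothetical helper; assumes every node has a unique `name`.
    """
    def signature(gltf, index):
        # A node's signature is its name plus its children's signatures,
        # so equal signatures imply equal subtrees.
        node = gltf["nodes"][index]
        return (node.get("name"),
                tuple(signature(gltf, c) for c in node.get("children", [])))

    roots_a = gltf_a["scenes"][gltf_a.get("scene", 0)]["nodes"]
    roots_b = gltf_b["scenes"][gltf_b.get("scene", 0)]["nodes"]
    return ([signature(gltf_a, r) for r in roots_a] ==
            [signature(gltf_b, r) for r in roots_b])
```

If this returns False, a name-based retarget would silently drop or misapply channels, so failing loudly at import time is usually the better choice.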
We plan to implement something like this as well. Currently the plan is to define an extension, roughly like this: `VENDOR_animation_retargeting`.

- Every animation target carries an `id`; the `id` must be unique across the gltf scene.
- `id` generation can be done using the node names and a pass to verify uniqueness.
- If the `id` is equal to the name, the `id` (and thus the extension) may be omitted.
- Besides the `animations` property, animation files export only the nodes (and perhaps a few triangles) necessary to be able to display the animation.

This aims to achieve a few things:

- When merging a gltf scene with additional animation files, anything but the `animations` property of the animation files is ignored.
- When a target is missing (its `id` does not exist in the gltf scene), it is ignored.

We also intend to use the animation target `id` to do animation blending in our engine. How to blend the animations will be defined outside of the .gltf files.
"nodes": [
{
"mesh": 0,
"name": "Teapot001",
"extensions":{
"VENDOR_animation_retargeting":{
"id":"Teapot001"
}
}
}
]
"animations": [
{
"name": "animation001",
"channels": [
{
"sampler": 0,
"target": {
"node": 0,
"path": "translation"
"extensions": {
"VENDOR_animation_retargeting": {
"id": "Teapot001"
}
}
}
}
],
"samplers": [
{
"input": 0,
"interpolation": "LINEAR",
"output": 1
}
]
},
]
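To make the proposal concrete, a loader could resolve these ids along the following lines. This is only an illustrative sketch of the scheme described above (the helper names are mine, not from any published schema), including the fallback to the node name when the extension is omitted and the rule that channels with unknown ids are ignored:

```python
def build_id_map(gltf: dict) -> dict:
    """Map retargeting ids to node indices. Per the proposal, a node's id
    comes from the VENDOR_animation_retargeting extension when present,
    falling back to the node name otherwise. Ids must be unique."""
    ids = {}
    for index, node in enumerate(gltf.get("nodes", [])):
        ext = node.get("extensions", {}).get("VENDOR_animation_retargeting", {})
        node_id = ext.get("id", node.get("name"))
        if node_id in ids:
            raise ValueError(f"duplicate retargeting id: {node_id!r}")
        ids[node_id] = index
    return ids


def resolve_channel_target(channel: dict, id_map: dict):
    """Return the node index a channel should drive in the merged scene,
    or None when the id is unknown (the channel is then ignored)."""
    ext = (channel["target"].get("extensions", {})
                            .get("VENDOR_animation_retargeting", {}))
    return id_map.get(ext.get("id"))
```

With the Teapot001 example above, `build_id_map` yields `{"Teapot001": 0}` and the translation channel resolves back to node 0, regardless of how node indices shifted during merging.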
@Selmar this way of splitting animations looks good to me, although I would use different naming for the extensions in Node and Channels.
I use automatic code generation, and having the same name for two extensions used in two different contexts would lead to generating two classes with the same name. Other schema-generated libraries might have the same problem.
Additionally, I think there are some proposals to support animating material properties; taking that into account, an animation might target not only a node but also a material sub-channel, which means that material objects might need to support the extension too!
@Selmar this way of splitting animations looks good to me, although I would use different naming for the extensions in Node and Channels.
I took this convention from the KHR_lights_punctual example, and I think it makes sense conceptually, because they are really tied together. I'm not strongly for or against.
Automatic code generation is cool by the way, what are you using?
Additionally, I think there are some proposals to support animating material properties; taking that into account, an animation might target not only a node but also a material sub-channel, which means that material objects might need to support the extension too!
Every animation target, indeed. We need to do the same for our camera animations (we're animating `yfov`).
@Selmar did this work make any progress? I'm interested in `VENDOR_animation_retargeting`.
We use an unofficial extension in our asset pipeline, called `ASOBO_animation_retargeting`. We have a JSON schema, but it's currently only published as part of our SDK. We haven't made any pull requests (to be honest, we didn't really think of it).

The extension we implemented is almost identical to what I described above; only for the IDs that we throw on the animation targets we decided to use a separate extension called `ASOBO_unique_id`, because its purpose may transcend that of animation retargeting.
Hi @Selmar - I'm struggling to get the engine to pick up the join. My XML is correct as per the SDK documentation but I'm still not able to get the two objects to merge in engine. Note: I don't need animation, just simple joining of gLTF objects.
Is the ASOBO_unique_id the common name that links the models? i.e.
in arm.gltf
`"name": "arm", "extensions": { "ASOBO_uniqueid": { "id": "body" } }`
in leg.gltf
`"name": "leg", "extensions": { "ASOBO_uniqueid": { "id": "body" } }`
? I'm having trouble getting the objects to merge. My XML is referencing them correctly.
Thanks for any help you can provide.
@SFSimDev Hey, that's very experimental of you! Unfortunately this feature is not yet live, and once it goes live it won't support skinned meshes yet. Also, the `unique_id` is to identify the object itself, so `leg` and `arm` would need to both be parented to a node with the unique id `body`.

This isn't really the right place to ask about something related to MSFS though; please use the support website for that.
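For what it's worth, my reading of that description is roughly the structure below. This is only a guess at the intended layout (the engine's actual schema isn't published in this thread), and note that the extension name given earlier is `ASOBO_unique_id`, while the snippets in the question spell it `ASOBO_uniqueid`:

```json
{
    "nodes": [
        {
            "name": "body",
            "children": [1],
            "extensions": { "ASOBO_unique_id": { "id": "body" } }
        },
        { "name": "arm" }
    ]
}
```

leg.gltf would mirror this, with a `leg` node under its own `body` node carrying the same unique id, so the engine can join both files at that shared node.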
Great, thank you for letting me know! I was referred to a page in the documentation here https://docs.flightsimulator.com/html/index.htm#t=Asset_Creation%2F3D_Models%2FSubmodel_Merging.htm that states the uniqueID for submodel merging has been implemented - Great to know I'm on the cutting edge ;)
Unfortunately I've lost a day digging into this though :-/
@SFSimDev Yes, I checked the documentation after you arrived here and noticed that it was published, this is an error on our side. Sorry!
It's cool - Looking forward to when it does arrive so that I can optimise some models. At least now I can stop headscratching :)
Elements do seem to be implemented, though. The UniqueID logic is in the build package, so it was failing on me; the problem was not knowing why (especially as I use Blender and not 3ds Max), so I'm writing bits by hand.
Any remaining actions here? Sounds like some workarounds were discovered.
I suppose one action could be for us to formalize the extensions we use to merge animations. They are still subject to change, though.
We are switching to a Maya workflow where we have a single animation per Maya scene. The anim scene references a rigged model scene.
I need to modify our open source Maya2glTF exporter to support this.
Artists would export the model first to glTF, and each animation clip to another glTF that doesn't contain the model. The latter would be an invalid glTF, unless it refers to the model somehow.
Then at runtime we need to load all these glTFs as if they formed a single one.
This allows incremental work by multiple people (we have multiple animators working on different animations for a single character in parallel).
As far as I understand the spec, this is not supported by glTF 2.0.
What approaches do you recommend for this?
Thanks!