From a technical standpoint, this is called animation retargeting.
Here's a brief search result from Google:
https://godotengine.org/qa/34084/is-it-possible-to-do-animation-retargeting-in-godot-3-0
However, the animations and skeletons are different, so asking for this particular feature won't get what you want.
I'm assuming the problem is: given a skeleton and some animations, one wants to play back those animations on a different skeleton.
Edited:
Not all skeletons are humanoid.
Well, I'm not sure what it's called exactly, but an example real-life use case would be that I have loads of mocap animations which I want to use on multiple human meshes.
For the retargeting to work, the mesh can be different, but the skeleton needs to be identical; otherwise the animations break. If you want your UVs and bone weight painting to work as well, the number of vertices should probably stay the same too, to my knowledge.
Both mesh and skeleton can be different. However, the approach I'm thinking of is untested so I don't know if it works or not.
How can the skeleton be different without the animations breaking or introducing animation bugs? Animation == translation of skeleton bones. If there is a new bone or the bone arrangement changes, the software would have to calculate the translation for these bones. But since this won't be backed by a clever machine learning AI, the intention of how bones should move is unknown to the algorithm. Therefore the result can only be an interpolation between points. This, however, is not what intention-driven animation is. In the case of a character, the result would have to be touched up by an animator. This is work that should be done in animation software like Blender, not in Godot. Or am I missing something here?
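For concreteness, here is a minimal sketch (Godot 3.x GDScript, untested) of what such a naive retargeting step amounts to: an Animation stores transform tracks addressed by a node path plus a bone name, so moving an animation onto a different skeleton is essentially rewriting those paths through a bone map. The `bone_map` dictionary is hypothetical, and the result only looks right when the two skeletons have matching rest poses and proportions; it does not solve the interpolation problem described above.

```gdscript
# Naive retargeting by rewriting track paths, assuming a hypothetical
# bone_map Dictionary of {source_bone_name: target_bone_name}.
func remap_animation(anim: Animation, bone_map: Dictionary) -> Animation:
    var out := anim.duplicate(true) as Animation
    for i in range(out.get_track_count()):
        if out.track_get_type(i) != Animation.TYPE_TRANSFORM:
            continue
        # Transform track paths look like "Armature/Skeleton:BoneName".
        var parts := str(out.track_get_path(i)).split(":")
        if parts.size() == 2 and bone_map.has(parts[1]):
            out.track_set_path(i, NodePath(parts[0] + ":" + bone_map[parts[1]]))
    return out
```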
I'll state a problem since I don't have a solution for it.
Given https://github.com/vrm-c/UniVRM/tree/master/Tests/Models/Alicia_vrm-0.51, which is a GLTF2 model, and a motion capture system like the https://en.wikipedia.org/wiki/HTC_Vive, which gives you anywhere from 3 to 8 or more sensors, each capable of reporting rotation and position.
The player standing in the VR area has a different skeleton than the Alicia GLTF2 model.
Solve the old skeleton so that the animations from the player are able to move the Alicia model in proper ways.
Edited:
For the problem in this topic, we can treat the location of every bone in the new animation as a sensor and use it to retarget the old skeleton's bones.
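As a rough sketch of the simplest piece of that problem (Godot 3.x; the node paths and bone name below are illustrative, not from the issue): each tracked sensor can directly override the global pose of the bone it is mapped to. Everything between the sensors, such as elbows and spine, still has to come from IK or interpolation, which is the hard part of driving an arbitrary skeleton from a handful of trackers.

```gdscript
extends Spatial

# Drive one bone from one VR tracker; "hand_R" and the node paths are
# assumptions for the sake of the example.
onready var skeleton: Skeleton = $Model/Armature/Skeleton
onready var tracker: ARVRController = $ARVROrigin/RightHand

func _process(_delta: float) -> void:
    var bone := skeleton.find_bone("hand_R")
    if bone == -1:
        return
    # Bring the tracker's world transform into the skeleton's local space.
    var pose := skeleton.global_transform.affine_inverse() * tracker.global_transform
    skeleton.set_bone_global_pose_override(bone, pose, 1.0, true)
```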
> How can the skeleton be different without the animations breaking or introducing animation bugs? Animation == translation of skeleton bones. If there is a new bone or the bone arrangement changes, the software would have to calculate the translation for these bones. But since this won't be backed by a clever machine learning AI, the intention of how bones should move is unknown to the algorithm. Therefore the result can only be an interpolation between points. This, however, is not what intention-driven animation is. In the case of a character, the result would have to be touched up by an animator. This is work that should be done in animation software like Blender, not in Godot. Or am I missing something here?
AFAIK you are correct, in that the skeleton has to be the same, or close enough. The software could, for example, also guess bone equivalence based on hierarchy; that way not all bones would have to be named the same. (I had this issue where the bones were the same but the names were not, so a lot of manual work was needed.) The user could, for example, select the root node in both skeletons, and Godot could match their motion as long as their hierarchies match.
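A sketch of that matching idea (Godot 3.x; `build_bone_map` is a hypothetical helper, not an existing engine API): match bones by name first, then fall back to the hierarchy when a bone's parent is already matched and there is only one plausible candidate on the other side. Bones that stay unmatched are simply skipped, which is where the manual work would remain.

```gdscript
# Returns {source_bone_index: target_bone_index}.
func build_bone_map(source: Skeleton, target: Skeleton) -> Dictionary:
    var map := {}
    # Pass 1: exact name matches.
    for i in range(source.get_bone_count()):
        var j := target.find_bone(source.get_bone_name(i))
        if j != -1:
            map[i] = j
    # Pass 2: hierarchy fallback -- if a bone's parent is already matched and
    # exactly one unmatched child exists under the matched parent, pair them.
    for i in range(source.get_bone_count()):
        if map.has(i):
            continue
        var parent := source.get_bone_parent(i)
        if parent == -1 or not map.has(parent):
            continue
        var candidates := []
        for j in range(target.get_bone_count()):
            if target.get_bone_parent(j) == map[parent] and not map.values().has(j):
                candidates.append(j)
        if candidates.size() == 1:
            map[i] = candidates[0]
    return map
```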
@fire Professional motion capture for animation (not VR) is done directly onto the skeletons for which the animation is targeted. Capture points are assigned directly to bones and are usually already weight painted specifically to match the actor, especially if the skeleton is not humanoid, guaranteeing the cleanest possible capture. In spite of all these efforts, animators still have a lot of cleaning up to do by hand after the fact.
VR motion capture techniques are not suitable for animation retargeting at all. Because the capture has to work across such a variety of cases if you want to support multiple very different skeletons with very different hierarchies, it is incredibly simplified and generic.
Retargeting makes a lot more sense if the skeleton is the same.
If you give Godot users tools to do it with different skeletons within the engine, but badly because of interpolated animations, the result will be that you will see more bad animations in Godot games. Because if there is a feature, people will use it.
"Bad animations in Godot games" was the argument reduz gave for rejecting this retargeting feature when I last spoke to him about it.
It's impossible for Godot to dictate one common skeleton, so I don't know how one can make a unified skeleton for Godot.
Therefore, targeting different skeletons was mentioned.
If the same skeleton is used, I would investigate the GSoC motion matching system for choosing animations that are common to one skeleton.
Edited:
So one argument for accepting a PR would be if the system that was designed had good animation results.
You can already do that in Godot. From "Is it possible to do animation retargeting in Godot 3.0?":
> Now any two characters that are the same height, size and bone count can already share animations. Just copy and paste animations in your 3D editor.
For 3D it is already in Godot; you just need to figure out how to use it.
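To illustrate what that means in practice (node paths are placeholders): when two characters really do use the same skeleton (same bone names and rest poses, and the same Skeleton path relative to the AnimationPlayer), the Animation resource itself can simply be shared between their AnimationPlayers, with no retargeting involved.

```gdscript
extends Node

# Reuse an existing animation on a second character with an identical skeleton.
func share_walk_animation() -> void:
    var source_player: AnimationPlayer = $CharacterA/AnimationPlayer
    var target_player: AnimationPlayer = $CharacterB/AnimationPlayer
    var walk: Animation = source_player.get_animation("walk")
    target_player.add_animation("walk", walk)
    target_player.play("walk")
```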
Regarding what you want to make:
> a story designer where players could import their own avatars
The problem you are facing is probably not very common. I would refrain from implementing such a big feature only because of this use case. By the way, are we talking 2D or 3D?
Closing in favor of https://github.com/godotengine/godot-proposals/issues/2619, which has more details about the feature to implement (animation retargeting).
Describe the project you are working on:
The planned project would be a story designer where players could import their own avatars as AI companions. For this to work, the animations need to be reusable.
Describe the problem or limitation you are having in your project:
In Godot, meshes (in this case humanoid characters) are currently linked with their animations upon import. Thus it is impossible to reuse the same animation without importing it multiple times, once with each mesh. This creates duplicates: the same animation in multiple model files.
Describe how this feature / enhancement will help you overcome this problem or limitation:
If we could store animations separately from meshes, we could create one armature that all characters share, then import the animated armature separately and link it with any of the actual meshes later as needed. This not only enables reuse, but also makes iteration easier, since there is no need to reimport the whole model and possibly introduce issues along the way. It is also better for source control with multiple artists.
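For reference, something close to this can already be approximated by hand with an editor script (a sketch, assuming Godot 3.x; the paths, scene layout and file names are illustrative): every animation of an imported scene is written out as a standalone .anim resource, which any character scene can then load into its own AnimationPlayer.

```gdscript
tool
extends EditorScript

# Run as an EditorScript from the script editor; it extracts every animation
# of an imported scene into a separate .anim resource file.
func _run() -> void:
    var packed: PackedScene = load("res://import/mocap_pack.tscn")
    var scene := packed.instance()
    var player: AnimationPlayer = scene.get_node("AnimationPlayer")
    for anim_name in player.get_animation_list():
        var anim: Animation = player.get_animation(anim_name)
        ResourceSaver.save("res://animations/%s.anim" % anim_name, anim)
    scene.free()
```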
Show a mock up screenshots/video or a flow diagram explaining how your proposal will work:
When selecting a model file for import, we could choose to import only the animation or only the mesh. Once both have been imported separately, the animation could be added to the character scene in its AnimationPlayer and used as usual. Godot should try to automatically match up the skeletons in the two files if possible. Alternatively, a definitions model could be set up that assigns each bone to a predefined human model. (This feature is available in Unity; please take a look at it for inspiration if needed.)
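As a small sketch of what such a bone definitions model could look like as data (all slot and bone names here are made up for illustration; `compose_bone_map` is a hypothetical helper, comparable in spirit to Unity's Humanoid avatar mapping): each rig declares which of its bones fills each standard humanoid slot, and composing two such definitions yields the source-to-target bone name map a retargeting step could consume.

```gdscript
# Per-rig definitions: standard humanoid slot -> actual bone name in that rig.
const MOCAP_RIG = {"hips": "Hips", "spine": "Spine1", "head": "Head", "hand_left": "LeftHand"}
const PLAYER_RIG = {"hips": "pelvis", "spine": "spine_01", "head": "head", "hand_left": "hand_l"}

# Compose two definitions into {source_bone_name: target_bone_name}.
func compose_bone_map(source_def: Dictionary, target_def: Dictionary) -> Dictionary:
    var map := {}
    for slot in source_def:
        if target_def.has(slot):
            map[source_def[slot]] = target_def[slot]
    return map
```

The result of compose_bone_map(MOCAP_RIG, PLAYER_RIG) could then feed something like the remap_animation() sketch earlier in the thread.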
Is there a reason why this should be core and not an add-on in the asset library?:
This feature would be a change to the way (core) importing works in Godot. Since it's a pretty big and useful feature, it should be shipped in the engine by default.