BastiaanOlij opened 11 months ago
cc @lyuma @fire. Sorry it took me a while to transcribe this after our meeting. Let's continue the discussion and see if we can add more detail as needed.
OpenXR hand tracking (XR_EXT_hand_tracking) is just one type of "live streaming" animation source. For example, Rokoko exposes its full-body tracking data over TCP or UDP, and other software (Faceware Studio, Live Link Face, OptiTrack, etc.) supports similar protocols.
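As a rough illustration of what consuming such a stream looks like, here is a minimal GDScript sketch that listens for JSON packets over UDP. The port number and packet layout are assumptions for illustration only, not any vendor's documented protocol.

```gdscript
# Minimal sketch of consuming a "live streaming" mocap source over UDP.
# The port and the {"joints": [...]} layout are assumptions, not a real protocol.
extends Node

var udp := PacketPeerUDP.new()

func _ready() -> void:
	udp.bind(14043)

func _process(_delta: float) -> void:
	while udp.get_available_packet_count() > 0:
		var text := udp.get_packet().get_string_from_utf8()
		var data = JSON.parse_string(text)
		if data is Dictionary and data.has("joints"):
			_apply_joint_data(data["joints"])

func _apply_joint_data(joints: Array) -> void:
	# Here we would write the streamed transforms into an Animation resource
	# (see the prototype sketch further down) or directly into a Skeleton3D.
	pass
```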
It's exceptionally easy to stuff this tracking data into an `Animation` resource; but currently the only way to expose these `Animation` instances to an `AnimationTree` for configuration or mixing is to add them to an `AnimationLibrary` and add the library to the `AnimationTree`.
One solution might be to define `AnimationProvider` nodes that can be added as children of `AnimationTree` nodes and automatically add their animations.
It may be possible to prototype this (see the sketch below) by:
- calling `AnimationTree.add_animation_library` when ready
- filling the animation each frame using the `OpenXRInterface.get_hand_joint_xxx` methods
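A minimal sketch of that prototype, assuming a left hand only and an assumed bone naming scheme (a real implementation would map the OpenXR joints to the skeleton's actual bones):

```gdscript
# Prototype sketch: rebuild a one-key Animation from OpenXR hand joint data
# each frame and expose it to the AnimationTree through an AnimationLibrary.
extends Node3D

var xr_interface: OpenXRInterface
var hand_library := AnimationLibrary.new()

func _ready() -> void:
	xr_interface = XRServer.find_interface("OpenXR") as OpenXRInterface
	# Register the library once; we replace its animation every frame.
	$AnimationTree.add_animation_library("hand_tracking", hand_library)

func _process(_delta: float) -> void:
	if xr_interface == null:
		return
	var anim := Animation.new()
	for joint in OpenXRInterface.HAND_JOINT_MAX:
		# Assumed bone naming; a real mapping depends on the skeleton used.
		var bone_path := "Skeleton3D:LeftHandJoint_%d" % joint
		var pos_track := anim.add_track(Animation.TYPE_POSITION_3D)
		anim.track_set_path(pos_track, bone_path)
		anim.position_track_insert_key(pos_track, 0.0,
				xr_interface.get_hand_joint_position(OpenXRInterface.HAND_LEFT, joint))
		var rot_track := anim.add_track(Animation.TYPE_ROTATION_3D)
		anim.track_set_path(rot_track, bone_path)
		anim.rotation_track_insert_key(rot_track, 0.0,
				xr_interface.get_hand_joint_rotation(OpenXRInterface.HAND_LEFT, joint))
	if hand_library.has_animation("left_hand"):
		hand_library.remove_animation("left_hand")
	hand_library.add_animation("left_hand", anim)
```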
Describe the project you are working on
Core OpenXR support in Godot
Describe the problem or limitation you are having in your project
Currently, our options for blending in skeletal data from sources other than animations loaded into an animation tree are very limited.
A core use case in XR is the ability to apply hand tracking data to the skeleton of a mesh. On the horizon, this logic will be extended beyond hands to include body/arm/leg data, which only amplifies the issues with the limitations we currently have.
We only support this in OpenXR in a very limited way: the tracking data is simply applied through a helper node (`OpenXRHand`) with no ability to influence this data, limiting its use to meshes specifically designed for the hands of the platform being used.
In its most basic form we want to improve this by blending the tracking data with animations, so that selected fingers can be placed strategically while other fingers react to the finger placement of the user.
In its full form we want to be able to take a humanoid skeleton and:
So here we are talking about blending 4 sources of pose data into a single skeleton.
While the above sketches out use cases for XR, discussing the proposed solution with the animation team identified a number of non-XR use cases as well.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
We change the XR implementation so a `Skeleton3D` node is populated with a hand skeleton that is posed according to the tracking data. Ideally we would have a skeleton resource that the node consumes, but that may be too disruptive a change, so having a `Skeleton3D` node populated would do. This could mean subclassing to an `XRSkeleton3D` and/or an `OpenXRSkeleton3D` node with the added logic embedded.

Then we add a new blend tree node that has a path to a `Skeleton3D` node and will retarget the pose data in that skeleton onto the mesh being animated by the blend tree.

One of the suggestions made was to define a Godot standard skeleton for the `XRSkeleton3D` node; the XR platform is then responsible for adjusting the incoming data to what we're expecting. This would make it easier to build platform-agnostic games that can be deployed using OpenXR, WebXR, or alternative XR interfaces such as the one for Apple Vision Pro.

Note that one thing that needs to be taken into account is that there are two competing lines of thought in the XR space on how skeletons should work:
Both scenarios need to be supported.
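To make the blend tree part concrete, here is a sketch of how the proposed node could be wired up from script. `AnimationNodeSkeletonSource` and its `skeleton_path` property are hypothetical names for the proposed node; only `AnimationNodeBlendTree` and its methods are existing API.

```gdscript
# Hypothetical wiring of the proposed blend tree node; AnimationNodeSkeletonSource
# and skeleton_path do not exist yet and are illustrative names only.
var source := AnimationNodeSkeletonSource.new()
source.skeleton_path = ^"../LeftHandSkeleton"  # Skeleton3D posed by the XR runtime

var tree_root := $AnimationTree.tree_root as AnimationNodeBlendTree
tree_root.add_node("hand_tracking", source)
# Blend the tracked pose with an authored animation, assuming a Blend2 node
# named "blend" already exists in the tree.
tree_root.connect_node("blend", 1, "hand_tracking")
```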
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
See above
If this enhancement will not be used often, can it be worked around with a few lines of script?
Unless we introduce a way to create blend tree nodes from GDScript or GDExtension, no.
Is there a reason why this should be core and not an add-on in the asset library?
As the use cases for this extend beyond XR to broader animation needs, it makes sense to have this as core functionality.