prideout opened this issue 3 years ago
Hi @prideout:
Thank you very much for at least considering an enhancement to this, as you say, "too low-level" approach to animations in gltfio.
Best regards.
@prideout A layer on top would probably be best indeed.
Hi there @prideout
Any news on this subject?
I think it would not be difficult to write a small animation helper in Kotlin and put it in filament-utils-android.
I'm working on it on the Sceneform side because I think it must be based on the SceneView frame time, but maybe some parts could be moved directly into Filament.
Can you confirm that I correctly understand the principle of multiple animations applied to a single Renderable:
Thanks
A single glTF asset contains an array of animation objects that can be referenced using a zero-based index. Each animation has a duration (implied by its longest sampler) and a fixed set of renderables that it affects (implied by the union of all its channels).
According to the glTF specification, there is nothing that prohibits multiple animations from being applied simultaneously, although it is generally recommended that each animation be self-contained as an action. For example, "Walk" and "Run" animations might each contain multiple channels targeting a model's various bones.
See the notes near the beginning of the Animations section here: https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#animations
Hi there, @ThomasGorisse:
Just to follow the path paved by @prideout, but in a visual way:
Anyway, regarding this whole subject of implementing animations in the (so-called here) "not-too-low-level" way (AKA the "really-working" way), I think it's just a matter of replicating what several major game engines already do. Just as an example, look at what the terrific BabylonJS (BJS) offers to WebGL developers. Please take a look at https://doc.babylonjs.com/divingDeeper/animation, with special attention to: https://doc.babylonjs.com/divingDeeper/animation/animation_method.
BTW, thorough and well-crafted documentation is a must for good software development (without any additional and unnecessary pain). A little slap on the wrist to Google here, on all these ARCore, Sceneform, Filament, ... subjects, which in my opinion suffer from a lack of documentation. The exception being the model-viewer web component.
Best regards.
Thanks for this complete briefing. I'll try to write comments and .md docs as much as my brain can. Even if, in this particular 3D case, things must be accessible to Android developers, I think that having as many samples as possible targeting common use cases is more valuable to developers than complete documentation.
Question:
Is it Filament's responsibility to manage animation samplers/interpolators? That is, did you already implement this directly in Animator.applyAnimation(@IntRange(from = 0) int animationIndex, float time)?
I'll continue to investigate and come back soon with a proposed interface.
Is it Filament's responsibility to manage animation samplers/interpolators?
Animation samplers are hidden by gltfio::Animator, so clients do not need to know about them.
Did you already implement it directly in Animator.applyAnimation?
Yes, the implementation of this method performs interpolation for you, and it broadcasts all the changes to the associated renderables.
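To illustrate what that interpolation amounts to, here is a simplified sketch of evaluating a glTF LINEAR sampler over scalar values. This is an illustration only, not Filament's actual implementation, and the SamplerSketch name is made up:

```java
// Simplified sketch of glTF LINEAR sampler evaluation; not Filament's
// actual code. `times` must be sorted in ascending order.
final class SamplerSketch {
    static float evaluate(float[] times, float[] values, float t) {
        if (t <= times[0]) return values[0];                    // clamp before start
        int n = times.length;
        if (t >= times[n - 1]) return values[n - 1];            // clamp after end
        int i = 1;
        while (times[i] < t) i++;                               // find right keyframe
        float span = times[i] - times[i - 1];
        float u = (t - times[i - 1]) / span;                    // normalized position
        return values[i - 1] + u * (values[i] - values[i - 1]); // lerp
    }

    public static void main(String[] args) {
        float[] times = {0f, 1f, 2f};
        float[] values = {0f, 10f, 0f};
        System.out.println(evaluate(times, values, 0.5f)); // 5.0
        System.out.println(evaluate(times, values, 1.5f)); // 5.0
        System.out.println(evaluate(times, values, 3.0f)); // 0.0
    }
}
```

Real samplers also handle STEP and CUBICSPLINE interpolation and vector/quaternion values, but the clamp-find-lerp structure is the core idea.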
Do you have access to the animation frame rate or the total frame count inside the glTF?
I want (like in BabylonJS) to allow users to target a specific start and end key frame number instead of elapsed times.
The elapsed time is good for rendering transformations, especially with interpolators and a scene-frame-rate-specific context, but for communication between 3D designers and developers, the conversion elapsedTime = frameNumber / (frameRate * speedRatio) should be managed by the API itself.
I will make public an animation.setFrameRate(float frameRate) function with a default value of 24 fps, but maybe this value is already exported in the glTF definition.
The goal is to be able to create an animation instance based on the start and end keyframes of the "Walk" and "Run" subparts of the Blender (or other 3D modeling SW) timeline.
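Such a frame-to-time layer could be a very small helper. Here is a minimal sketch; the FrameTime class, the speedRatio handling, and the 24 fps default (mirroring Blender's default) are assumptions for illustration and do not exist in gltfio today:

```java
// Hypothetical helper mapping designer-facing frame numbers to the
// elapsed-time values that an Animator-style API expects.
// The 24 fps default mirrors Blender's; none of this exists in gltfio.
final class FrameTime {
    private float frameRate = 24.0f;   // frames per second
    private float speedRatio = 1.0f;   // 2.0 = play twice as fast

    void setFrameRate(float frameRate) { this.frameRate = frameRate; }
    void setSpeedRatio(float speedRatio) { this.speedRatio = speedRatio; }

    // Wall-clock seconds at which a given key frame is reached.
    float frameToTime(int frameNumber) {
        return frameNumber / (frameRate * speedRatio);
    }

    // Total frame count for a clip duration reported by the animator.
    int frameCount(float durationSeconds) {
        return Math.round(durationSeconds * frameRate);
    }

    public static void main(String[] args) {
        FrameTime ft = new FrameTime();
        System.out.println(ft.frameToTime(48));  // 2.0 (frame 48 at 24 fps)
        ft.setSpeedRatio(2.0f);
        System.out.println(ft.frameToTime(48));  // 1.0 (twice as fast)
        System.out.println(ft.frameCount(2.5f)); // 60
    }
}
```

With this in place, a "Walk" sub-range of the Blender timeline could be expressed as two frame numbers and converted to the start/end times the animator needs.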
Secondly:
Babylon.js animation creation/management is done at the scene-object level, mostly in order to allow AnimationGroup and AnimationWeights management.
I think it will confuse users who attempt to access animations directly from a RenderableInstance. Where do you think we should handle animations?
Do you have access to the animation frame rate or the total frame count inside the glTF?
The glTF file format does not have the concepts of "frame count" or "frame rate", so the gltfio Animator does not expose these parameters. However, it might make sense to add these concepts to a higher-level layer that does not yet exist.
I think it will confuse users who attempt to access animations directly from a RenderableInstance. Where do you think we should handle animations?
This type of animation system would be super useful as a higher-level library. The Filament core is meant to be only a renderer, and the gltfio library is meant to be only a glTF loader.
Everything looks mostly clear to me now.
After inspecting the animation parts of several existing "middle-level" projects (easy to use but covering most of glTF's capabilities) listed here, I'll go in the same direction/vision as the following, mostly because they are the only ones which seem to manage animations beyond just play/pause:
I have mostly finished the animations part inside Sceneform. If you are OK with it, I will then make a pull request to include it directly inside gltfio. Everything is well documented directly in the code, and I will try to make a GitHub Page when I'm ready.
I'm quite happy with the result, and since everything is built upon ObjectAnimator, AnimatorSet, and PropertyValuesHolder, the new APIs are very easy to work with and quite user friendly.
The last part I'm missing is the weight blending mode. I have read a lot about it and have seen that the gltfio Animator.cpp already uses the weights channel, but I can't figure out how to concretely apply the right weight value to an animation index.
In a very basic sense, Additive mode will simply ADD to the existing transform on a bone. Using numbers, for example: if a bone's x position is currently 5, and the additive animation's bone x position is 1, then it will add them together for a result of 6. With Blend mode, it will blend all active animations together. So if you have two animations playing at 0.5 weight each, one with a bone x position of 2 and one with a bone x position of 4, it will blend between those values to a result of 3.
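The arithmetic described above can be sketched in a few lines. This is an illustration of the two modes only, using the numbers from the example; it is not gltfio code:

```java
// Illustration of the two weighting modes described above; not gltfio code.
final class BlendModes {
    // Additive: the weighted animated value is added on top of the current pose.
    static float additive(float current, float animated, float weight) {
        return current + animated * weight;
    }

    // Blend: active animations are averaged by their normalized weights.
    static float blend(float[] values, float[] weights) {
        float sum = 0f, weightSum = 0f;
        for (int i = 0; i < values.length; i++) {
            sum += values[i] * weights[i];
            weightSum += weights[i];
        }
        return weightSum > 0f ? sum / weightSum : 0f;
    }

    public static void main(String[] args) {
        // Bone x is 5; an additive animation contributes 1 at full weight -> 6.
        System.out.println(additive(5f, 1f, 1f));                                // 6.0
        // Two animations at 0.5 weight each, bone x of 2 and 4 -> 3.
        System.out.println(blend(new float[]{2f, 4f}, new float[]{0.5f, 0.5f})); // 3.0
    }
}
```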
How do I apply a weight value to a specific animation index?
Filament cannot have dependencies on any Sceneform types; it sounds like your project might be suitable for its own repo.
According to the glTF specification, the weights that get animated are morph weights, not skinning weights. I think that's the source of your confusion.
Filament cannot have dependencies on any Sceneform types; it sounds like your project might be suitable for its own repo.
I was speaking of contributing to the Filament repo, maybe in the gltfio-android module, without making any reference to Sceneform.
From what you will see here: https://thomasgorisse.github.io/sceneform-android-sdk/javadoc/ in the animation package, the only things to do are to add the ModelAnimation.java and ModelAnimator.java classes and to implement AnimatableModel in FilamentInstance.java.
Here are the use cases coming from what is already in place (model could be a FilamentAsset or FilamentInstance instead of a Sceneform RenderableInstance):
```java
// Play all animations in a loop:
model.animate(ValueAnimator.INFINITE).start();

// Start named animations from button clicks:
ObjectAnimator walkAnimator = ModelAnimator.ofAnimation(model, "walk");
walkButton.setOnClickListener(v -> walkAnimator.start());

ObjectAnimator runAnimator = ModelAnimator.ofAnimation(model, "run");
runButton.setOnClickListener(v -> runAnimator.start());

// Sequence and combine animations with AnimatorSet:
AnimatorSet animatorSet = new AnimatorSet();

ObjectAnimator liftOff = ModelAnimator.ofAnimationFraction(planeModel, "FlyAltitude", 0, 40);
liftOff.setInterpolator(new AccelerateInterpolator());

AnimatorSet flying = new AnimatorSet();
ObjectAnimator flyAround = ModelAnimator.ofAnimation(planeModel, "FlyAround");
flyAround.setRepeatCount(ValueAnimator.INFINITE);
flyAround.setDuration(10000);
ObjectAnimator airportBusHome = ModelAnimator.ofAnimationFraction(busModel, "location", 0);
flying.playTogether(flyAround, airportBusHome);

ObjectAnimator land = ModelAnimator.ofAnimationFraction(planeModel, "FlyAltitude", 0);
land.setInterpolator(new DecelerateInterpolator());

animatorSet.playSequentially(liftOff, flying, land);

// Target specific key frames with PropertyValuesHolder:
PropertyValuesHolder cubeTop = ModelAnimator.PropertyValuesHolder.ofAnimationFrame("CubeAction", 10);
PropertyValuesHolder sphereLeft = ModelAnimator.PropertyValuesHolder.ofAnimationFrame("SphereAction", 20);
ModelAnimator.ofPropertyValuesHolder(model, cubeTop, sphereLeft).start();
```
I'll let you guess how many lines of code this takes in Kotlin with apply.
According to the glTF specification, the weights that get animated are morph weights, not skinning weights. I think that's the source of your confusion.
Sorry for the "weight" term, I could have used another one, but Three.js and Babylon also use it when speaking of the influence on a whole AnimationClip, not only the influence inside a channel.
What I clearly want is to have a float weightFactor (0 to 1) on the applyAnimation(...) function. This weight would simply do channel.weight *= weightFactor.
This way, the morph weights of, for example, a walk animation can make the character go slower by reducing the legs' amplitude.
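A sketch of what that scaling would do to the animated morph weights. The weightFactor parameter and the WeightedApply helper are hypothetical; the current applyAnimation(int, float) has no such argument:

```java
// Sketch of the proposed weightFactor: scale the animated morph weights
// before pushing them to the renderable. Hypothetical; this parameter
// does not exist on the current applyAnimation(int, float).
final class WeightedApply {
    static float[] scaleMorphWeights(float[] animatedWeights, float weightFactor) {
        float[] out = new float[animatedWeights.length];
        for (int i = 0; i < animatedWeights.length; i++) {
            out[i] = animatedWeights[i] * weightFactor; // channel.weight *= weightFactor
        }
        return out;
    }

    public static void main(String[] args) {
        // A walk cycle's leg morphs at full amplitude, damped to half influence.
        float[] damped = scaleMorphWeights(new float[]{1.0f, 0.6f}, 0.5f);
        System.out.println(damped[0] + " " + damped[1]); // 0.5 0.3
    }
}
```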
Instead of more explanations, you will understand everything I'm talking about by watching this: Three.js Sample. Try modifying the animation weights and crossfading between them.
You can see what it looks like here : Maintained Sceneform SDK for Android
Hi @ThomasGorisse:
Regarding the "Animations made easy" effort in your Sceneform fork at Maintained Sceneform SDK for Android, a question I want to ask is whether you have plans to implement any kind of end-of-animation (last frame/time reached) event firing (and/or even a promise-chain mechanism).
Best regards.
Animation samplers are hidden by gltfio::Animator, so clients do not need to know about them.
We would like to have a higher-level animation API that supports blending between animation states and also blending between animation states and the initial node transforms. (Like the above ThreeJS sample)
As this is not currently supported directly by gltfio::Animator, and the Animator also hides the samplers, such a system cannot be implemented by clients as a layer on top of the current API.
If gltfio is only supposed to be a glTF loader and not the place for a higher-level stateful interface with play/pause, would it make sense to at least add more lower-level functions to gltfio::Animator so that blending of multiple animation states and the initial node transforms can be achieved on the client side?
@prideout
If we add a weightFactor to the applyAnimation function as @ThomasGorisse proposed, then I think the UI in that demo should be possible, right? To blend animations, clients can simply call applyAnimation multiple times, each time with a different animation index.
I'm also open to exposing samplers, especially in a read-only sense, to Java, although I'm still somewhat fuzzy on why this would be necessary to achieve the above ThreeJS sample. (Note that they're already exposed to C++ via the getSourceAsset backdoor.)
You mean that applyAnimation would interpolate between the current state and the new state based on the weight factor?
How would we then be able to reset the node transforms and morph weights back to the original (rest pose) state? As in if all animations have a weight of 0 or their sum of weights is smaller than 1? We don't want to reset the entire hierarchy each frame, but only the nodes that were affected by the animator previously.
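One way a client-side layer could handle this, sketched with hypothetical names (nothing here exists in gltfio): capture the rest value once for each node an animation touches, then blend the weighted result back toward that rest value when the accumulated weight is below 1.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of client-side rest-pose blending; hypothetical, not a gltfio API.
// Rest values are captured once for the nodes an animation touches, so only
// those nodes are reset -- the rest of the hierarchy is left alone.
final class RestPoseBlender {
    private final Map<Integer, Float> restValues = new HashMap<>(); // nodeId -> rest value

    void captureRest(int nodeId, float restValue) {
        restValues.putIfAbsent(nodeId, restValue);
    }

    // Blend the weighted animation result toward the rest pose when the
    // accumulated weight is below 1; weights above 1 are clamped.
    float resolve(int nodeId, float weightedSum, float totalWeight) {
        float rest = restValues.getOrDefault(nodeId, 0f);
        float w = Math.min(totalWeight, 1f);
        return weightedSum + (1f - w) * rest;
    }

    public static void main(String[] args) {
        RestPoseBlender blender = new RestPoseBlender();
        blender.captureRest(7, 10f);                      // node 7 rests at x = 10
        System.out.println(blender.resolve(7, 0f, 0f));   // 10.0: all weights 0 -> rest pose
        System.out.println(blender.resolve(7, 2f, 0.5f)); // 7.0: half animation, half rest
    }
}
```

Here weightedSum is the sum of weight_i * value_i over the active animations, so with one animation at weight 0.5 and value 4, the result is 0.5 * 4 + 0.5 * 10 = 7. Real transforms would need this per component (translation, rotation, scale, morph weights), with quaternion slerp instead of lerp for rotations.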
Is there any news about this issue?
@pixelflinger this is a common user request, but I recommend closing this issue and directing users to the sceneview-android library and @ThomasGorisse.
Some users find the stateless Animator interface to be too low-level because it merely pushes a user-provided "elapsed time" into the glTF animation machinery. It is currently agnostic of concepts like play / pause / loop / reverse.
If we make this enhancement, users may need callback notifications as well.
We can either enhance Animator, or add a layer on top. I'm leaning towards the latter since callbacks often need to be expressed in a platform-specific (or language-specific) way.