KybernetikGames / animancer

Documentation for the Animancer Unity Plugin.

What can Animancer do to make networking easier? #210

Open KybernetikGames opened 2 years ago

KybernetikGames commented 2 years ago

Background

Copied from Animancer's Documentation:

Since all the states, parameters, and transitions in an Animator Controller need to be defined upfront, the current state identifier and parameter values can easily be sent over a network for another computer to look up the correct state and apply those parameter values.

But since Animancer's animation system and this FSM system only need to know about a state when it is actually used, there is no identifier or common set of parameter values that could be sent over the network or used to look up the corresponding details on another computer.

And even if animations did have an identifier, there would be no point in having a character play an Attack animation if the scripts on that computer don't actually know that the character is attacking. That's not a problem with Animator Controllers because they operate on their own internal logic so your scripts already need to be constantly checking what state it's in. But Animancer doesn't do any decision making for you, it only plays what you tell it to so your scripts can know what's playing without needing to check.

That doesn't mean Animancer and its FSM system can't be used in networked games, it just means they can't do the work for you automatically. If you use a Keyed State Machine where you register all your states on startup with an enum as the key, then you will be able to send the key over the network for the other computer to look up the appropriate state. This means networking with Animancer is basically the same as networking any other script.

Problem

Synchronising the logical state machine but not the exact animation states should work for individual states, but it wouldn't capture things like fade details and therefore couldn't support more complex networking techniques like Roll-back Netcode.

Developing a Solution

It might be useful to have a serializable type that can capture a snapshot of the current animation details to be sent over a network. But what would that entail?

Naming

What would the type be called?

AnimancerPlayableSnapshot?

Data

What values would it need to include?

Keys

Is it the user's responsibility to grab a specific state's data and apply it back to that same state, or would you want to snapshot an entire AnimancerComponent into one object and then apply it back in one go?

Events

What about Events?

Transition Triggers

Transitions triggered by events are currently non-deterministic depending on the frame rate. For example:

  1. Animation A is playing.
  2. An End Event occurs at t=1s which plays Animation B.
  3. Time advances to t=2s. Animation B is now at t=1s.
  4. Time rolls back to t=0s so A is playing again.
  5. AnimancerComponent.Evaluate(delta time = 2) is used to simulate forwards in one step.
  6. Animation A jumps to t=2s and triggers its End Event which plays Animation B.
  7. Now we are at t=2s overall, but Animation B has only just been started so it's still at t=0s.

What if Animation B also has an End Event at t=0.5s which would have also been triggered to play Animation C in that time?

Do all End Events need to be replaced with something like a ClipTransitionSequence so that all the animations and timings are specified upfront in one place so it can figure out what should be happening at any time without needing to step through each animation's events?
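To illustrate, if every step of such a sequence were specified upfront, resolving the state at an arbitrary time would become a direct calculation rather than stepping through each event. A minimal sketch (the types and names here are hypothetical, not Animancer API):

    // Hypothetical sketch: with every clip and End Event time known upfront,
    // the correct step and local time for any total time can be computed directly.
    struct SequenceStep
    {
        public AnimationClip Clip;
        public float EndEventTime; // when this step hands over to the next one.
    }

    static (int step, float localTime) ResolveAtTime(SequenceStep[] steps, float totalTime)
    {
        for (int i = 0; i < steps.Length - 1; i++)
        {
            if (totalTime < steps[i].EndEventTime)
                return (i, totalTime);
            totalTime -= steps[i].EndEventTime; // consume this step's duration.
        }
        return (steps.Length - 1, totalTime); // the last step absorbs the remainder.
    }

With A's End Event at t=1s and B's at t=0.5s, resolving t=2s lands in C at a local time of 0.5s instead of restarting B at t=0s.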

Other State Types

Supporting more than the basic ClipStates would require additional data for them, meaning the serialization system would need to support polymorphism. Mixers in particular are very common for movement.

What Now?

That's far too many open questions for me to just implement a solution on my own, but I'd be more than happy to work with anyone making a networked game to help find a solution that meets their needs and move towards coming up with a generalised solution that can be included with Animancer itself.

Marsunpaisti commented 2 years ago

I'm not an expert on the subject, but I'm working on networking Animancer with Photon Fusion. I've also pointed this issue out to some experienced people to gather as much expertise on this subject as possible. From what I know, some of the usual key requirements in networking are:

  1. Animancer is advanced manually via Evaluate().
  2. We must be able to get and store the state of an AnimancerState, preferably in a serializable struct of primitive types.
  3. At any given time we have to be able to "blit in" a stored state to an AnimancerState (or an AnimancerLayer? I believe the deeper the ability to sync goes the better, but I might be mistaken).

In current implementations, the most difficult things have been:

  1. Syncing of fading. I'm not sure if I have a bug in my implementation, but when I re-apply a Time from a few frames back to an AnimancerState and then re-simulate it forward with several calls to Evaluate, the Weight obviously advances, since rolling back the time to a certain point does not implicitly roll back the Weight as well. To remedy this, I of course also set the Weight of the state to the value it had at the said Time. For some reason, Animancer doesn't seem to behave the same way on these re-simulations as it originally did. I'll try to do some tests so I can provide more concrete data on the subject.
  2. Syncing of variable-speed states such as locomotion blend trees. It's been difficult to find a way of storing/reapplying the state of a blend tree such that it can be re-simulated forward deterministically. It seems the ITransition.Speed of the blend tree only reports the top-level Speed of the asset, not accounting for what it's doing with speed extrapolation and so on. Edit: It seems my implementation's syncing of TargetWeight / FadeSpeed was lacking. This might have been the cause, as these seem to get reset when the weight of the state hits 1 (the target weight) for the first time.

As far as I know, the people who have gotten their implementations working have usually had to resort to controlling the state almost completely themselves: storing a StartTime of when the animation started playing, calculating the elapsed time from that, applying it manually, and calling a blank Evaluate(). I don't yet know how they have gotten fading/weights working, as I haven't even gotten my own implementation of that to work yet.
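That approach might be sketched like this, assuming a networked start tick and a fixed tick rate (names are illustrative, not from any specific library):

    // Sketch: derive the state's Time from a stored start tick instead of
    // accumulating it, so re-simulating any tick gives the same result.
    float elapsed = (currentTick - animationStartTick) * tickDeltaTime;
    state.Time = elapsed * state.Speed; // assumes no parent speed scaling.
    animancer.Evaluate();               // blank Evaluate to apply the pose.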

Marsunpaisti commented 2 years ago

I got a somewhat working implementation for Clip Transitions, but the linear mixer's speed is still acting funky. What it requires is that before calling Evaluate(delta), I set all the parameters stored at the end of the last frame, from either the client or from the server if server data exists. I'm also calling a blank Evaluate() before the actual Evaluate(delta), just to avoid the weirdness of animations not advancing in time when their .Time has been set during this update.

    public virtual void LoadSavedState()
    {
        TransitionState.IsPlaying = AnimancerIsPlaying;
        TransitionState.Time = AnimancerTime;
        TransitionState.SetWeight(AnimancerWeight);
        TransitionState.TargetWeight = AnimancerTargetWeight;
        TransitionState.FadeSpeed = AnimancerFadeSpeed;
    }

After Evaluate(), I store the newest state. It is re-applied at the start of the next tick in the function above. If the server has sent data for that frame, it will have overwritten these variables.

    public virtual void SaveState()
    {
        AnimancerIsPlaying = TransitionState.IsPlaying;
        AnimancerTime = TransitionState.Time;
        AnimancerWeight = TransitionState.Weight;
        AnimancerTargetWeight = TransitionState.TargetWeight;
        AnimancerFadeSpeed = TransitionState.FadeSpeed;
    }

Linear mixers also save/load their mixer parameter in their overrides of the above functions:

    (TransitionState as Animancer.LinearMixerState).Parameter = AnimancerMixerParameter;
    AnimancerMixerParameter = (TransitionState as Animancer.LinearMixerState).Parameter;

The mixer speed seems to be the only thing syncing incorrectly now. Perhaps it's not based on the parameter at the time Evaluate() is called. Experimenting more.

KybernetikGames commented 2 years ago

How do you handle transitions? Are you keeping one of those storage objects for every state or only syncing the current state?

You could try setting TransitionState.Time = AnimancerTime + Time.deltaTime * TransitionState.EffectiveSpeed; to advance it by one frame and hopefully avoid the extra Evaluate calls.

Marsunpaisti commented 2 years ago

How do you handle transitions? Are you keeping one of those storage objects for every state or only syncing the current state?

You could try setting TransitionState.Time = AnimancerTime + Time.deltaTime * TransitionState.EffectiveSpeed; to advance it by one frame and hopefully avoid the extra Evaluate calls.

How bad are the calls to Evaluate performance-wise? Also, I think I should use MoveTime() in that case, right? Does the mixer work as it should if I just set the blend parameter first and then run TransitionState.MoveTime(AnimancerTime + Time.deltaTime * TransitionState.EffectiveSpeed);?

I got my current implementation to work with mixers too now, so I'm slightly afraid to touch it after a week of working on it.

What it required was a struct like this to store the state after Animancer has been evaluated in a tick:

    public struct AnimancerStateData : INetworkStruct
    {
        public bool AnimancerIsPlaying;
        public float AnimancerTime;
        public float AnimancerWeight;
        public float AnimancerTargetWeight;
        public float AnimancerFadeSpeed;

        public AnimancerStateData(AnimancerState TransitionState)
        {
            AnimancerIsPlaying = TransitionState.IsPlaying;
            AnimancerTime = TransitionState.Time;
            AnimancerWeight = TransitionState.Weight;
            AnimancerTargetWeight = TransitionState.TargetWeight;
            AnimancerFadeSpeed = TransitionState.FadeSpeed;
        }

        public void ApplyToState(AnimancerState TransitionState)
        {
            TransitionState.IsPlaying = AnimancerIsPlaying;
            TransitionState.Time = AnimancerTime;
            TransitionState.SetWeight(AnimancerWeight);
            TransitionState.TargetWeight = AnimancerTargetWeight;
            TransitionState.FadeSpeed = AnimancerFadeSpeed;
        }
    }

    public virtual void SaveStateAfterEval()
    {
        savedStateData = new(TransitionState);
    }

    public virtual void LoadSavedStateBeforeEval()
    {
        if (TransitionState == null) transitionAsset.createStateAndApply(_animancer);
        savedStateData.ApplyToState(TransitionState);
    }

Linear mixers also do it for all their children:

    public override void LoadSavedState()
    {
        base.LoadSavedState();
        var asMixerState = (TransitionState as Animancer.LinearMixerState);
        asMixerState.Parameter = AnimancerMixerParameter;

        int i = 0;
        foreach (var stateData in savedChildStateData)
        {
            if (i < asMixerState.ChildCount)
            {
                stateData.ApplyToState(asMixerState.GetChild(i));
            }
            i++;
        }

    }
    public override void SaveState()
    {
        base.SaveState();
        var asMixerState = (TransitionState as Animancer.LinearMixerState);
        AnimancerMixerParameter = asMixerState.Parameter;

        int i = 0;
        foreach (var state in (TransitionState as Animancer.LinearMixerState).ChildStates)
        {
            savedChildStateData.Set(i, new(state));
            i++;
        }
    }

I have a keyed state machine where the state object itself contains the AnimancerTransitionAsset and AnimancerState references. I'm working with the acceptable limitation of only having one AnimancerState per TransitionAsset (I'm using the FadeMode in Play() which does not create additional states), so every state knows which TransitionAsset to use to create its state. Perhaps if Animancer natively supported networking multiple states, it would have to create keys that uniquely identify AnimancerStates yet are the same on every client. Perhaps a combination of TransitionAsset.Key + the index of the state created from that asset would be enough? Then there might be no need for my own state machine wrapper for each state.

About your question regarding events:

What about Events?

If an event is triggered then time is rolled back, do you need a way to know that it rolled back over that event so you can figure out how to undo it and/or avoid re-triggering the same event again when it re-simulates forward?

There would be no need to handle rolling back events at the Animancer level, as when properly networked, the effects of the events themselves can be rolled back. If an event modifies the networked simulation state, the networked variables themselves can handle rolling back whatever effects the event caused.

Ultimately, the simplest way to describe how to make a rollback-capable and easily networkable state for anything is to make it behave such that, given a current state, the previous state always simulates into a valid next state, no matter how many times it's run from that same state; i.e. NextState = Simulate(PreviousState) should always hold.
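That invariant can be checked directly by re-simulating a tick twice from the same snapshot and comparing the results. A rough sketch, where SaveState/LoadSavedState are the methods from my earlier comment, savedStateData is the stored struct, and Simulate stands in for the per-tick Evaluate logic:

    // Re-running the same tick from the same snapshot must produce the same state.
    SaveState();
    var snapshot = savedStateData; // struct copy.

    Simulate(tickDeltaTime);
    SaveState();
    var firstResult = savedStateData;

    savedStateData = snapshot;
    LoadSavedState();          // roll back.
    Simulate(tickDeltaTime);   // re-simulate the same tick.
    SaveState();

    Debug.Assert(savedStateData.Equals(firstResult), "Re-simulation was not deterministic!");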

In Animancer terms, now that I have my prototype working, I believe the current state of an Animancer layer should be possible to boil down to a dictionary of <StateKey, AnimancerStateData>, assuming StateKey can be made consistent across clients without a user-defined wrapper like the one I've built.

Then the state could theoretically be applied to a layer in approximately the following manner, with some additional handling for removing/adding states as necessary so they conform to the ones specified by the dictionary: removing extra states currently in Animancer and adding states that are in the dictionary but not in Animancer.

    foreach (var state in Layer.States)
    {
        state.SetData(Dictionary[state.Key]);
    }

I'm not exactly sure how much complexity it would add to somehow include the clip/transition asset (or whatever it is that created the state) in the StateData, so that Animancer would know which asset or clip to use when creating the state if it doesn't yet exist.

KybernetikGames commented 2 years ago

How bad are the calls to evaluate performance-wise?

Pretty bad because it updates the entire graph and applies the output to the model. So if you also have the regular animation update every frame you're essentially doubling the cost of your animations.

Also I think I should use MoveTime() in that case right?

If you want events and root motion for that time period to be applied then yes.

Does the mixer work as it should if I just set the blend parameter first and then run TransitionState.MoveTime(AnimancerTime + Time.deltaTime * TransitionState.EffectiveSpeed);?

Yes, it should.

I'm not exactly sure how much complexity it would add to somehow include the clip / transition asset or whatever it is that created the state in the StateData so that animancer would know what asset or clip to use when creating the state if it doesnt yet exist.

There are 2 main challenges with that:

  1. Nothing has a runtime-serializable unique ID.
  2. If we come up with a way of identifying them, they also need to be registered in a dictionary somewhere so they can be looked up by ID. I can think of two main possibilities there.

Maybe it would be possible to allow both. The system works using a centralised animation dictionary, but you can add to it at runtime if you want to define your transitions elsewhere.

Actually, a centralised animation dictionary would also solve the ID issue because it can just have pairs of ID and Transition.
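A minimal sketch of what such a centralised dictionary might look like (the structure and names are hypothetical, not existing Animancer API):

    // Hypothetical central registry: pairs of stable ID and transition, so IDs
    // received over the network can be resolved back to the transitions they play.
    public static class TransitionRegistry
    {
        private static readonly Dictionary<ushort, ITransition> _transitions = new();

        // Called at startup, or at runtime for transitions defined elsewhere.
        public static void Register(ushort id, ITransition transition)
            => _transitions.Add(id, transition);

        public static ITransition Resolve(ushort id) => _transitions[id];
    }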

This would also face similar issues to implementing transition sets: https://github.com/KybernetikGames/animancer/issues/80

Marsunpaisti commented 2 years ago

There are 2 main challenges with that:

Nothing has a runtime serializable unique ID:

Is it not possible to somehow get a hash for an AnimancerTransitionAsset.Unshared? After all, it references an asset; can't it be hashed somehow?

KybernetikGames commented 2 years ago

From some simple testing, the hash of an AnimationClip asset actually does seem to be stable in the Unity Editor, but unfortunately it isn't in runtime builds. Each time you run the application, the asset gets a different hash (I'd guess that it's based on the memory address of the object).

I could use the hash of the clip's name, but that would prevent you from using the same clip in multiple different transitions, and I wouldn't want to force people to create duplicate clips with different names just to get around a limitation like this.

Marsunpaisti commented 2 years ago

A combination of the clip name and some of the more important transition settings would perhaps do? That way it's stable on every run but can change in another build if the settings are edited, which should be an acceptable limitation. Perhaps it's also possible to auto-generate IDs at build time for all assets of a type; if that's a thing, it would even be possible to tell from the ID alone whether it was just an AnimationClip being played or a TransitionAsset.

KybernetikGames commented 2 years ago

Generating a hash from the transition's fields might work, but would mean you can't modify transitions at runtime (because that would change the hash) which is too close to the limitations of Animator Controllers for my liking. Generating IDs at build time might solve that for builds, but not in the Unity Editor, so connecting a build and the editor to the same game wouldn't work reliably.
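For illustration, a hash generated from the clip name plus selected transition fields might look like the following sketch; which fields to include is the open question, and as noted, editing any of them at runtime would change the ID:

    // Sketch: a deterministic FNV-1a hash over the clip name and some transition
    // settings, stable across runs and builds as long as those values don't change.
    static uint StableTransitionID(string clipName, float fadeDuration, float speed)
    {
        uint hash = 2166136261;
        foreach (char c in clipName)
            hash = (hash ^ c) * 16777619;

        // Quantise the floats so tiny serialization noise can't alter the ID.
        hash = (hash ^ (uint)Mathf.RoundToInt(fadeDuration * 1000)) * 16777619;
        hash = (hash ^ (uint)Mathf.RoundToInt(speed * 1000)) * 16777619;
        return hash;
    }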

BinaryJared commented 2 years ago

Is this under active development? It would make my life a lot easier to have out of the box networking support that we can apply to any architecture.

KybernetikGames commented 2 years ago

The "What Now" section at the bottom of the OP still applies. I simply don't know enough about networking or have enough spare time to be able to mess around blindly. So if you can provide any insight into those questions I raised it would help me move forward, but otherwise the idea will remain stalled.

And even once I'm able to implement something, it's unlikely to be possible for it to support just any architecture. Most of the limitations I hate so much about Animator Controllers are likely partially due to a desire for the system to be inherently networkable. But all of the dynamic things Animancer lets you do, like playing animations from any source without registering them in a central location beforehand and configuring everything at runtime, probably won't be possible in a networkable context.

h-sigma commented 1 year ago

My experiences and opinions: Replicating the animator state by making it serializable might be a fool's errand, in all honesty, due to the vast variety of both animation and game-play requirements for different games. A generic-enough solution worth developing would be a way to replicate the entire state of the graph over the network (thus serialized). But at that point, you'd be sending the whole state (hundreds of bytes) per replicated component in the worst case.

I'd suggest taking a look at popular networking libraries and finding out the standard ways to do networked animations in each of them (especially Photon Fusion). Also look at different games and what Animancer would have to do to support the kinds of requirements those games have, e.g. rollback, tick-perfect animation, prediction.

Here's a few different cases to keep in mind:

  1. Game does not need animations themselves to be networked, i.e. the animator is always a function of the real networked state (maybe the character controller). This can be done using one mega-mixer.
  2. Animations may not need to be tick-perfect. Tick-perfect animations are always in the correct state on every tick of the network, usually because there are related features that need this, e.g. hitboxes.
  3. A lot of games get by simply using RPCs for changing animations. Wild, I know, but depending on the game this is completely viable.
  4. Photon PUN's example network animator got away with simply tracking all Animator inputs (floats, triggers, etc.) and replicating them on change.

The first thing I did when using Animancer for my networked kart game was throw out events. They are simply not reliable for gameplay logic with all of the time changes, evaluations, pauses, Area of Interest culling, etc. going on.

A worthwhile thing to do would be to solve specific networking use-cases and update the docs with them. It would also be helpful to write down exactly what needs to be networked per Animancer state type to achieve visual replication (fade, length, speed, start time, etc.). This lets users clear up any doubts about whether Animancer will help with their networking needs.

Here's a brief explanation of my current project setup in terms of networking/animations:

  1. AnimationRepository: Each character/vehicle/entity has an animation repository. It's a scriptable-object list of all animations defined for this entity. Each animation is keyed by a byte. When networking, this byte is sent back-and-forth and whenever it needs to be translated to an ITransition/AnimationClip, the networked entity looks inside its animation repository. This is probably the most useful thing for you to consider as even if you manage to replicate the runtime states, you need a hook or something to resolve the networked IDs into static assets.
  2. AnimationSource: A base class that implements ITransition that I use with the animation repository (odin serializer ftw). The important thing is that each AnimationSource also acts as a container for some baked data, which is usually the length of the animation and some "markers". Think of markers like tags instead of reactive events. While events are invoked and provide a convenient way to hook up things in the unity editor, markers just store data. E.g. SoloMarker is a 0-1 normalized timestamp with the tag Execute. When the character casts an ability, it can read the data in the animation. (Normally such data can be stored on the ability prefab/description itself, but in this game we wanted small differences for each character using the same abilities).
  3. Animancer Graph: I use a modified animancer component with the following fixed structure. Layer0: ManualMixerState. The ManualMixerState has two children: CharacterControllerState and OverrideAnimationState. The CharacterControllerState is a mega-state that has a bunch of mixers under it. The key point is that the animation state of this component is derivable from the character controller's networked & local states, so there is no need to network this itself. OverrideAnimationState is not a specific state, it's every other animation that may override the base. You will realize this is similar to a LayerMixer with 2 layers, except I only allow 1 animation for the override and I control the weight between them manually (thus giving me fading).
  4. NetworkAnimator: Here's all I'm serializing over the network.
    [Networked] int animationStartTick;
    [Networked] byte previousAnimation;
    [Networked] byte currentAnimation;
    [Networked] float fadeInDuration;

    This gives me all of the information I need to do my manual fade/blend between the character "layer" and the override "layer".

  5. PlacerAnimancerComponent: This is the actual component handling all the local animation logic described in point 3. Additionally, I only call Evaluate() when I need to "snap" an animation; the rest of the time I let it update based on client time. If I happen to change properties like Speed, it's because I have some networked data like "Harvest Time" or "Speed Modifier", but the speed of the animation never influences the outcome.
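The repository described in point 1 might be sketched roughly like this (names are hypothetical; the actual setup uses Odin serialization and project-specific types):

    // Hypothetical AnimationRepository: a ScriptableObject list of transitions
    // keyed by a byte, so only the byte ever needs to be sent over the network.
    [CreateAssetMenu]
    public class AnimationRepository : ScriptableObject
    {
        [SerializeField] private ClipTransition[] _animations;

        // The networked byte is simply the index into this list.
        public ITransition Resolve(byte id) => _animations[id];
    }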

As you can see, by making some small sacrifices and writing the animation logic according to my project requirements, a hard problem becomes a lot more straightforward. My personal motto for this is to bake as much data beforehand as possible, and not to use animation-driven gameplay logic or rely on animation events for gameplay.

You may want to check out https://zephyrl.itch.io/network-animancer. It looks very promising for the Fusion scene. Photon Fusion also has a Battle Royale sample where they use Playables directly.

I hope this was a little helpful :)

KybernetikGames commented 1 year ago

Thanks, there's a lot of useful insight in there.

The first thing I did when using animancer for my networked kart game was throw out events.

While events are invoked and provide a convenient way to hook up things in the unity editor, markers just store data.

That sounds like it's basically just events (name + time), but instead of giving them to Animancer to automatically execute callbacks, you're querying them yourself. Do you think it's something Animancer could/should provide a generalised solution for?

Each character/vehicle/entity has an animation repository. It's a scriptable-object list of all animations defined for this entity.

That sounds roughly similar to the idea of Transition Sets which is something I'd like to tackle as the big feature in a major version update at some point.

You may want to check out https://zephyrl.itch.io/network-animancer.

The author of that plugin was kind enough to give me a free key for it so I had a quick look at it and the idea seems promising, but I haven't really tried it out in depth. The documentation seems minimal and the code quality doesn't fill me with confidence, but I can't comment on the effectiveness of the end result.

There's also a studio partnered with Photon who are currently making a Co-Op 3rd Person Shooter sample based on Photon Fusion and Animancer. It's not ready yet, but the gameplay video they showed me was quite impressive and based on how complete it looked I wouldn't be surprised if it released within the next few months.

nscimerical commented 1 year ago

I recently came across Animancer in my search for a 100% tick-accurate animation system. This asset has been talked about on the Photon and Fish-Net Discords, and I think it is the only asset that provides perfectly accurate animations on a tick. Mecanim unfortunately produces different animations even if the same time is set on both the client and server. There is a reason why Photon Fusion's 200BR sample project does not use Mecanim and instead uses the Playables API directly.

A 100% tick-accurate animation is pretty much mandatory to make a game like CSGO, Overwatch, and others possible. It ensures that if a player's arm is at a certain position in an animation on the server, the exact same thing is seen on the client side. You don't want to shoot at a body part but end up missing because it is in a different location on the server due to animation inconsistencies during rollback.

From what I'm gathering here, the majority of the issue is due to supporting dynamic actions like playing animations without registering them in a central location. Unfortunately, I don't think there is a way around that unless you want to spend ridiculous amounts of bandwidth. All netcode libraries operate a central database to ID prefabs. This includes Photon Fusion, Fish-Net, Netcode for GameObjects, Mirror, and probably more.

In my opinion, when doing networking with Animancer, all the networkable animations should be ID'ed in a central location. The IDs in byte or ushort would then be used to play the animation on other clients along with the necessary data to 100% replicate the same animation on that client.

You don't even need to do integration with the networking libraries. All we need is a mechanism to get the current state (e.g. the ID and whatever data is needed to replicate the animation on other clients) and set it (e.g. on clients). Something like Layer.GetState() and Layer.SetState() that we can easily retrieve/send the state with. The character controller assets on the Asset Store follow the same pattern with Character.GetState() and Character.SetState() methods.

KybernetikGames commented 1 year ago

The animancerComponent.States dictionary lets you look up states with whatever keys you give them, so as long as you use serializable keys that problem should already be solved.

I have some ideas for setting up an ID library and making serializable forms of the data in states which I want to have a go at for the next major version, but I haven't started on it yet.

jiristary commented 1 year ago

Hello guys, I accidentally stumbled upon this page while looking for some answers for my personal project. I must admit I haven't read all the posts in detail, but I wanted to let you know that there is now an Animation documentation page and a Fusion Animations tech sample that should help with animations using Photon Fusion. The tech sample also features a solution using Animancer (though not a tick-accurate one).

@KybernetikGames, I hope you don't mind me posting it here. We should probably have touched base a long time ago 🙃 I've written both the doc and the tech sample above. I'm also behind the player animations in that unannounced co-op game you mentioned above, and with my colleague Jiri we put together the BR200 project. Let me know if you would like to correct some info in the doc regarding Animancer or discuss other (animation) matters 😉

nscimerical commented 1 year ago

@jiristary I just read those documentation pages and they are great! Pretty much explains all the current animation options for netcode development in Unity.

After spending a month playing around with Animancer, I came to the same conclusion as what was written in the documentation. My method involves writing my own FSM and a tick-accurate wrapper around Animancer states. After doing validation checks, I can confirm that the animations aren't tick accurate; they are off by a very small amount.

I wasn't able to track down what was causing the small discrepancies, but your documentation mentions that it was due to the creation of weightless states during certain fades. I was scratching my head over it for many weeks; it's good that someone finally got to the root cause.

KybernetikGames commented 1 year ago

I'm more than happy to have you post here. I had a quick read through the documentation which mostly looks good and I'm keen to check out the actual samples when I get time.

The "creation of weightless states during certain fades" thing you mentioned is explained on the Fade Modes page if you want to link to it.

In the "Animancer + Tick Accurate Wrapper Around Animancer States" section, the "Synchronize a whole array of states" approach would likely be much more efficient if you gather only the states with Weight > 0 and sync them instead of syncing everything that has been created.

Also, having everything in such a long page makes navigation a bit annoying if you aren't linearly reading through the whole thing. I generally prefer to split my documentation pages.

jiristary commented 1 year ago

@nscimerical, I'm very glad you find it helpful. Regarding your animations being slightly off, I suspect it might be something other than weightless states, as those would only really cause issues during the fading time (and only in a specific fade scenario); after the fade it should be in sync again.

The "creation of weightless states during certain fades" thing you mentioned is explained on the Fade Modes page if you want to link to it.

Thank you, I will add the link. I can imagine that setting the WeightlessThreshold to 1, effectively disabling weightless states, could be a viable option for certain solutions.

In the "Animancer + Tick Accurate Wrapper Around Animancer States" section, the "Synchronize a whole array of states" approach would likely be much more efficient if you gather only the states with Weight > 0 and sync them instead of syncing everything that has been created.

That is not a real issue with Fusion. It uses delta compression, which means that only the bits of networked data that changed are transferred over the network. But the solution definitely should not try to save other parameters to networked data when the weight and target weight are 0.

Also, having everything in such a long page makes navigation a bit annoying if you aren't linearly reading through the whole thing. I generally prefer to split my documentation pages.

Noted :) I will bring it up to the team.

KybernetikGames commented 1 year ago

I can imagine that setting the WeightlessThreshold to 1, effectively disabling weightless states, could be a viable option for certain solutions.

I had never considered it, but yes that would work. I'll add it to my docs.

That is not a real issue with Fusion. It uses delta compression which means that only bits of networked data that changed are transfered over the network.

Maybe not for the actual networked data, but serializing/diffing several dozen or more states is going to be far slower than doing a couple of float comparisons per state so that you only need to serialize/diff the 1-3 active states.
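Such a filter could be as simple as the following sketch (the layer iteration shown is assumed, and SyncState is a hypothetical serialization call):

    // Sketch: only serialize/diff states that are actually contributing,
    // rather than every state that has ever been created on the layer.
    foreach (var state in animancer.Layers[0])
    {
        if (state.Weight > 0 || state.TargetWeight > 0)
            SyncState(state); // hypothetical: write this state's data for the network.
    }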

Now that I think about it, I might even be able to have Animancer keep track of the active states without too much overhead and expose them publicly. That would also help with internal stuff like playing a new animation which currently iterates through and stops everything else.

Hopefully one day I'll figure out how to implement Inertialization which could remove the need for cross fading, meaning no clones and only ever 1 active state at a time (per layer).

KybernetikGames commented 4 weeks ago

Animancer v8.0 is now available and includes a new Animation Serialization sample which may help with this sort of thing.