Avindr / MxM-IssueTracking


Mixer Improvement - Speed Normalized Blend Tree Mixers #106

Closed Craigjw closed 3 years ago

Craigjw commented 3 years ago

Currently, when two clips in a mixer are of different lengths and the mixer is not normalized, they desync; when used with walk animations this causes strange behaviour. If the mixer is normalized, the result is that one of the clips is either slowed down or sped up relative to the parent clip used to create the mixer. A speed normalized mixer would properly blend two clips of different lengths without any desync between the clips being mixed.
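
To illustrate the drift with hypothetical clip lengths (numbers chosen purely for this example), the normalized phase of two looping clips diverges when each simply plays at its own length:

```csharp
// Hypothetical clip lengths, chosen only to illustrate the drift.
float walkLength = 1.2f;   // seconds per walk cycle
float runLength  = 0.8f;   // seconds per run cycle

// After 3 seconds of unnormalized playback the cycles no longer line up:
float time = 3.0f;
float walkPhase = (time % walkLength) / walkLength;  // 0.5  -> mid stride
float runPhase  = (time % runLength)  / runLength;   // 0.75 -> three quarters through the stride
```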

Having speed synchronized mixers would benefit MxM and would solve several issues, such as having to manually create animation blends like walk-to-run and run-to-walk, while also giving rise to much higher quality blending when transitioning from the blend tree to another animation within MxM, since the calibration data involving joint velocities would carry less error when the costing function is applied.
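
As a rough illustration of why that matters, here is a generic motion-matching style velocity cost term (a sketch only, not MxM's actual costing function); any speed scaling a normalized mixer applies to a clip scales the candidate velocities, inflating this term even for otherwise well-matched poses:

```csharp
using UnityEngine;

public static class PoseCostExample
{
    // Generic motion-matching style velocity cost (illustrative, not MxM's actual cost function).
    // If a normalized mixer plays a clip too fast or too slow, every candidate joint velocity
    // is scaled by that speed factor, which inflates this term for otherwise matching poses.
    public static float VelocityCost(Vector3[] queryVelocities, Vector3[] candidateVelocities, float weight)
    {
        float cost = 0f;
        for (int i = 0; i < queryVelocities.Length; i++)
            cost += weight * (candidateVelocities[i] - queryVelocities[i]).sqrMagnitude;

        return cost;
    }
}
```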

Each frame, the mixer's children should have their speeds set in proportion to the lengths of the various clips used in the mixer. For reference, below is an example of how to create a speed synchronized mixer using the Playables API, taken from a different animation asset:

```csharp
    protected void ApplySynchroniseChildren(ref bool needsMoreUpdates)
    {
        if (_SynchronisedChildren == null || _SynchronisedChildren.Count <= 1)
            return;

        needsMoreUpdates = true;

        var deltaTime = AnimancerPlayable.DeltaTime * CalculateRealEffectiveSpeed();
        if (deltaTime == 0)
            return;

        var count = _SynchronisedChildren.Count;

        // Calculate the weighted average normalized time and normalized speed of all children.

        var totalWeight = 0f;
        var weightedNormalizedTime = 0f;
        var weightedNormalizedSpeed = 0f;

        for (int i = 0; i < count; i++)
        {
            var state = _SynchronisedChildren[i];

            var weight = state.Weight;  //blend weight of each node within the mixer
            if (weight == 0)
                continue;

            var length = state.Length;  // Total time the clip would take to play at speed 1.
            if (length == 0)
                continue;

            totalWeight += weight;

            weight /= length;

            weightedNormalizedTime += state.Time * weight;  // state.Time is the number of seconds that have passed since the start of the animation
            weightedNormalizedSpeed += state.Speed * weight;
        }
        // Some pre-processor directives omitted here because GitHub formatting strips them. Please see the attached document.

        // If the total weight is too small, pretend they are all at Weight = 1.
        if (totalWeight < MinimumSynchroniseChildrenWeight)
        {
            weightedNormalizedTime = 0;
            weightedNormalizedSpeed = 0;

            var nonZeroCount = 0;
            for (int i = 0; i < count; i++)
            {
                var state = _SynchronisedChildren[i];

                var length = state.Length;
                if (length == 0)
                    continue;

                length = 1f / length;

                weightedNormalizedTime += state.Time * length;
                weightedNormalizedSpeed += state.Speed * length;

                nonZeroCount++;
            }

            totalWeight = nonZeroCount;
        }

        // Increment that time value according to delta time.
        weightedNormalizedTime += deltaTime * weightedNormalizedSpeed;
        weightedNormalizedTime /= totalWeight;

        var inverseDeltaTime = 1f / deltaTime;

        // Modify the speed of all children to go from their current normalized time to the average in one frame.
        for (int i = 0; i < count; i++)
        {
            var state = _SynchronisedChildren[i];
            var length = state.Length;
            if (length == 0)
                continue;

            var normalizedTime = state.Time / length;
            var speed = (weightedNormalizedTime - normalizedTime) * length * inverseDeltaTime;
            state._Playable.SetSpeed(speed);
        }

        // After this, all the playables will update and advance according to their new speeds this frame.
    }
```

SpeedSyncedMixerExample.txt

Craigjw commented 3 years ago

As a further improvement, a system could be added to specifically handle walk/run/walk transitions, which would blend the trajectory between walk and run (and back) based on an acceleration variable.
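
A minimal sketch of that idea, assuming a hypothetical blend value (0 = walk, 1 = run) driven by a fixed acceleration toward a target speed; the names and tuning values are illustrative, not part of MxM:

```csharp
using UnityEngine;

// Illustrative only: drives a 0..1 walk/run blend value from an acceleration-limited speed.
public class WalkRunBlendDriver
{
    public float WalkSpeed = 1.5f;     // m/s the walk clip is authored at (assumed)
    public float RunSpeed = 4.0f;      // m/s the run clip is authored at (assumed)
    public float Acceleration = 3.0f;  // m/s^2 used to ramp toward the target speed

    private float _currentSpeed;

    // Returns the blend value to feed into the walk/run blend: 0 = walk, 1 = run.
    public float UpdateBlend(float targetSpeed, float deltaTime)
    {
        // Ramp the simulated character speed toward the target with a fixed acceleration.
        _currentSpeed = Mathf.MoveTowards(_currentSpeed, targetSpeed, Acceleration * deltaTime);

        // Map the current speed onto the walk-to-run range to get the blend parameter.
        return Mathf.InverseLerp(WalkSpeed, RunSpeed, _currentSpeed);
    }
}
```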

Avindr commented 3 years ago

While having speed normalization on the blend spaces would be beneficial, I don't think it will work out as you envision it. There is no matching on active blend spaces in MxM; rather, the blend spaces are baked into separate samples of blended clips. It is this way due to limitations of motion matching itself. With a 'walk to run' blend space set up, the character would have to change pose every single frame to smoothly simulate the acceleration. That requires matching every single frame, which is inefficient, and it is also unlikely to turn out for the better (i.e. pick the correct pose). Ideally, poses for an animation section are sequential so that the movement does not get interrupted.

BlendSpaces (or rather scatter spaces) were implemented in MxM to cover directional gaps in continuous movement. Extrapolating their use to this case is not an ideal solution, and not one I want to invest too much time into.

I think the 'further improvement' you mentioned in your second comment is more beneficial because it would provide that continuous progression of sequential poses from walk to run. This is basically a composite animation with a blend time between clips, and I will probably implement it for v2.10 (the update after the one I'm dropping today). Regardless, I'll look into updating the blend spaces with speed syncing for that update too, but I don't think it is the solution you think it will be.
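
For illustration, a composite of two clips with a blend time could look roughly like the following Playables sketch; the graph setup and names here are assumptions for the example, not MxM's implementation:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// Illustrative composite: walk crossfades into run over a fixed blend time.
public class WalkToRunComposite : MonoBehaviour
{
    public Animator animator;
    public AnimationClip walkClip;
    public AnimationClip runClip;
    public float blendTime = 0.5f;  // seconds over which walk crossfades into run

    private PlayableGraph _graph;
    private AnimationMixerPlayable _mixer;
    private float _elapsed;

    private void OnEnable()
    {
        _graph = PlayableGraph.Create("WalkToRunComposite");
        _mixer = AnimationMixerPlayable.Create(_graph, 2);

        _graph.Connect(AnimationClipPlayable.Create(_graph, walkClip), 0, _mixer, 0);
        _graph.Connect(AnimationClipPlayable.Create(_graph, runClip), 0, _mixer, 1);

        var output = AnimationPlayableOutput.Create(_graph, "Composite", animator);
        output.SetSourcePlayable(_mixer);
        _graph.Play();
    }

    private void Update()
    {
        // Crossfade from walk (input 0) to run (input 1) over blendTime.
        _elapsed += Time.deltaTime;
        float t = Mathf.Clamp01(_elapsed / blendTime);
        _mixer.SetInputWeight(0, 1f - t);
        _mixer.SetInputWeight(1, t);
    }

    private void OnDisable()
    {
        if (_graph.IsValid())
            _graph.Destroy();
    }
}
```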

Craigjw commented 3 years ago

I still think that speed normalized blend spaces are viable and a worthwhile investment. There are too many assets where the clips in a given animation set (strafing, etc.) are of different lengths. When those are used with blend spaces, the result is janky and it is difficult to configure MxM to make them sync properly.

If the blend spaces were speed normalized, the animation "slices" generated from them would be of much higher quality. In a perfect world we'd have decent clips that are all the same length, but we don't. This is a potential solution for that issue and would lessen the effect of crap in = crap out.

With regard to walk-to-run: in Mecanim we can theoretically create a single clip by making a linear blend tree with walk and run, then, over a given time, lerping the blend value from 0 to 1 (walk to run) or 1 to 0 (run to walk) and recording the generated animation. The issue with this is that sampling that animation and exporting it as an FBX animation clip is extremely difficult. I suggested it merely as a possibility for what might be done in MxM in the future; an animation sequence could be sampled from this lerp and then used in place of an explicit walk-to-run animation.
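
A rough sketch of that Mecanim-side idea, assuming a blend tree driven by a float parameter named "Blend" and Unity's editor-only GameObjectRecorder to capture the pose each frame; the parameter name and workflow are assumptions, and actually exporting the result to FBX is, as noted, the hard part:

```csharp
#if UNITY_EDITOR
using UnityEditor.Animations;
using UnityEngine;

// Editor-only sketch: lerp a walk/run blend tree parameter over time while
// recording the resulting motion into an AnimationClip. The "Blend" parameter
// name and the overall workflow are assumptions for illustration only.
public class BlendTreeBakeExample : MonoBehaviour
{
    public Animator animator;
    public AnimationClip outputClip;   // a pre-created, empty clip asset
    public float duration = 1.0f;      // seconds to lerp from walk (0) to run (1)

    private GameObjectRecorder _recorder;
    private float _elapsed;
    private bool _saved;

    private void Start()
    {
        _recorder = new GameObjectRecorder(gameObject);
        _recorder.BindComponentsOfType<Transform>(gameObject, true);
    }

    private void Update()
    {
        // Drive the blend tree parameter: 0 = walk, 1 = run.
        _elapsed += Time.deltaTime;
        animator.SetFloat("Blend", Mathf.Clamp01(_elapsed / duration));
    }

    private void LateUpdate()
    {
        if (_saved)
            return;

        // Capture the evaluated pose for this frame.
        _recorder.TakeSnapshot(Time.deltaTime);

        if (_elapsed >= duration && outputClip != null)
        {
            _recorder.SaveToClip(outputClip);
            _saved = true;
        }
    }
}
#endif
```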

Avindr commented 3 years ago

Noted. As mentioned on the Discord, I plan to implement this despite my doubts about its effectiveness within the realm of motion matching. However, it's just going to have to wait until I have the time to implement it; v2.10 is my current expectation.

Avindr commented 3 years ago

Blend space normalization in MxM has been updated to be speed normalized; this was included in the last update. Closing this issue now.