CrendKing / avisynth_filter

DirectShow filters that put AviSynth and VapourSynth into video playback
MIT License

Possible audio desync due to incorrect frame times #27

Closed chainikdn closed 3 years ago

chainikdn commented 3 years ago

One user reported audio desync after playing 30+ mins of video. So I decided to check (one more time :)) how AVSF calculates frame times.

Consider a 23.976 fps constant-rate video. The frame times after the decoder are usually NOT constant but something like 42 ms - 41 - 42 - 42 - 41 - ..., so it's effectively a variable-rate sequence. Now when FRC is active AND the FRC rate is an integer value (e.g. 2), there's no problem: the output sequence will be 42/2, 42/2, 41/2, 41/2, 42/2, etc. But what happens with a non-integer FRC rate, e.g. 2.5? AVSF will produce something like this: 42/2.5, 42/2.5, 42/2.5, 41/2.5, 41/2.5, 42/2.5, ... But it's not correct! The correct sequence must be: 42/2.5, 42/2.5, (42/2.5 + 41/2.5)/2, 41/2.5, 41/2.5, 42/2.5, ... In this example we already made a 0.2 ms desync in the first 3 source frames.

source frames:       |-----|----------|
x2.5, avsf output:   |--|--|--|----|----|
x2.5, correct times: |--|--|---|----|----|
CrendKing commented 3 years ago

Edited your illustration to monospace.

Would the desync be gone if they seek?

I don't understand the relationship between the illustration and the description of the problem. I thought we discussed this before and I addressed it.

The frame time is calculated as you described. The type is REFERENCE_TIME, which is an integer, so there could be slight floating point precision drift. The starting time is a rolling offset. For example, if FRC is 2.5,

source frames:       ----------
                     ^    ^
x2.5, avsf output:   ----------
                     ^ ^ ^ ^ ^

^ indicates the starting offset of a frame. This is better than mixing | directly in the -s.

So if this desync is caused by floating point error, we can do some sort of periodic "sync up", where we force the starting time of the input frame and output frame to be the same. If it is caused by other reasons, then we need to properly reproduce the issue. I hope we don't need to watch a video for 30 minutes to trigger this. I remember I've watched long videos with the filter before and did not experience any problems.

chainikdn commented 3 years ago

I don't understand the relationship between the illustration and description of the problem.

two adjacent source frames have different lengths and an interpolated one "covers" part of the 1st source frame plus part of the 2nd source frame

In my picture, the length of the 1st frame is 5 and the length of the 2nd frame is 10. The interpolated frames must be 2, 2, 3, 4, 4, while AVSF will give 2, 2, 2, 4, 4. The total length of the interpolated frames must be the same as the source: 15 = 5 + 10 = 2 + 2 + (1 + 2) + 4 + 4, while AVSF will produce only 2 + 2 + 2 + 4 + 4 = 14 ==> audio desync.
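This timing scheme can be sketched in a few lines. The following is a minimal model inferred from the description, not AVSF or SVP code, and the function name is hypothetical: output frame n is placed at relative position n / frc in source-frame units, and that position is mapped through the (possibly variable) source frame durations.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal sketch (not AVSF code): place output frame n at relative
// position n / frc in source-frame units, then map that position
// through the variable source frame durations to an absolute time.
// srcStarts holds the start time of each source frame plus the stop
// time of the last one.
std::vector<long long> InterpolatedDurations(const std::vector<long long>& srcStarts,
                                             double frc) {
    std::vector<long long> durations;
    const int numSrc = static_cast<int>(srcStarts.size()) - 1;
    const int numOut = static_cast<int>(std::llround(numSrc * frc));
    long long prevTime = srcStarts.front();
    for (int n = 1; n <= numOut; ++n) {
        const double pos = n / frc;  // position in source-frame units
        const int i = std::min(static_cast<int>(pos), numSrc - 1);
        const double frac = pos - i;
        const long long srcDur = srcStarts[i + 1] - srcStarts[i];
        const long long t = srcStarts[i] + std::llround(frac * srcDur);
        durations.push_back(t - prevTime);
        prevTime = t;
    }
    return durations;
}
```

With source frames of 5 and 10 ticks at FRC x2.5, this yields the durations 2, 2, 3, 4, 4 described above, summing to 15.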

I hope we don't need to watch a video for 30 minutes to trigger this

I'd start with the correct math ;)

CrendKing commented 3 years ago

I don't think your math is correct. The interpolated frame's start and stop times are never guaranteed to sync up with its source frame's times if FRC is not an integer. The only guarantee is that the last interpolated frame's start time is less than the source frame's stop time.

As in your example, if source frames have 5 and 10 ticks (sums to 15), the interpolated frames are 2, 2, 2, 4, 4, 4, which sums to 18 ticks. This happens because after 5 interpolated frames, there were only 14 ticks, which is less than 15, so avsf will deliver one more inter frame.

Another example: suppose the source is 5, 10, 5 (sum 20); the inter frames will be 2, 2, 2, 4, 4, 4, 2 (sum 20). The next inter frame will be synced with the 4th upcoming source frame.

You see, in this rolling system, if the video is infinite, then no matter what the FRC rate is (as long as it is a rational number), at some point the source and output will sync up.

I'm not saying this algorithm is perfect visually, but I think it is correct mathematically. At least it is simple in principle: all inter frames' durations are the same, and they eventually sync.

Now to your algorithm: why would the 3rd inter frame have 1 more tick than the rest? If inter frames don't have equal durations, what prevents them from being 2, 2, 2, 5, 4 or 1, 4, 2, 5, 3? In your example they sync up after 2 frames; in my example they sync after 3. Why is your algorithm better?

chainikdn commented 3 years ago

Now to your algorithm, why would the 3rd inter frame have 1 more tick than the rest?

The interpolation algorithm produces intermediate frames at specific time positions. For a 2.5x rate, the output sequence is 0.0 - 0.4 - 0.8 - 1.2 - 1.6 - 2.0 - ..., where 0.0 is the 1st frame, 1.0 the 2nd frame, 2.0 the 3rd frame. AVSF forces these frames to be displayed at incorrect times.

I don't think your math is correct.

It is correct :)

chainikdn commented 3 years ago

In fact, SVP libs for VapourSynth set the correct frame duration, which allows mpv to place output frames where they belong... something like this:

double dur, dur_rel = 256.0 / frcValue;
if (tm + dur_rel < 256.0)
    dur = dur_src / frcValue;
else
    dur = dur_src * (256.0 - tm) / 256.0 + dur_ref * (tm + dur_rel - 256.0) / 256.0;

where

And before this math was developed there was similar audio desync issue in mpv.

CrendKing commented 3 years ago

Interpolation algorithm produces intermediate frames at specific time positions. For 2.5x rate, the output sequence is 0.0 - 0.4 - 0.8 - 1.2 - 1.6 - 2.0 - ..., where 0.0 is the 1st frame, 1.0 - 2nd frame, 2.0 - 3rd frame. AVSF forces to display these frames at incorrect times.

By your logic, the 3rd frame should only have half the duration, so that the 4th inter frame syncs with its source frame, like 2, 2, 1, 4, 4, 2, since that's how the inter frames are produced. Is that what you want? One of the older versions of avsf had that logic, and I remember you specifically called it out as wrong.

Otherwise, I don't understand why the 3rd inter frame should be (3rd + 4th) / 2. By doing that, you are offsetting the 4th frame even further from its source frame.

Also, what about precision loss, like if the source duration is 7 and FRC is 2.3? Doesn't your method lose duration by syncing up locally instead of globally?

chainikdn commented 3 years ago

By your logic, the 3rd frame should only have half of the duration

nope... by my logic all the frames have a relative duration of 0.4. Dunno how to make it more clear.

Source frame 0 has timestamp 0 ms, source frame 1 has timestamp 100 ms, source frame 2 has timestamp 300 ms. Then the interpolated frames must be placed at: 0 ms, 40 ms, 80 ms, 140 ms, 220 ms, 300 ms.

CrendKing commented 3 years ago

Wait, tell me one thing. When SVP computes the inter frame for 80 ms, does it take only source frame 0 as input, or both source frames 0 and 1? Your logic makes sense if it takes both as input.

Assuming you confirm it takes both, it could still be only SVP. Other frame interpolation libraries could take just one source frame; then your logic doesn't make sense for them. There needs to be something to tell avsf which algorithm should be used.

chainikdn commented 3 years ago

yes, obviously it takes both frames, and calculates the inner objects' movement for the exact timestamp = 0.4 between them

Other frame interpolation libraries could just take one source frame. Then your logic doesn't make sense for them.

it doesn't matter at all how many frames it takes for computation. the whole point of motion interpolation is to move the objects to the particular time point.

chainikdn commented 3 years ago

Anyway, considering a 10000-frame, 6 min 00 sec video: it must be 25000 frames and the exact same 6 min 00 sec after ANY x2.5 frame rate conversion. Which is not the case with the current AVSF algorithm.

CrendKing commented 3 years ago

obviously it takes both frames motion interpolation

For frame interpolation, yes, it should use two frames as input. But there could be a library that just duplicates each source frame at a fractional rate. For those, if source frames have variable duration, your logic doesn't make sense.

I think the best approach is default to one algorithm, and allow user to switch to the other algorithm via setting.

Anyway, considering a 10000-frames 6 mins 00 secs video, it must be a 25000 frames and the exact same 6 mins 00 secs after ANY x2.5 frame rate conversion. Which is not the case with current AVSF algorithm.

My current algorithm guarantees that after any amount of time, the output is no longer than current_timestamp + (last_source_frame_duration) / frc_rate.

Your algorithm guarantees that after odd number of frames, the output is no longer than current_timestamp + (future_even_frame_duration) / frc_rate. And then the drift goes away after the even frame.

Both algorithms have temporal drift, which is contained within the limit of one output frame length. I don't think there would be noticeable audio desync with that little drift either way. It is more of a difference in flavor.

chainikdn commented 3 years ago

Not agreed at all :D Your algorithm behaves differently for integer and non-integer frc rates, which is just wrong. Also keep in mind that from Avisynth's point of view all the videos are CFR only.

My current algorithm guarantees

I can't see how it guarantees anything. In some circumstances it can do ANY time drift; it's just a matter of probability.

CrendKing commented 3 years ago

differently for integer and non-integer frc rates, which is just wrong

it can do ANY time drift. it's just a matter of probability

Then go back to read the code. It's just about 10 lines. Also my explanation here:

Another example: suppose the source is 5, 10, 5 (sum 20); the inter frames will be 2, 2, 2, 4, 4, 4, 2 (sum 20). The next inter frame will be synced with the 4th upcoming source frame.

Basically, assume there are in total 4 source frames (5, 10, 5, 10). Your algorithm syncs at the end of frame 2 and drifts by 1 tick at the end of frame 3. Mine drifts by 1 tick at the end of frame 2 and syncs at the end of frame 3.

CFR only

It does not matter to the algorithm whether the video is CFR or VFR. As long as source frame duration / FRC is not an integer, the temporary drift will happen. It is just a question of when.

chainikdn commented 3 years ago

Mine drifts by 1 tick at the end of frame 2 and syncs at the end of frame 3.

It's only a coincidence within this particular example. Your code will implicitly sync only when the time drifts too much into the past (which looks more like a dirty hack to me):

if (_nextOutputFrameStartTime < preSrcFrameInfo.startTime)
    _nextOutputFrameStartTime = preSrcFrameInfo.startTime;

the remaining 8 lines of code won't ever sync with anything

CrendKing commented 3 years ago

Well, I see you still don't understand the code. Let's draw the conclusion here. I'll first do some research on AviSynth. Then maybe try to implement your algorithm. Then probably provide setting to switch the behavior.

chainikdn commented 3 years ago

ok, on my side I'll fix the code that I don't understand and see if it'll help the user to remove audio desync ;)

chainikdn commented 3 years ago

this is what I'm talking about

        SourceFrameInfo &preSrcFrameInfo = iter->second;

        if (_nextSourceFrameNb <= 1) {
            _nextOutputFrameStartTime = preSrcFrameInfo.startTime;
        }

        REFERENCE_TIME num, den;
        g_avs->GetScriptFRCRate(num, den);
        double relativeDuration = double(den) / num;

        {
            std::unique_lock outLock(_outputFramesMutex);

            while (srcFrameInfo.startTime > _nextOutputFrameStartTime) {
                REFERENCE_TIME outputFrameTime = -1;
                REFERENCE_TIME duration = srcFrameInfo.startTime - preSrcFrameInfo.startTime;

                if (relativeDuration < 1.0) { //only increased frame rates require special treatment
                    double relativeFrameStart = double(_nextOutputFrameNb) * relativeDuration - preSrcFrameInfo.frameNb;

                    if (relativeFrameStart < -relativeDuration) { // should not happen!
                        g_env.Log("ACHTUNG!!!");
                    }
                    else if (relativeFrameStart < -0.001) {
                        if (--iter != _sourceFrames.cend()) {
                            const REFERENCE_TIME prePreSrcDuration = preSrcFrameInfo.startTime - iter->second.startTime;
                            outputFrameTime = LONGLONG(-relativeFrameStart * prePreSrcDuration + (relativeDuration + relativeFrameStart) * duration);
                        }
                    }
                    else if (relativeFrameStart + relativeDuration > 1.001) {
                        break; //passes the end of source frame, will continue on the next frame
                    }
                }
                if (outputFrameTime < 0) { //default behavior
                    outputFrameTime = llMulDiv(duration, g_avs->GetScriptAvgFrameTime(), g_avs->GetSourceAvgFrameTime(), 0);
                }

                const REFERENCE_TIME outStartTime = _nextOutputFrameStartTime;
                const REFERENCE_TIME outStopTime = outStartTime + outputFrameTime;
                _nextOutputFrameStartTime = outStopTime;

                g_env.Log("Create output frame %6i for source frame %6i at %10lli ~ %10lli", _nextOutputFrameNb, preSrcFrameInfo.frameNb, outStartTime, outStopTime);

                _outputFrames.emplace_back(OutputFrameInfo { _nextOutputFrameNb, outStartTime, outStopTime, &preSrcFrameInfo });
                _nextOutputFrameNb += 1;
                preSrcFrameInfo.refCount += 1;
            }
        }

// !!!
auto AvsHandler::GetScriptFRCRate(REFERENCE_TIME& num, REFERENCE_TIME& den) const -> void {
    num = (REFERENCE_TIME)_scriptVideoInfo.fps_numerator * _sourceVideoInfo.fps_denominator;
    den = (REFERENCE_TIME)_scriptVideoInfo.fps_denominator * _sourceVideoInfo.fps_numerator;
}

Edited: double(g_avs->GetScriptAvgFrameTime()) / g_avs->GetSourceAvgFrameTime() was not good enough, better fetch real numerators / denominators from AVS.

CrendKing commented 3 years ago

My demonstration is here: https://github.com/CrendKing/avisynth_filter/commit/46a7c32286e16f8376a2e9e099858b739f791a84

I added a log to print out the frame time drift. I used the same video you once sent me (the anime with wrong frame stop time, and being a VFR video), SVP FRC rate 2.5. Here's the log for my algorithm and your algorithm:

CrendKing.log

chainikdn.log

Things to check from the log:

  1. In CrendKing.log, all output frames' durations for the same source frame are the same. For example, the 3 output frames for source frame 0 are 135999, and the 2 frames for source frame 1 are 131999.
  2. In chainikdn.log, the "edge" output frame has a different duration than the others. For example, the first 2 output frames for source frame 0 are 135999 and the 3rd (edge) is 134000, because the duration of source frame 1 is shorter than that of source frame 0.
  3. Going down the line, you can see from both log files that the "Frame time drift" number increases and decreases over time, but it never goes over one output frame time (e.g. as large as 163999).

If my implementation is wrong, or I still misunderstand your logic, let me know. If you confirm this code is correct, I'll proceed next step.

chainikdn commented 3 years ago

If you confirm this code is correct, I'll proceed next step.

yeah, looks good. BTW I'm still waiting for the user to confirm (or not) whether this timestamps fix solved the audio desync for him...

CrendKing commented 3 years ago

Like I said, there shouldn't be desync caused by the algorithm. I suspect his video was out of sync to begin with.

Also, I did some research on existing AviSynth plugins and libraries. I didn't find any reasonable frame insertion plugin that takes only one source frame at the edges. ConvertFPS(), mvtools and SVP all do sort-of blending approaches between two frames. Thus I'll just switch the default without providing a setting.

chainikdn commented 3 years ago

I suspect his video was out of sync to begin with.

nope. moreover, it doesn't go out of sync with ffdshow.

Thus I'll just switch the default without providing setting.

using the correct math is a good thing in any case, even if the desync is caused by something else. However, as I said before, I already fixed a real audio desync in a VapourSynth plugin exactly by switching to this "algorithm"

chainikdn commented 3 years ago

The user said he doesn't feel audio desync with the fixed build.

CrendKing commented 3 years ago

Good to know. Thanks.

chainikdn commented 3 years ago

ok, one user found a video that hangs with the new algorithm. Both the old algorithm and mine (see the code above) don't hang. Sample file (hangs between 0:30~0:40)


For example, at FRC = x2 it sometimes gives strange frame times (precision/rounding error probably?), but I'm not sure that's the actual reason for the hanging, because it also hangs in x2.5 mode with "normal" frame times in the log...

T   6968 @    37277: Create output frame   1348 for source frame    695 at  371120720 ~  371370998 [+250278]
T   6968 @    37277: Create output frame   1349 for source frame    695 at  371370998 ~  371371000 [+2]
...
T  22500 @    37321: Start processing output frame   1349 at  371370998 ~  371371000 frameTime          2 for source    695 Output queue size  0 Front     -1 Back     -1
T  22500 @    37346: Delivered frame   1349
T  22500 @    37346: GarbageCollect frames until    695 pre size   4 post size   3
<hangs here>
CrendKing commented 3 years ago

I can reproduce. Let me take a look.

CrendKing commented 3 years ago

The reason for the hang is that the sample file has lots of frames with duplicate start times. For example, the 176th and 177th source frames both have the start time 59392667. The filter has bad frame time detection, but it was comparing against the output frame time, not the previous source frame's time, so it didn't trigger. I changed the code and now it's working.

The weird frame time thingy was due to precision loss. You can see that the average source frame duration should be 333666, but the first frame's duration is 333667 (because the second frame starts at 333667). Doing FRC x2 results in both output frames being 166833 long, requiring a third frame to make up the 1 tick difference. This introduces some frame time drift, but if the source frames remained the same duration, it would be fine. The problem became worse when suddenly at the 171st frame the video becomes VFR.

I'm testing a small trick: if an output frame's stop time is 1ms (10 in the 100ns unit) close to the next source frame's start time, I artificially extend that output frame to make up the difference (padding). With this, it can almost guarantee 0 frame time drift at integer FRC ratios. My question to you is: does SVP have a similar trick internally? If not, then there could be 1 frame out of sync for every 16666 frames in a 60 fps video.

Test build: AviSynthFilter.zip (Check the log for all the changes in action) Commit: https://github.com/CrendKing/avisynth_filter/commit/8de6129e7b6b9a7253eaed9a76b9cbaf4dd477d7

chainikdn commented 3 years ago

My question to you is, does SVP has similar trick internally?

SVP as the "Avisynth plugin" knows nothing about frame times and durations, it just makes N*frc_rate frames out of N source frames.

CrendKing commented 3 years ago

Let me rephrase the question. How does SVP determine which two source frames to use when generating an output?

Let me illustrate with a contrived example. Suppose we have a series of source frames, each with frame duration 7, and we are doing FRC 2x. So each output frame from SVP should have frame duration 3 due to precision loss, right?

So it goes as:

OF (output frame) 0 starts 0, stops 3, SF (source frames) 0 and 1
OF 1 starts 3, stops 6, SF 0 and 1
OF 2 starts 6, stops 9, SF 0 and 1 <--
OF 3 starts 9, stops 12, SF 1 and 2
OF 4 starts 12, stops 15, SF 1 and 2 <--
OF 5 starts 15, stops 18, SF 2 and 3

Now, imagine I pad some output frames if their stop time is very close (no more than 1) to the next source frame's start time. This makes up the precision loss. It goes as:

OF 0 starts 0, stops 3, SF 0 and 1
OF 1 starts 3, stops 7, SF 0 and 1
OF 2 starts 7, stops 10, SF 1 and 2 <--
OF 3 starts 10, stops 14, SF 1 and 2
OF 4 starts 14, stops 17, SF 2 and 3 <--
OF 5 starts 17, stops 21, SF 2 and 3

The second sequence definitely looks nicer, but you can see a difference in the SF pairing of OF 2 and 4. Which sequence actually happens in SVP?
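The padding trick can be modeled like this (a sketch under the assumption of a constant source frame duration; not the filter code, and details like the per-boundary loop may differ): output frames get the truncated duration srcDur / frc, but a frame that would stop within `padding` ticks of the next source boundary is stretched to land exactly on it.

```cpp
#include <cassert>
#include <vector>

// Sketch of the padding trick (not the filter code): constant-duration
// output frames of srcDur / frc (truncated), but when an output frame
// would stop within `padding` ticks of the next source frame's start,
// stretch it to land exactly on that boundary.
std::vector<long long> PaddedDurations(long long srcDur, double frc,
                                       int numSrcFrames, long long padding) {
    std::vector<long long> durations;
    const long long outDur = static_cast<long long>(srcDur / frc);
    long long nextStart = 0;
    for (int i = 0; i < numSrcFrames; ++i) {
        const long long srcStop = (i + 1) * srcDur;
        while (nextStart < srcStop) {
            long long stop = nextStart + outDur;
            if (stop < srcStop && srcStop - stop <= padding) {
                stop = srcStop;  // pad up to the source boundary
            }
            durations.push_back(stop - nextStart);
            nextStart = stop;
        }
    }
    return durations;
}
```

With padding = 1, srcDur = 7 and FRC 2x this reproduces the padded sequence above (durations 3, 4, 3, 4, ...); with padding = 0 every output frame stays 3 ticks long and the frames drift against the source boundaries.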

chainikdn commented 3 years ago

How does SVP determine which two source frames to use when generating an output?

AVS requests frame number N from SVP filter -> SVP filter requests frames N/frc_rate and N/frc_rate+1 from the upstream filter. OF 2 will always be made from SF 1 and 2.
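So the pairing follows from integer division by the FRC rate. A hypothetical helper (not SVP code), taking the rate as a num/den ratio to avoid floating point:

```cpp
#include <cassert>
#include <utility>

// Hypothetical helper, not SVP code: which two source frames are used
// for output frame n, given the FRC rate as the ratio frcNum/frcDen
// (e.g. 5/2 for x2.5). The left neighbor is floor(n / (frcNum/frcDen)).
std::pair<int, int> SourceFramesForOutput(int n, int frcNum, int frcDen) {
    const int left = n * frcDen / frcNum;
    return { left, left + 1 };
}
```

At FRC x2 (num = 2, den = 1), output frame 2 maps to source frames 1 and 2, as stated above.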

CrendKing commented 3 years ago

Thanks for explaining. So does the test build solve all the problems? If yes, do you urgently need a release for this?

chainikdn commented 3 years ago

So far so good... at least it doesn't hang anymore. I'll let you know if (when) more problems arise :)

do you urgently need a release for this?

If you mean like "0.8.3" - it's up to you, I'm good with the provided binaries.

chainikdn commented 3 years ago

actually now another problem is possible:

T   2512 @     6903: Processed source frame:      0 at   74580000 ~   74996667 duration(literal)     416667 nextSourceFrameNb      0 nextOutputFrameStartTime          0
T   2512 @     6908: Processed source frame:      1 at   75000000 ~   75416667 duration(literal)     416667 nextSourceFrameNb      1 nextOutputFrameStartTime          0
T   2512 @     6910: Processed source frame:      2 at   75420000 ~   75836667 duration(literal)     416667 nextSourceFrameNb      2 nextOutputFrameStartTime          0
T   2512 @     6910: Create output frame      0 for source frame      0 at          0 ~     118124 duration     118124
T   2512 @     6910: Create output frame      1 for source frame      0 at     118124 ~     236248 duration     118124
T   2512 @     6910: Create output frame      2 for source frame      0 at     236248 ~     354372 duration     118124
T   2512 @     6910: Create output frame      3 for source frame      0 at     354372 ~     472496 duration     118124
T   2512 @     6910: Create output frame      4 for source frame      0 at     472496 ~     590620 duration     118124
T   2512 @     6910: Create output frame      5 for source frame      0 at     590620 ~     708744 duration     118124
T   2512 @     6911: Create output frame      6 for source frame      0 at     708744 ~     826868 duration     118124
T   2512 @     6911: Create output frame      7 for source frame      0 at     826868 ~     944992 duration     118124
T   2512 @     6911: Create output frame      8 for source frame      0 at     944992 ~    1063116 duration     118124
T   2512 @     6911: Create output frame      9 for source frame      0 at    1063116 ~    1181240 duration     118124
...
T   2512 @     6914: Create output frame    633 for source frame      0 at   74772492 ~   74890616 duration     118124
T   2512 @     6914: Create output frame    634 for source frame      0 at   74890616 ~   75000000 duration     109384
T   2512 @     6914: Frame time drift:          0
T  11764 @     6914: Start processing output frame      0 at          0 ~     118124 duration     118124 for source      0 Output queue size 634 Front      1 Back    634
T  29164 @     6914: Get source frame: frameNb      1 Input queue size  3
T  13260 @     6914: Get source frame: frameNb      0 Input queue size  3
T  11280 @     6914: Get source frame: frameNb      1 Input queue size  3
T  18408 @     6914: Get source frame: frameNb      1 Input queue size  3
T   2512 @     6917: Processed source frame:      3 at   75830000 ~   76246667 duration(literal)     416667 nextSourceFrameNb      3 nextOutputFrameStartTime   75000000
chainikdn commented 3 years ago

and a bonus question: consider a good old straight-CFR 24 fps source playing on an 85 Hz screen -> FRC rate = 32/9. All the output frames are expected to have the same duration:

T  17236 @     2668: Processed source frame:      0 at    6670000 ~    7086667 [+416667], nextSourceFrameNb      0 nextOutputFrameStartTime          0
T  17236 @     2673: Processed source frame:      1 at    7080000 ~    7496667 [+416667], nextSourceFrameNb      1 nextOutputFrameStartTime          0
T  17236 @     2676: Processed source frame:      2 at    7500000 ~    7916667 [+416667], nextSourceFrameNb      2 nextOutputFrameStartTime          0
T  17236 @     2676: Create output frame      0 for source frame      0 at    6670000 ~    6785311 duration     115311
T  17236 @     2676: Create output frame      1 for source frame      0 at    6785311 ~    6900622 duration     115311
T  17236 @     2676: Create output frame      2 for source frame      0 at    6900622 ~    7015933 duration     115311
T  17236 @     2676: Create output frame      3 for source frame      0 at    7015933 ~    7080000 duration      64067
...
T  17236 @     2681: Processed source frame:      3 at    7920000 ~    8336667 [+416667], nextSourceFrameNb      3 nextOutputFrameStartTime    7080000
T  17236 @     2681: Create output frame      4 for source frame      1 at    7080000 ~    7198124 duration     118124
T  17236 @     2681: Create output frame      5 for source frame      1 at    7198124 ~    7316248 duration     118124
T  17236 @     2681: Create output frame      6 for source frame      1 at    7316248 ~    7434372 duration     118124
T  17236 @     2681: Create output frame      7 for source frame      1 at    7434372 ~    7500000 duration      65628

used to work correctly in 0.8.2

Also, that MAX_OUTPUT_FRAME_DURATION_PADDING seems to add huge audio desync by itself. Overall, I just returned to 0.8.2 plus a fix for "start time not going forward".

CrendKing commented 3 years ago

Oops, that's a bad bug. Sorry about that. I removed something I should not have when introducing the mechanic. The fix is simple. I also think both problems you mentioned are due to the same bug, not MAX_OUTPUT_FRAME_DURATION_PADDING. Can you try AviSynthFilter.zip? Let me know if it still has issues.

Also, I delisted v0.8.3 due to this issue. I'll release v0.8.4 once you confirm the fix.

Sorry again.

chainikdn commented 3 years ago

your fix is for issue #1 only. the frame times in 0.8.3 are still incorrect.

chainikdn commented 3 years ago

and the MAX_OUTPUT_FRAME_DURATION_PADDING condition should probably be like this to stop the audio desync:

if (outStopTime < preSrcFrameInfoAfterEdge.startTime &&
    outStopTime >= preSrcFrameInfoAfterEdge.startTime - MAX_OUTPUT_FRAME_DURATION_PADDING)

... and fix wrong frame times btw

CrendKing commented 3 years ago

Fixed the padding: AviSynthFilter.zip. The frame time issue seems to be related to the padding. Let me know.

chainikdn commented 3 years ago

this is better