Open ahmadsharif1 opened 1 week ago
I have seen videos where AVFrames within a single stream have different dimensions.

Example: the first AVFrame of the stream could be 100x100 while the second one could be 200x200.

At a minimum, we should detect this condition and report it to the user. Alternatively, we could handle it gracefully: for non-batch functions we could return each frame at its actual size; for batch functions that return a stacked tensor we could either fail or resize frames to a common size before stacking (see the sketch below).
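As a rough illustration of both options for the batch case, here is a minimal sketch that detects mismatched frame sizes and either fails or resizes everything to a common size before stacking. It uses PyAV and PyTorch purely for illustration; the function name and behavior are assumptions, not this library's actual API.

```python
# Hedged sketch, not this library's actual API: detect variable frame sizes in a
# stream and either fail or resize to a common size before stacking into a batch.
import av
import torch
import torch.nn.functional as F

def decode_as_batch(path: str, fail_on_mismatch: bool = True) -> torch.Tensor:
    frames = []
    with av.open(path) as container:
        for frame in container.decode(video=0):
            # HWC uint8 RGB -> CHW tensor
            rgb = torch.from_numpy(frame.to_ndarray(format="rgb24")).permute(2, 0, 1)
            frames.append(rgb)

    sizes = {tuple(f.shape[-2:]) for f in frames}
    if len(sizes) > 1:
        if fail_on_mismatch:
            raise RuntimeError(f"Frames in this stream have different sizes: {sizes}")
        # Otherwise resize everything to the first frame's size before stacking.
        target = frames[0].shape[-2:]
        frames = [
            F.interpolate(f.unsqueeze(0).float(), size=target,
                          mode="bilinear", antialias=True).squeeze(0)
            for f in frames
        ]
    return torch.stack(frames)
```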
Do we have such a video that we could add to our test suite?