metal3d / keras-video-generators

Keras generators to generate sequences from videos as input
MIT License

[SlidingFrameGenerator] Sequence_time not working #32

Open TomSeestern opened 3 years ago

TomSeestern commented 3 years ago

System Information

Describe the bug: Sequence_time always defaults to the full video length. See the Colab notebook for example code: Colab Notebook
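For context, here is a minimal sketch of the kind of setup the report describes; the folder layout, class names and parameter values are placeholders and are not taken from the notebook:

```python
# Minimal sketch of the reported setup; paths, class names and values
# are placeholders, not copied from the Colab notebook.
from keras_video import SlidingFrameGenerator

gen = SlidingFrameGenerator(
    sequence_time=0.5,                        # seconds each sequence should cover
    glob_pattern='videos/{classname}/*.avi',  # assumed folder layout
    classes=['some_class'],                   # hypothetical class name
    nb_frames=5,
    batch_size=5,
    target_shape=(224, 224),
)

X, y = gen[0]
print(X.shape)  # expected: (5, 5, 224, 224, 3) -- 5 sequences of 5 frames each
```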

metal3d commented 3 years ago

Hello,

Sorry, but I don't see the problem in your example notebook: there are 5 frames in the sequence as expected, and the batch size is 5 as you defined it.

Can you explain where I'm missing the problem?

TomSeestern commented 3 years ago

Hey there!

Thanks for the fast reply! Maybe I misunderstood how the generator works. I expected the first batch to contain only images from the first sequence_time step.
So in my example I assumed the first batch would contain 5 images from the range frame 0 to frame 15 (sequence_time=0.5 @ 30 fps).
Instead I got a batch with 5 frames from the range frame 0 to frame 110, like the VideoFrameGenerator without a sliding window does(?).
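To make that expectation concrete, here is a quick illustrative calculation (plain Python, not the library's code) of the frame indices a 0.5 s window at 30 fps should cover:

```python
# Illustrative only: which frames a sliding window of sequence_time
# seconds should cover, using the numbers from the example above.
fps = 30
sequence_time = 0.5   # seconds per sequence
nb_frames = 5         # frames kept per sequence

window = int(fps * sequence_time)   # 15 frames, i.e. frames 0..14 for the first window
step = window // nb_frames          # 3
first_sequence = list(range(0, window, step))[:nb_frames]
print(first_sequence)               # [0, 3, 6, 9, 12] -- all inside frames 0..15
```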

Is that not how it's supposed to work? :)

metal3d commented 3 years ago

OK, the example with the basketball players seems to be different from the last one... There I see the same sequences you mention. And no, it's not expected to produce this; you should get a sliding window, as you say.

That's weird, none of the tests I did had that problem, so I will run some tests and check what happens.

Thanks a lot for this issue report ;)

Fab16BSB commented 2 years ago

OK, that post confirms why I always got overfitting when I tried to train with the sliding generator to improve performance: the generator always takes the same images to build its sequences. If I have time I will try to look at the code.

@TomSeestern if you need it, try the plain video generator: I use it to train on X images extracted from videos, and then I predict continuous videos sequence by sequence with successive images. The results are not the best, but it works fine.
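For anyone wanting to try the same workaround, here is a minimal sketch of that kind of setup; the folder layout, class names and parameter values are placeholders:

```python
# Sketch of the workaround described above: train on fixed sequences from
# the plain VideoFrameGenerator instead of the sliding one. All paths,
# class names and values are placeholders.
from keras_video import VideoFrameGenerator

train_gen = VideoFrameGenerator(
    glob_pattern='videos/{classname}/*.avi',  # one folder per class (assumed layout)
    classes=['walk', 'run'],                  # hypothetical class names
    nb_frames=5,
    batch_size=8,
    target_shape=(224, 224),
    shuffle=True,
)

# model.fit(train_gen, epochs=10)
# At prediction time, slice a continuous video into successive
# nb_frames-long sequences and call model.predict on each one.
```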

Fab16BSB commented 2 years ago

I am not a Python OOP expert. I looked at the code and I don't understand why, but the number of frames is correct, with or without shuffle, on line 92: "frames": np.arange(i, i + stop_at)[::step][: self.nbframe]. The problem seems to start at line 177 (using the cache or not): in my case, all the images of my batch seem to be the same with the sliding generator. I tried adding print((frames[0] == frames[1]).all()) before the return on line 192 and I got True as the answer. I also tried commenting out the transformation on line 183, with the same result.

So I suppose the problem lies in the _get_frames method (generator.py), because the calculation of the step on line 403 doesn't use the sequence_time defined by the user. But I'm not sure!
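If that reading is right, the effect can be shown with a toy calculation; the variable names and numbers below are illustrative, not the library's actual code:

```python
# Toy illustration of the suspected bug: a step derived from the full
# clip length spreads the sequence over the whole video, while a step
# derived from sequence_time keeps it inside the intended window.
total_frames = 120    # e.g. a 4-second clip at 30 fps
fps = 30
sequence_time = 0.5
nb_frames = 5

step_full_clip = total_frames // nb_frames             # 24 -> frames 0, 24, 48, 72, 96
step_sequence = int(fps * sequence_time) // nb_frames  # 3  -> frames 0, 3, 6, 9, 12
print(step_full_clip, step_sequence)
```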

Fab16BSB commented 2 years ago

I propose a (not optimized) solution, and I think it is correct but I'm not sure: I added an optional parameter to _get_frames to pass the defined sequence length, and the step is calculated from this info when it is not None and the sliding generator is chosen.
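A sketch of that idea (with assumed function and parameter names, not the repository's actual code) could look like this:

```python
# Sketch of the proposed change, with assumed names -- not the actual
# code from generator.py: pass the wanted sequence length (in frames)
# so the step is computed from it instead of from the whole clip.
import numpy as np

def _get_frames(total_frames, nb_frames, frames_per_sequence=None, start=0):
    """Return frame indices for one sequence.

    frames_per_sequence is the optional parameter proposed above: when it
    is given (sliding generator), the step comes from it; otherwise the
    old behaviour (step over the full clip) is kept.
    """
    if frames_per_sequence is not None:
        step = max(1, frames_per_sequence // nb_frames)
    else:
        step = max(1, total_frames // nb_frames)
    return np.arange(start, start + step * nb_frames, step)[:nb_frames]

# sequence_time=0.5 at 30 fps -> frames_per_sequence = 15
print(_get_frames(total_frames=330, nb_frames=5, frames_per_sequence=15))
# -> [ 0  3  6  9 12]
```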