This package offers classes that generate sequences of frames from video files using Keras (officially included in TensorFlow as of the 2.0 release). The resulting frame sequences work with `TimeDistributed`, `GRU`, `LSTM`, and other recurrent layers.
An example of usage can be viewed in nbviewer here.
Requirements are:
TensorFlow 2 works as well. These requirements are not listed in `setup.py` so that you can choose the version, or try another backend. This means you will need to install a backend yourself (e.g. `pip install tensorflow`).
If you want to compile the package, you need:
You can install the package via pip:
pip install keras-video-generators
If you want to build from source, clone the repository then:
python setup.py build
The module name (`keras_video`) is different from the installation package name (`keras-video-generators`). Import the entire module with

import keras_video
or load a single generator:
from keras_video import VideoFrameGenerator
The package contains three generators that inherit the `Sequence` interface and may be used with `model.fit_generator()`:
- `VideoFrameGenerator` provides a pre-determined number of frames from the entire video
- `SlidingFrameGenerator` provides frames with decay for the entire video or with a sequence time
- `OpticalFlowGenerator` provides an optical flow sequence from frames with different methods (experimental)

Each generator accepts a standard set of parameters:
- `glob_pattern`: must contain `{classname}`, e.g. `'./videos/{classname}/*.avi'`; the "classname" placeholder in the string is used to detect classes
- `nb_frames`: the number of frames in the sequence
- `batch_size`: the number of sequences in one batch
- `transformation`: can be `None` or a Keras `ImageDataGenerator` for data augmentation
- `use_frame_cache`: use with caution; if set to `True`, the class keeps frames in memory (without augmentation), which requires a lot of memory

See the class documentation for all parameters.
The generators rely on `tensorflow.keras` internally, and the `split_test` and `split_val` parameters can be used to split the video set into train, validation, and test generators.
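The sequences these generators produce can feed a recurrent model. The sketch below builds a small `TimeDistributed` CNN followed by a `GRU` with `tensorflow.keras`; the frame count, image size, layer widths, and class count are arbitrary, and the final `fit_generator` call is left as a comment since it needs real video files:

```python
from tensorflow.keras import layers, models

NB_FRAMES, H, W, C = 5, 224, 224, 3  # must match the generator's settings
NB_CLASSES = 3                       # arbitrary example value

model = models.Sequential([
    layers.Input(shape=(NB_FRAMES, H, W, C)),
    # Apply the same small CNN to every frame of the sequence
    layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu')),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # Aggregate the per-frame features over time
    layers.GRU(32),
    layers.Dense(NB_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit_generator(train)  # where `train` is one of the generators above
```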