
Stochastic Adversarial Video Prediction
https://alexlee-gk.github.io/video_prediction/
MIT License


[Project Page] [Paper]

TensorFlow implementation for stochastic adversarial video prediction. Given a sequence of initial frames, our model predicts future frames for multiple possible futures. For example, in the two sequences below, the ground-truth sequence is shown on the left and random predictions from our model on the right. Predicted frames are indicated by the yellow bar at the bottom. For more examples, visit the project page.

Stochastic Adversarial Video Prediction,
Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine.
arXiv preprint arXiv:1804.01523, 2018.

An alternative implementation of SAVP is available in the Tensor2Tensor library.

Getting Started

Prerequisites

Installation

Use a Pre-trained Model

Model Training

Datasets

Download the datasets using the following script. These datasets are collected by other researchers. Please cite their papers if you use the data.

To use a different dataset, preprocess it into TFRecords files and define a class for it. See kth_dataset.py for an example where the original dataset is given as videos.
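The dataset-class pattern can be sketched roughly as follows. The class, attribute, and method names below are hypothetical and only illustrate the idea of pairing preprocessed TFRecord shards with per-dataset metadata; see kth_dataset.py for the real interface:

```python
import glob
import os


class MyVideoDataset:
    """Hypothetical sketch of a per-dataset class: it points at the
    preprocessed TFRecord files and records basic video metadata."""

    def __init__(self, input_dir, mode="train"):
        self.input_dir = input_dir
        self.mode = mode  # "train", "val", or "test"
        # Each dataset declares its own frame geometry and clip length
        # (the values here are placeholders, not a real dataset's).
        self.frame_shape = (64, 64, 3)
        self.sequence_length = 20

    def tfrecord_filenames(self):
        # Preprocessing is assumed to have written shards like
        # <input_dir>/train/part-0000.tfrecord
        pattern = os.path.join(self.input_dir, self.mode, "*.tfrecord")
        return sorted(glob.glob(pattern))
```

The repository's actual dataset classes additionally define how the TFRecord features are parsed back into frame tensors; this sketch only shows the shape of the interface.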

Note: the bair dataset is used for both the action-free and action-conditioned experiments. Set the hyperparameter use_state=True to use the action-conditioned version of the dataset.
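For instance, an action-conditioned hparams file would set the flag in JSON form; the surrounding values below are illustrative, not the exact ones shipped in hparams/:

```json
{
  "use_state": true,
  "sequence_length": 30,
  "context_frames": 2
}
```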

Models

The following are ablations of our model:

See pretrained_models/download_model.sh for a complete list of available pre-trained models.

Model and Training Hyperparameters

The implementation is designed such that each video prediction model defines its architecture and training procedure, and includes reasonable hyperparameters as defaults. Still, a few of the hyperparameters should be overridden for each variant of dataset and model. The hyperparameters used in our experiments are provided in hparams as JSON files, and they can be passed to the training script with the --model_hparams_dict flag.
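The override mechanism amounts to merging a JSON dict on top of the model's defaults. A minimal sketch of that pattern in plain Python (the default values and override keys here are assumptions for illustration, not the repository's actual hyperparameters):

```python
import json


def merge_hparams(defaults, overrides):
    """Return a copy of `defaults` with entries replaced by `overrides`.

    Unknown keys are rejected so that a typo in the JSON file fails
    loudly instead of being silently ignored.
    """
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise ValueError("unknown hyperparameters: %s" % sorted(unknown))
    merged = dict(defaults)
    merged.update(overrides)
    return merged


# Hypothetical defaults that a model class might declare.
defaults = {"lr": 0.001, "sequence_length": 12, "use_state": False}

# In the repository these overrides come from the JSON file passed via
# --model_hparams_dict; here we parse an inline string instead.
overrides = json.loads('{"sequence_length": 30, "use_state": true}')

hparams = merge_hparams(defaults, overrides)
```

Rejecting unknown keys is a deliberate design choice: with many similar JSON files per dataset/model variant, a silently ignored misspelled key is much harder to debug than an immediate error.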

Citation

If you find this useful for your research, please use the following citation.

@article{lee2018savp,
  title={Stochastic Adversarial Video Prediction},
  author={Alex X. Lee and Richard Zhang and Frederik Ebert and Pieter Abbeel and Chelsea Finn and Sergey Levine},
  journal={arXiv preprint arXiv:1804.01523},
  year={2018}
}