tud-amr / social_vrnn

GNU General Public License v3.0

Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians

This is the code associated with the following publications:

Conference Version: "Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians", published in CoRL 2020. Link to Paper Link to Video

This repository also contains the scripts to train and evaluate quantitatively and qualitatively the proposed network architecture.


Setup

These instructions were only tested on Ubuntu 16 with TensorFlow 1.15.
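As a quick sanity check of the environment, a minimal Python snippet (assuming a standard TensorFlow 1.15 install, e.g. via `pip install tensorflow-gpu==1.15`) can confirm the version before training:

```python
# Minimal environment check; assumes TensorFlow 1.15 is already installed.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)         # expected: 1.15.x
print("GPU available:", tf.test.is_gpu_available())  # False is fine for CPU-only runs
```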

In the trained_models folder you can find all the trained models, and inside each model folder the qualitative results in video format. The quantitative results are saved inside the src folder in a CSV file named after the model, e.g. <Model name>.csv.
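As an illustration, such a per-model CSV can be inspected with pandas. This is only a sketch: the file name and the column contents depend on the evaluation script, and nothing below is part of the repo itself.

```python
# Hypothetical sketch: summarize the quantitative results CSV of one model.
# The file follows the repo convention <Model name>.csv inside src/;
# the exact columns depend on the evaluation script and are not assumed here.
import pandas as pd

results = pd.read_csv("src/SocialVRNN.csv")  # replace with your model's CSV
print(results.head())
print(results.describe())                    # e.g. mean/std of the error metrics
```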

Model

Code Description

The code is structured as follows:

Please note that the models were trained using the original implementation of Social-Ways. The performance results presented for a smaller number of samples were obtained using the test function provided in this repository. The qualitative comparison results use the same inference scheme as the original repo; here we only provide the plot functions to visualize the predictions of the Social-Ways model.
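For reference, a minimal matplotlib sketch of how multi-modal trajectory predictions can be overlaid on an observed track. The array shapes (n_modes, horizon, 2) and the random data are assumptions for illustration only, not the repo's plotting code:

```python
# Hypothetical visualization sketch: overlay several predicted trajectory modes
# on the observed track of one pedestrian. Shapes and data are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

observed = np.cumsum(np.random.randn(8, 2) * 0.1, axis=0)       # (T_obs, 2) past positions
predictions = observed[-1] + np.cumsum(
    np.random.randn(3, 12, 2) * 0.1, axis=1)                    # (n_modes, T_pred, 2)

plt.plot(observed[:, 0], observed[:, 1], "k-o", label="observed")
for k, mode in enumerate(predictions):
    plt.plot(mode[:, 0], mode[:, 1], "--", label=f"mode {k}")
plt.legend()
plt.axis("equal")
plt.show()
```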

Acknowledgements

We would like to thank Mark Pfeiffer, who provided the implementation on which this code was built: "A data-driven model for interaction-aware pedestrian motion prediction in object cluttered environments" Link to Paper.

If you find this code useful, please consider citing:

@inproceedings{,
  author = {},
  booktitle = {Conference on Robot Learning},
  date-modified = {2020-07-18 06:18:08 -0400},
  month = {July},
  title = {Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians},
  year = {2020},
  url = {-},
  bdsk-url-1 = {-}
}