fpv-iplab / rulstm

Code for the Paper: Antonino Furnari and Giovanni Maria Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision, 2019.
http://iplab.dmi.unict.it/rulstm

What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention

See the quickstart here 👉 Open In Colab

This repository hosts the code related to the following papers:

Antonino Furnari and Giovanni Maria Farinella, Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). 2020. Download

Antonino Furnari and Giovanni Maria Farinella, What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision, 2019. Download

Please also see the project web page at http://iplab.dmi.unict.it/rulstm.

If you use the code/models hosted in this repository, please cite the following papers:

@article{furnari2020rulstm,
  author = {Antonino Furnari and Giovanni Maria Farinella},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)},
  title = {Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video},
  year = {2020}
}
@inproceedings{furnari2019rulstm,
  title = {What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention},
  author = {Antonino Furnari and Giovanni Maria Farinella},
  year = {2019},
  booktitle = {International Conference on Computer Vision (ICCV)}
}

Updates:

Overview

This repository provides the following components:

Please refer to the paper for more technical details. The following sections document the released material.

RU-LSTM Implementation and main training/validation/test program

The implementation and the main training/validation/test program can be found in the RULSTM directory. Before training, you need to download the pre-extracted features from our website. To save space and bandwidth, we provide features extracted only on the subset of frames used for the experiments (frames were sampled at about 4fps - please see the paper). These features are sufficient to train/validate/test the methods on the whole EPIC-KITCHENS-55 dataset following the settings reported in the paper.

Requirements

To run the code, you will need a Python3 interpreter and some libraries (including PyTorch).

Anaconda

An Anaconda environment file with a minimal set of requirements is provided in environment.yml. If you are using Anaconda, you can create a suitable environment with:

conda env create -f environment.yml

To activate the environment, type:

conda activate rulstm

Pip

If you are not using Anaconda, we provide a list of libraries in requirements.txt. You can install these libraries with:

pip install -r requirements.txt

Dataset, training/validation splits, and features

We provide CSVs for training, validation, and testing on EPIC-KITCHENS-55 in the data/ek55 directory. A brief description of each CSV follows:

Training and validation CSVs report the following columns:

The test CSVs do not report the last three columns since test annotations are not public. These CSVs are provided to allow producing predictions in JSON format to be submitted to the leaderboard.

Please note that time-stamps are reported in terms of frame numbers in the CSVs, assuming a fixed framerate of 30fps. Since the original videos were collected at different framerates, we first converted all videos to 30fps using ffmpeg.
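As an illustration of this convention, converting a timestamp to a frame number at the assumed 30fps can be sketched as follows (the `timestamp_to_frame` helper is hypothetical, not part of this repository):

```python
def timestamp_to_frame(timestamp, fps=30):
    """Convert an 'HH:MM:SS.ss' timestamp to a frame number at the given fps."""
    hh, mm, ss = timestamp.split(':')
    seconds = int(hh) * 3600 + int(mm) * 60 + float(ss)
    return int(round(seconds * fps))

# e.g. timestamp_to_frame("00:00:02.50") -> 75
```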

We provide pre-extracted features. The features are stored in LMDB datasets. To download them, run the following commands:

Alternatively, you can download features extracted from each frame by using the script:

Please note that this download is significantly heavier and that it is not required to run the training with default parameters on EPIC-KITCHENS-55.

This should populate three directories data/ek{55|100}/rgb, data/ek{55|100}/flow, data/ek{55|100}/obj with the LMDB datasets.

Training

Models can be trained using the main.py program. For instance, to train the RGB branch for the action anticipation task, use the following commands:

EPIC-KITCHENS-55

EPIC-KITCHENS-100

This will first pre-train using sequence completion, then fine-tune to the main anticipation task. All models will be stored in the models/ek{55|100} directory.
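As a hedged sketch of the invocation shape (modeled on the test command shown under Pretrained Models below; the train subcommand and its flags are assumptions here — run python main.py -h to confirm the exact interface):

```shell
python main.py train data/ek55 models/ek55 --modality rgb --task anticipation
```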

Optionally, a --visdom flag can be passed to the training program in order to enable logging using visdom. To use this feature, it is necessary to install visdom with:

pip install visdom

And run it with:

python -m visdom.server

Similar commands can be used to train all models. The following scripts contain all commands required to train the models for egocentric action anticipation and early action recognition:

Validation

The anticipation models can be validated using the following commands:

Action Anticipation

EPIC-KITCHENS-55
EPIC-KITCHENS-100

These instructions will evaluate the models using the official measures of the EPIC-KITCHENS-100 dataset for the action anticipation challenge.
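The official EPIC-KITCHENS measures are based on top-5 performance. As a minimal illustration of the idea (not the official evaluation code — the challenge uses class-aware mean top-5 recall, computed by the official evaluation scripts), a plain top-5 accuracy can be sketched as:

```python
import numpy as np

def top5_accuracy(scores, labels):
    """Fraction of samples whose true label is among the 5 highest-scoring classes.

    scores: (n, num_classes) array of prediction scores.
    labels: (n,) array of ground-truth class indices.
    """
    top5 = np.argsort(scores, axis=1)[:, -5:]  # indices of the 5 largest scores per row
    hits = [label in row for label, row in zip(labels, top5)]
    return float(np.mean(hits))
```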

Validation Jsons

You can produce validation jsons as follows:

Early Action Recognition

Similarly, for early action recognition:

EPIC-KITCHENS-55

Test

The main.py program also allows running the models on the EPIC-KITCHENS-55 and EPIC-KITCHENS-100 test sets to produce JSONs to be sent to the leaderboard (see http://epic-kitchens.github.io/). To test models, you can use the following commands:

EPIC-KITCHENS-55

EPIC-KITCHENS-100

Pretrained Models

EPIC-KITCHENS-55

We provide the official checkpoints used to report the results on EPIC-KITCHENS-55 in our ICCV paper. These can be downloaded using the script:

./script/download_models_ek55.sh

The models will be downloaded in models/ek55. You can test the model and obtain the results reported in the paper using the same main.py program. For instance:

python main.py test data/ek55 models/ek55 --modality fusion --task anticipation --json_directory jsons

EPIC-KITCHENS-100

We provide the checkpoints used to report the results in the EPIC-KITCHENS-100 paper (https://arxiv.org/abs/2006.13256). These can be downloaded using the script:

./script/download_models_ek100.sh

The models will be downloaded in models/ek100. You can produce the validation and test jsons replicating the results of the paper as follows:

TSN models

Can be downloaded from the following URLs:

EPIC-KITCHENS-55

EPIC-KITCHENS-100

Faster-RCNN Model Trained on EPIC-KITCHENS-55

We release the Faster-RCNN object detector trained on EPIC-KITCHENS-55 that we used for our experiments. The detector has been trained using the detectron library. The yaml configuration file used to train the model is available in the FasterRCNN directory of this repository. The weights can be downloaded from this link.

Usage

Make sure the detectron library is installed and available in the system path. A good idea might be to use a docker container. Please refer to https://github.com/facebookresearch/Detectron/blob/master/INSTALL.md for more details.

Sample usage:

A new file path/to/video.mp4_detections.npy will be created. The file will contain a list of arrays reporting the coordinates of the objects detected in each frame of the video. Specifically, the detections of a given frame will be contained in an array of shape N x 6, where:

Feature Extraction

A few example scripts showing how we performed feature extraction from video can be found in the FEATEXT directory.

To extract features using the TSN models, it is necessary to install the pretrainedmodels package through pip install pretrainedmodels.

To run the examples follow these steps:

EGTEA Gaze+ Pre-Extracted Features

We provide the EGTEA Gaze+ features used for the experiments (see paper for the details) at https://iplab.dmi.unict.it/sharing/rulstm/features/egtea.zip. The features have been extracted using three different TSN models trained following the official splits proposed by the authors of EGTEA Gaze+ (see http://cbs.ic.gatech.edu/fpv/). The annotations, formatted to be directly usable with this repository, can be found in RULSTM/data/egtea.

Note: a previous version of the zip file contained the following LMDB databases:

The first two databases were included by mistake and should be ignored; the remaining six databases should be used for the experiments when the standard evaluation protocol based on three splits is adopted. The following paragraph explains in detail how they were created:

An updated version of the zip file including only the correct databases is available at https://iplab.dmi.unict.it/sharing/rulstm/features/egtea.zip.

Object detections on EPIC-KITCHENS-100

We provide object detections obtained on each frame of EPIC-KITCHENS-100. The detections have been obtained by running the Faster RCNN model trained on EPIC-KITCHENS-55 described above and included in this repository. You can download a zip file containing all detections through this link: https://iplab.dmi.unict.it/sharing/rulstm/detected_objects.zip.

Note: these detections are a superset of the ones used for the original experiments on EPIC-KITCHENS-55. If you are experimenting with EK-55, you can simply discard the extra videos not belonging to EK-55.

The zip file contains a npy file for each video in EPIC-KITCHENS-100. For example:

P01_01.MP4_detections.npy
P01_02.MP4_detections.npy
P01_03.MP4_detections.npy
P01_04.MP4_detections.npy
P01_05.MP4_detections.npy
P01_06.MP4_detections.npy
...

Each file contains all object detections obtained in the video referenced in the filename. You can load these npy files as in this example code:

import numpy as np
data = np.load('P04_101.MP4_detections.npy', allow_pickle=True, encoding='latin1')

data will be a 1-dimensional numpy ndarray containing n entries, where n is the number of frames in the video. The i-th entry is an array of shape m x 6, where m is the number of objects detected in frame i. The six columns contain respectively:

The following example code separates class ids, box coordinates, and confidence scores for the detections of a given frame (data itself is a 1-dimensional array of per-frame arrays, so it must be indexed by frame first):

frame_detections = data[frame_index]         # array of shape m x 6 for one frame
object_classes = frame_detections[:, 0] - 1  # class ids (stored 1-indexed)
object_boxes = frame_detections[:, 1:5]      # bounding box coordinates
detection_scores = frame_detections[:, -1]   # confidence scores
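In practice you will often want to keep only confident detections. A minimal sketch under the m x 6 layout described above (the `filter_detections` helper and the 0.5 threshold are illustrative, not part of the repository):

```python
import numpy as np

def filter_detections(frame_detections, min_score=0.5):
    """Keep only detections whose confidence exceeds min_score.

    frame_detections: array of shape (m, 6) -- class id, 4 box coordinates, score.
    Returns the filtered (k, 6) array, possibly empty.
    """
    frame_detections = np.asarray(frame_detections)
    if frame_detections.size == 0:
        return frame_detections.reshape(0, 6)
    return frame_detections[frame_detections[:, -1] >= min_score]
```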

Related Works