fabiozappo / SkeletonGroupActivityRecognition

Learning Group Activities from Skeletons without Individual Action Labels


This is an official implementation of the paper Learning Group Activities from Skeletons without Individual Action Labels [Link].

Requirements

Ubuntu >= 16.04
NVIDIA Container Toolkit

For instructions on getting started with the NVIDIA Container Toolkit, refer to the installation guide.
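
Once the toolkit is installed, you can sanity-check that Docker can see the GPU with NVIDIA's standard sample workload:

docker run --rm --gpus all ubuntu nvidia-smi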

Dataset and Pretrained Network

Arrange the Volleyball dataset videos and the pretrained P3D checkpoints so that the repository is laid out as follows:

SkeletonGroupActivityRecognition
|-- ...
|-- Dockerfile
|-- VDtracker.py
|-- Weights
|   |-- p3d_flow_199.checkpoint.pth.tar
|   `-- p3d_rgb_199.checkpoint.pth.tar
|-- extract_skeletons.py
|-- train.py
`-- volleyball_dataset
    `-- videos
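
By their naming, the .pth.tar files are standard PyTorch checkpoints. A minimal sketch for inspecting one before training (assuming PyTorch is installed and the file is a torch.save()'d dict; the exact keys are not documented in this README):

import torch

# Load the pretrained P3D RGB checkpoint on the CPU and peek at its contents.
# The key names ('state_dict', 'epoch', ...) are common PyTorch conventions,
# not guaranteed by this repository.
checkpoint = torch.load("Weights/p3d_rgb_199.checkpoint.pth.tar", map_location="cpu")
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
else:
    print(type(checkpoint))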

Docker container

Build the container: cd into the cloned repository and run:

docker build -t skeleton-group-activity-recognition:latest .

Run the container:

docker run \
  --rm -it \
  --gpus all \
  -v "$(pwd)/volleyball_dataset":/work/sk-gar/volleyball_dataset \
  skeleton-group-activity-recognition

The -v flag bind-mounts the local volleyball_dataset directory into the container, so the dataset is visible inside it and any generated outputs persist on the host.

Running the scripts

Person tracking & Skeleton extraction

python3 VDtracker.py
python3 extract_skeletons.py --no_display --save

After running these two scripts, two new directories, tracked_persons and tracked_skeletons, should appear under volleyball_dataset:

SkeletonGroupActivityRecognition
`-- volleyball_dataset
    |-- tracked_persons
    |-- tracked_skeletons
    `-- videos
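
A quick, hypothetical way to confirm that the two scripts produced output (the directory names come from the tree above; their internal layout is not specified here, so this just counts files recursively):

from pathlib import Path

# Count the files generated by VDtracker.py and extract_skeletons.py.
root = Path("volleyball_dataset")
for name in ("tracked_persons", "tracked_skeletons"):
    files = [p for p in (root / name).rglob("*") if p.is_file()]
    print(f"{name}: {len(files)} files")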

Group Activity Recognition
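
Judging from the repository layout, training is launched via train.py; a minimal invocation (an assumption, since flags and defaults are not documented here):

python3 train.py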

Citation

If you find this code useful in your research, please consider citing our paper:

@inproceedings{zappardinoLearningGroupActivities2021,
  title = {Learning {{Group Activities}} from {{Skeletons}} without {{Individual Action Labels}}},
  booktitle = {2020 25th {{International Conference}} on {{Pattern Recognition}} ({{ICPR}})},
  author = {Zappardino, Fabio and Uricchio, Tiberio and Seidenari, Lorenzo and del Bimbo, Alberto},
  date = {2021-01},
  pages = {10412--10417},
  issn = {1051-4651},
  doi = {10.1109/ICPR48806.2021.9413195},
  abstract = {To understand human behavior we must not just recognize individual actions but model possibly complex group activity and interactions. Hierarchical models obtain the best results in group activity recognition but require fine grained individual action annotations at the actor level. In this paper we show that using only skeletal data we can train a state-of-the art end-to-end system using only group activity labels at the sequence level. Our experiments show that models trained without individual action supervision perform poorly. On the other hand we show that pseudo-labels can be computed from any pre-trained feature extractor with comparable final performance. Finally our carefully designed lean pose only architecture shows highly competitive results versus more complex multimodal approaches even in the self-supervised variant.},
  eventtitle = {2020 25th {{International Conference}} on {{Pattern Recognition}} ({{ICPR}})},
  keywords = {Activity recognition,Annotations,Art,Computational modeling,Computer architecture,Data privacy,Feature extraction}
}