PyTorch Implementation of Modeling 3D Infant Kinetics Using Adaptive Graph Convolutional Networks.
module load pytorch/1.13
pip install -Ur requirements.txt
. ./env.sh
docker run -v $(pwd):/work/infant-aagcn -w /work/infant-aagcn --user $(id -u):$(id -g) --gpus all --shm-size 16g -it infant-aagcn
The data (two zip archives, 4.8 GB) are available upon request. Preprocessing expects .csv files with 3D joint coordinates over time; a hedged loading sketch follows the preprocessing commands below.
unzip.sh
preprocess.sh
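As a rough illustration of the expected input, the sketch below loads one such .csv into a (frames × joints × 3) array. The per-joint column names (x_0, y_0, z_0, …) are an assumption for illustration, not the repository's actual format.

```python
# Hedged sketch: load one recording's .csv of 3D joint coordinates into a
# (T, J, 3) array. The x_{j}/y_{j}/z_{j} column naming is assumed and may
# differ from the actual preprocessing output.
import numpy as np
import pandas as pd

def load_joint_stream(csv_path, num_joints):
    df = pd.read_csv(csv_path)
    coords = np.stack(
        [df[[f"x_{j}", f"y_{j}", f"z_{j}"]].to_numpy() for j in range(num_joints)],
        axis=1,
    )  # shape: (T, J, 3)
    return coords
```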
Sourcing the env script puts bin/, where all executables reside, on the PATH. The modules folder contains the models, the dataloader, and related utilities.
train.py \
--data-dir data/streams/combined \
--output-dir results/aagcn \
--age-file metadata/combined.csv \
--learning-rate 0.01 \
--batch-size 32 \
--num-workers 16 \
--streams j \
--k-folds 10 \
--epochs 20 \
--adaptive \
--attention
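The --streams flag selects the input streams to train on (j, i.e. joints, in the command above). As a hedged illustration of the multi-stream idea, secondary streams can be derived from the joint stream, e.g. a bone stream from differences between connected joints and a velocity stream from frame-to-frame differences. The edge list below is made up for illustration; the real skeleton connectivity and stream definitions live in the repository's modules.

```python
# Hedged sketch: derive bone and velocity streams from a joint stream of
# shape (T, J, 3). The (child, parent) edge list is illustrative only.
import numpy as np

EDGES = [(0, 1), (1, 2), (2, 3)]  # assumed connectivity, not the real skeleton

def bone_stream(joints):
    bones = np.zeros_like(joints)
    for child, parent in EDGES:
        bones[:, child] = joints[:, child] - joints[:, parent]
    return bones

def velocity_stream(joints):
    vel = np.zeros_like(joints)
    vel[1:] = joints[1:] - joints[:-1]
    return vel
```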
On Slurm, run sbatch run/submit.sh, or sbatch run/experiment.sh for the full comparison.
Training creates a results folder containing all runs. The notebook folder contains separate notebooks for the ML baseline, AAGCN inference, and metrics calculation (a hedged metrics sketch follows).
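For orientation, here is a minimal sketch of the kind of regression metric such a notebook might compute, assuming a predictions file with true_age and pred_age columns; the actual file layout and metrics are defined in the notebooks themselves.

```python
# Hedged sketch: simple regression metrics from a hypothetical predictions
# .csv with columns true_age and pred_age. Path and columns are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("results/aagcn/predictions.csv")
err = df["pred_age"] - df["true_age"]
mae = np.abs(err).mean()
rmse = np.sqrt((err ** 2).mean())
print(f"MAE: {mae:.2f}, RMSE: {rmse:.2f}")
```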
An example of how the models can be called to make predictions is available in the submit script run/predict.sh; a rough sketch of such a call follows.
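The sketch below assumes a checkpoint saved with torch.save and an input tensor shaped (batch, channels, frames, joints, persons), as is typical for AGCN-style models; the checkpoint path, model construction, and shapes are assumptions, so see run/predict.sh for the actual invocation.

```python
# Hedged sketch: load a trained model and predict on one sample.
# Checkpoint path, model object, and input shape are assumptions.
import torch

model = torch.load("results/aagcn/model.pt", map_location="cpu")  # assumed path
model.eval()

x = torch.randn(1, 3, 300, 20, 1)  # (batch, channels, frames, joints, persons), assumed
with torch.no_grad():
    pred = model(x)
print(pred)
```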
If you use this code, please cite:
@article{holmberg2024modeling,
title={Modeling 3D Infant Kinetics Using Adaptive Graph Convolutional Networks},
author={Daniel Holmberg and Manu Airaksinen and Viviana Marchi and Andrea Guzzetta and Anna Kivi and Leena Haataja and Sampsa Vanhatalo and Teemu Roos},
journal={arXiv preprint arXiv:2402.14400},
year={2024}
}