This project suggests a modification to MSTCN++ aimed at improving performance on the gesture recognition task over data of physicians performing practice surgical operations.
The file which does all of the heavy lifting is main.py. It is responsible for running the baseline and modified architectures and performing gesture recognition over the data videos. It also reports loss and accuracy over the train and validation sets and generates graphs in ClearML.
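For reference, a minimal sketch of how per-epoch metrics might be reported to ClearML to produce such graphs (the project and task names below are hypothetical, not the ones the script actually uses):

```python
# Sketch of per-epoch ClearML reporting; project/task names are hypothetical.
from clearml import Task

task = Task.init(project_name="APAS-gestures", task_name="mstcn-baseline")
logger = task.get_logger()

def report_epoch(epoch, train_loss, train_acc, val_loss, val_acc):
    # Each call adds one point to the corresponding ClearML scalar graph.
    logger.report_scalar("loss", "train", value=train_loss, iteration=epoch)
    logger.report_scalar("loss", "validation", value=val_loss, iteration=epoch)
    logger.report_scalar("accuracy", "train", value=train_acc, iteration=epoch)
    logger.report_scalar("accuracy", "validation", value=val_acc, iteration=epoch)
```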
The script assumes the following paths:
/datashare/APAS/transcriptions_gestures/
where each video has a corresponding text file with the same name as the video, holding the ground truth labels in the following frame format:
0 524 G0
525 662 G1
663 808 G2
809 898 G3
899 970 G4
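A minimal sketch of expanding such a transcription file into per-frame labels (the function name is ours; the assumption that frame ranges are inclusive follows from the consecutive ranges above, e.g. 0-524 followed by 525-662):

```python
# Sketch: expand "start end label" transcription lines into one label per frame.
def load_frame_labels(path):
    labels = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            start, end, gesture = parts
            # Frame ranges appear to be inclusive (e.g. 0-524, then 525-662).
            labels.extend([gesture] * (int(end) - int(start) + 1))
    return labels

# e.g. load_frame_labels("/datashare/APAS/transcriptions_gestures/<video>.txt")
```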
The video features are provided at the absolute path /datashare/APAS/features/, where the directory tree is the following:
root
|->fold0
|->fold1
|->fold2
|->fold3
|->fold4
and each fold directory contains the features of all the videos.
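A sketch of loading one video's features from a given fold. The on-disk format is not stated above; this assumes per-video NumPy .npy files, so adjust the extension and loader to the actual format:

```python
# Sketch: load the features of one video from a given fold.
# Assumes per-video .npy arrays, which is an assumption, not a stated fact.
import os
import numpy as np

FEATURES_ROOT = "/datashare/APAS/features"

def load_features(fold, video_name):
    path = os.path.join(FEATURES_ROOT, f"fold{fold}", video_name + ".npy")
    return np.load(path)
```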
The cross-validation splits are provided at the absolute path /datashare/APAS/folds/, where the directory tree is the following:
root
|->valid 0
|->valid 1
|->valid 2
|->valid 3
|->valid 4
|->test 0
|->test 1
|->test 2
|->test 3
|->test 4
Each text file contains the list of videos relevant to its fold.
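A minimal sketch of reading one fold's split files. The exact file names (including the space and a .txt extension) are assumed from the tree above:

```python
# Sketch: read the validation and test video lists for one fold.
# File names like "valid 0.txt" / "test 0.txt" are assumed from the tree above.
import os

FOLDS_ROOT = "/datashare/APAS/folds"

def read_split(fold):
    def read_list(prefix):
        path = os.path.join(FOLDS_ROOT, f"{prefix} {fold}.txt")
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]
    return read_list("valid"), read_list("test")

# valid_videos, test_videos = read_split(0)
```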
To reproduce the results, use the following commands:
python main.py --action baseline
python main.py --action train
python main.py --action train_tradeoff
All of the above commands also perform prediction over the test data.
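For context, one way the --action flag might be dispatched in main.py (a sketch only; main.py's actual internals are not shown here):

```python
# Sketch of the --action CLI; only the three action names are taken from above.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="MSTCN++ gesture recognition")
    parser.add_argument("--action",
                        choices=["baseline", "train", "train_tradeoff"],
                        required=True,
                        help="which experiment to run (prediction over the "
                             "test data is included in every action)")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"running action: {args.action}")
```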
This script assumes the same directory structure as main.py and outputs all of the metrics described in the report, such as Accuracy, F1, and Edit Score, over the test data (it assumes the predictions were already generated).
Additionally, this script produces segmentation plots and metric graphs.
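As a reference for one of the reported metrics, here is a sketch of the segmental edit score as commonly defined in the action-segmentation literature: the Levenshtein distance between the two sequences of segment labels (consecutive duplicates collapsed), normalized to [0, 100]. Whether the script computes it exactly this way is an assumption:

```python
# Sketch of the segmental edit score (common MS-TCN-style definition).
from itertools import groupby

def segments(frame_labels):
    # Collapse runs of identical frame labels into one segment label each.
    return [label for label, _ in groupby(frame_labels)]

def edit_score(pred_frames, gt_frames):
    p, g = segments(pred_frames), segments(gt_frames)
    m, n = len(p), len(g)
    # Standard Levenshtein dynamic program over the segment sequences.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    # Identical segmentations score 100; completely different ones score 0.
    return (1 - d[m][n] / max(m, n, 1)) * 100
```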
This script was used to annotate videos with the ground truth labels and their predicted labels.
This repository also includes 3 annotated videos (mp4 files); those videos were annotated using our final trained model.
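A minimal OpenCV sketch of the kind of per-frame overlay used for such annotation (the function, paths, and label lists are placeholders, not the script's actual code):

```python
# Sketch: overlay ground-truth and predicted labels on each frame of a video.
import cv2

def annotate(video_in, video_out, gt_labels, pred_labels):
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok or i >= len(gt_labels):
            break
        cv2.putText(frame, f"GT: {gt_labels[i]}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(frame, f"Pred: {pred_labels[i]}", (10, 70),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        out.write(frame)
        i += 1
    cap.release()
    out.release()
```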