This is the implementation of SANet (Skeleton-Aware Neural Sign Language Translation), based on PyTorch 1.6 and Python 3.8.
Chinese sign language dataset: CSL [1], which contains 25K labeled videos covering 100 Chinese sentences, filmed by 50 signers. (http://home.ustc.edu.cn/~pjh/openresources/cslr-dataset-2015/index.html)
German sign language dataset: RWTH-PHOENIX-Weather 2014T [2], which contains 8,257 weather-forecast samples from 9 signers. (https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/)
The most important part of SANet has already been made public. The whole project will not be released due to IP issues. Thank you for your understanding.
[1] Jie Huang, Wengang Zhou, Qilin Zhang, Houqiang Li, and Weiping Li. 2018. Video-based sign language recognition without temporal segmentation. In AAAI.
[2] N. C. Camgoz, S. Hadfield, O. Koller, H. Ney, and R. Bowden. 2018. Neural Sign Language Translation. In CVPR. 7784–7793. https://doi.org/10.1109/CVPR.2018.00812