rotem-shalev / Ham2Pose

Official implementation for "Ham2Pose: Animating Sign Language Notation into Pose Sequences" [CVPR 2023]
https://rotem-shalev.github.io/ham-to-pose

Dataset arrangements #1

Closed ZhengdiYu closed 1 year ago

ZhengdiYu commented 1 year ago

Hi,

Thank you for your amazing work!

I would like to know more about the dataset layout you used in the paper. The paper mentions 4 datasets, but I only found 2 in the code.

Also, will there be a guideline on how to organize the dataset for training your method? (e.g., how to download the data, the expected data format, and the expected paths)

Many thanks,

rotem-shalev commented 1 year ago

Hi Zhengdi, I used 3 datasets that together cover 4 languages:

  1. The DGS Corpus
  2. Dicta-Sign
  3. The corpus-based dictionary of Polish Sign Language

The HamNoSys notations and links to the original videos are in data.json, and the extracted pose estimations are in keypoints.zip, so to train over this data all you need to do is run train.py.

Note that the trained model is also available under model.ckpt.
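For anyone else setting this up, the reply above amounts to roughly the following commands. This is only a sketch: the file names (data.json, keypoints.zip, train.py, model.ckpt) come from the reply, but the `keypoints/` extraction directory is an assumption — check the repo's code or config for the path it actually expects.

```shell
# Sketch of the training setup described above (paths are assumptions).
# data.json      -> HamNoSys notations + links to the original videos
# keypoints.zip  -> extracted pose estimations
# model.ckpt     -> pretrained model checkpoint (optional, for inference)

unzip keypoints.zip -d keypoints/   # extraction target dir is an assumption
python train.py                     # train over the provided data
```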

ZhengdiYu commented 1 year ago

Thank you for your reply! This is really helpful.