SignDiff / Processed-Data

Preprocessed data of SignDiff: Learning Diffusion Models for American Sign Language Production
https://signdiff.github.io/

Processed-Data

This repository stores the preprocessed data for the paper:
SignDiff: Learning Diffusion Models for American Sign Language Production

Note: We are going to start a company, so the code will not be made public.

How2Sign for ASLP

After preprocessing the How2Sign dataset, we obtained the following condensed dataset:

It can be used to train ASL production models.
Note: Because we later processed more data, the dataset linked above is four times the size of the one in the paper; it is the result of processing the full How2Sign dataset.
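As a concrete example, a loader for such text-to-pose pairs could look like the minimal sketch below. It assumes the condensed data follows the parallel-file layout common in prior SLP work (one sentence per line in train.text, the matching space-separated joint coordinates per line in train.skels); the file names and the joints_per_frame default are assumptions, not the confirmed release format.

from pathlib import Path

import numpy as np

def load_split(data_dir, split="train", joints_per_frame=150):
    """Return (sentence, [T, joints_per_frame] pose array) pairs for one split."""
    # Hypothetical file names: adjust to the actual release layout.
    texts = Path(data_dir, f"{split}.text").read_text(encoding="utf-8").splitlines()
    skels = Path(data_dir, f"{split}.skels").read_text(encoding="utf-8").splitlines()
    assert len(texts) == len(skels), "parallel files must have equal line counts"
    pairs = []
    for sentence, line in zip(texts, skels):
        values = np.array(line.split(), dtype=np.float32)
        frames = values.reshape(-1, joints_per_frame)  # one row per video frame
        pairs.append((sentence, frames))
    return pairs

pairs = load_split("path/to/condensed_how2sign")  # hypothetical path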

Phoenix-14T for GSLP

After preprocessing the Phoenix-14T dataset, we obtained the following condensed dataset:

It can be used to train GSL production models.
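Before training, it is worth checking that the parallel files of each split are aligned. Below is a minimal sketch, assuming the release keeps the usual {split}.skels / {split}.gloss / {split}.text layout with one sample per line; the file extensions and split names are assumptions.

from pathlib import Path

def check_split(data_dir, split):
    """Verify that the parallel files of one split have matching line counts."""
    counts = {ext: sum(1 for _ in Path(data_dir, f"{split}.{ext}").open(encoding="utf-8"))
              for ext in ("skels", "gloss", "text")}
    if len(set(counts.values())) != 1:
        raise ValueError(f"{split}: misaligned parallel files: {counts}")
    return counts

for split in ("train", "dev", "test"):
    print(split, check_split("path/to/condensed_phoenix14t", split))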

How2Sign for SignDiff

After preprocessing the How2Sign dataset, we obtained the following condensed dataset:

It can be used to train the pose2video diffusion model for sign language (based on ControlNet).
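For ControlNet-style training, each pose frame is rendered into a conditioning image that gets paired with the matching real video frame. The sketch below shows one way such rendering can work; the bone list and the [0, 1] coordinate normalization are illustrative assumptions, not the exact preprocessing used in the paper.

import cv2
import numpy as np

# Hypothetical bone list: pairs of joint indices to connect with line segments.
BONES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

def pose_to_condition(frame_xy, size=512):
    """Render one pose frame ([num_joints, 2], coords in [0, 1]) onto a black canvas."""
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    pts = (frame_xy * (size - 1)).astype(int)
    for a, b in BONES:
        cv2.line(canvas, tuple(map(int, pts[a])), tuple(map(int, pts[b])), (0, 255, 0), 2)
    for x, y in pts:
        cv2.circle(canvas, (int(x), int(y)), 3, (255, 255, 255), -1)
    return canvas  # pair this image with the matching real frame for training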

How2Sign for Vid2Vid

After preprocessing the How2Sign dataset, we obtained the following condensed dataset:

It can be used to train the pose2video GAN model for sign language (based on Vid2Vid).
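Vid2Vid-style codebases generally expect paired sequences as A/B directories (conditioning images and real frames) with one subfolder per video. A sketch of arranging the data that way, assuming the pose renderings and extracted frames already exist as PNG files; the train_A/train_B names and the source layout are assumptions, so check the exact convention of the Vid2Vid variant you use.

import shutil
from pathlib import Path

def arrange_sequence(video_id, pose_dir, frame_dir, out_root="datasets/how2sign"):
    """Copy one video's pose images and real frames into the paired A/B layout."""
    for side, src in (("train_A", pose_dir), ("train_B", frame_dir)):
        dst = Path(out_root, side, video_id)
        dst.mkdir(parents=True, exist_ok=True)
        # Sorted copy keeps the frames in temporal order, which video GANs rely on.
        for i, img in enumerate(sorted(Path(src).glob("*.png"))):
            shutil.copy(img, dst / f"{i:05d}.png")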

Tool for Data

Our preprocessing tools, including the data-cleansing tool, are available at SignDiff/tool.
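As one hedged illustration of the kind of rule such a cleansing pass might apply (not necessarily what SignDiff/tool actually does), a sample could be dropped when its pose sequence is too short or contains too many undetected frames:

import numpy as np

def keep_sample(frames, min_frames=8, max_missing_ratio=0.3):
    """frames: [T, D] pose array; an all-zero row marks an undetected frame."""
    if len(frames) < min_frames:
        return False  # too short to be a usable clip
    missing = float(np.all(frames == 0, axis=1).mean())
    return missing <= max_missing_ratio  # thresholds are arbitrary examples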

Stay tuned. The data above should be sufficient for the time being.

Citation

@misc{fang2024signdiffdiffusionmodelsamerican,
      title={SignDiff: Diffusion Models for American Sign Language Production}, 
      author={Sen Fang and Chunyu Sui and Yanghao Zhou and Xuedong Zhang and Hongbin Zhong and Minyu Zhao and Yapeng Tian and Chen Chen},
      year={2024},
      eprint={2308.16082},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2308.16082}, 
}

@misc{fang2024signllm,
      title={SignLLM: Sign Languages Production Large Language Models}, 
      author={Sen Fang and Lei Wang and Ce Zheng and Yapeng Tian and Chen Chen},
      year={2024},
      eprint={2405.10718},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.10718}
}

Related Work