Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition

This repository provides the official PyTorch implementation of our paper:

Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition

Youngjoon Jang, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, Joon Son Chung, In So Kweon

BMVC 2022

[Paper] [Project Page]

This code provides two functionalities: (1) an algorithm that automatically and deterministically generates the proposed Scene-PHOENIX benchmark dataset from the scene databases LSUN and SUN397, and (2) a PyTorch implementation for loading the PHOENIX-2014 dataset, including the Scene-PHOENIX benchmark, for evaluation.

Abstract

Figure 1: (a) VAC trained on sign language videos with monochromatic backgrounds fails to attend to the signer in the video. (b) Both the Baseline (Res18 + LSTM) and VAC degrade severely when tested on our Scene-PHOENIX. In contrast, our framework still captures the signer's expressions and largely closes the gap between the test splits of the original PHOENIX-2014 and Scene-PHOENIX.

The goal of this work is background-robust continuous sign language recognition. Most existing Continuous Sign Language Recognition (CSLR) benchmarks are filmed in studios with a static monochromatic background. However, real-world signing is not confined to studios.

In order to analyze the robustness of CSLR models under background shifts, we first evaluate existing state-of-the-art CSLR models on diverse backgrounds. To synthesize sign videos with a variety of backgrounds, we propose a pipeline that automatically generates a benchmark dataset from existing CSLR benchmarks. Our newly constructed benchmark consists of diverse scenes that simulate real-world environments. We observe that even the most recent CSLR method fails to recognize glosses well on this new dataset with changed backgrounds.

In this regard, we also propose a simple yet effective training scheme including (1) background randomization and (2) feature disentanglement for CSLR models. The experimental results on our dataset demonstrate that our method generalizes well to other unseen background data with minimal additional training images.
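
To make the background randomization component concrete, the sketch below composites a training frame onto a randomly chosen scene image using a signer mask, e.g. one produced by an off-the-shelf human segmentation model. The function name, arguments, and the NumPy-based alpha blending are illustrative assumptions rather than the repository's actual training code.

    import random

    import numpy as np

    def randomize_background(frame: np.ndarray,
                             signer_mask: np.ndarray,
                             scene_images: list) -> np.ndarray:
        """Composite the signer onto a randomly chosen scene image.

        frame:        H x W x 3 uint8 video frame
        signer_mask:  H x W float array in [0, 1], 1.0 on the signer
        scene_images: pool of H x W x 3 uint8 scene crops (e.g. LSUN / SUN397)
        """
        background = random.choice(scene_images).astype(np.float32)
        alpha = signer_mask[..., None]  # broadcast the mask over channels
        blended = alpha * frame.astype(np.float32) + (1.0 - alpha) * background
        return blended.astype(np.uint8)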

Requirements
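
At minimum, the generation steps below assume the following are available:

  • PyTorch
  • The Human-Segmentation-PyTorch repository, into which the two scripts below are placed
  • The PHOENIX-2014 dataset
  • Scene images from the LSUN and SUN397 databases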

Step-by-step guide to generating Scene-PHOENIX

  1. Place bg_dataset.py and generate_scene_phoenix.py in the Human-Segmentation-PyTorch folder.

  2. Copy the attached lsun and SUN397 folders to your dataset path {DATA_PATH}.

    • The txt files in these folders specify the indices of the scene images used as backgrounds when synthesizing Scene-PHOENIX (see the pairing sketch after this list).
  3. Run generate_scene_phoenix.py:

    python generate_scene_phoenix.py --sign_root {PATH_TO_PHOENIX}
    • Note that the variable bg_root in generate_scene_phoenix.py must be set to your local paths to the LSUN and SUN397 data.
  4. The Scene-PHOENIX benchmark datasets are created in the same locations as the PHOENIX-2014 dataset.
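
The txt index files make the generation deterministic: each sign video is always paired with the same scene image. As a rough illustration, the sketch below reads one of the shipped txt files and maps each video to its fixed background; the function and argument names are hypothetical, and the actual logic lives in generate_scene_phoenix.py.

    from pathlib import Path

    def pair_videos_with_scenes(video_dirs: list[Path],
                                index_file: Path,
                                scene_paths: list[Path]) -> dict[Path, Path]:
        """Assign one fixed scene image to each sign video.

        index_file is one of the shipped txt files: line i holds the index
        of the scene image paired with the i-th video, so the benchmark
        comes out identical on every machine.
        """
        with open(index_file) as f:
            indices = [int(line.strip()) for line in f if line.strip()]
        assert len(indices) >= len(video_dirs), "not enough background indices"
        return {video: scene_paths[i] for video, i in zip(video_dirs, indices)}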

Loading PHOENIX-2014 including evaluation splits with synthesized backgrounds
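
As a rough illustration of evaluating on both test splits, the sketch below builds one DataLoader for the original PHOENIX-2014 test split and one for Scene-PHOENIX. PhoenixDataset and its background argument are hypothetical placeholders for whatever dataset class this repository actually defines; only the DataLoader usage is standard PyTorch.

    from torch.utils.data import DataLoader

    # PhoenixDataset is a hypothetical stand-in for the dataset class
    # provided by this repository.
    from dataset import PhoenixDataset

    def build_eval_loaders(phoenix_root: str, batch_size: int = 1):
        """Build loaders for the original test split and a Scene-PHOENIX split.

        Assumes the generated Scene-PHOENIX frames sit next to the original
        PHOENIX-2014 frames (step 4 above) and are selected via a background
        flag on the dataset class.
        """
        original = PhoenixDataset(root=phoenix_root, split="test",
                                  background="original")
        scene = PhoenixDataset(root=phoenix_root, split="test",
                               background="scene")  # e.g. LSUN or SUN397
        kwargs = dict(batch_size=batch_size, shuffle=False, num_workers=4)
        return DataLoader(original, **kwargs), DataLoader(scene, **kwargs)

Computing word error rate on both loaders then quantifies the background-robustness gap between the original and synthesized splits.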

Citation

If you find our work useful for your research, please cite it using the following BibTeX:

@inproceedings{jang2022signing,
  title = {Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition},
  author = {Jang, Youngjoon and Oh, Youngtaek and Cho, Jae Won and Kim, Dong-Jin and Chung, Joon Son and Kweon, In So},
  booktitle = {British Machine Vision Conference (BMVC)},
  year = {2022}
}