
Joint Voxel and Coordinate Regression (JVCR) for 3D Facial Landmark Localization

This repository includes the PyTorch code of the JVCR method described in Adversarial Learning Semantic Volume for 2D/3D Face Shape Regression in the Wild (IEEE Transactions on Image Processing, 2019).
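
As the name suggests, JVCR first regresses a compact volumetric (voxel) representation of the face from the input image and then regresses the 3D landmark coordinates from that volume. As a rough illustration of the second stage, the snippet below decodes per-landmark coordinates from a stack of 3D heatmaps with a soft-argmax readout; this is a simplified stand-in for the paper's learned coordinate-regression subnetwork, and the function name and tensor shapes are assumptions rather than the repository's actual API.

import torch

def volumes_to_coords(volumes):
    # volumes: (B, K, D, H, W) per-landmark 3D heatmaps -> (B, K, 3) coordinates
    b, k, d, h, w = volumes.shape
    prob = volumes.reshape(b, k, -1).softmax(dim=-1).reshape(b, k, d, h, w)
    zs = torch.arange(d, dtype=prob.dtype)
    ys = torch.arange(h, dtype=prob.dtype)
    xs = torch.arange(w, dtype=prob.dtype)
    # Expected index along each axis (soft-argmax over the normalized volume)
    z = (prob.sum(dim=(3, 4)) * zs).sum(dim=-1)
    y = (prob.sum(dim=(2, 4)) * ys).sum(dim=-1)
    x = (prob.sum(dim=(2, 3)) * xs).sum(dim=-1)
    return torch.stack([x, y, z], dim=-1)

coords = volumes_to_coords(torch.randn(1, 68, 32, 32, 32))  # -> (1, 68, 3)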

Requirements

PyTorch and the other Python packages imported by the training and demo scripts

Usage

Clone the repository and install the dependencies mentioned above:

git clone https://github.com/HongwenZhang/JVCR-3Dlandmark.git
cd JVCR-3Dlandmark

Then, you can run the demo code or train a model from scratch.

Demo

  1. Download the pre-trained model (trained on 300W-LP) and put it into the checkpoint directory

  2. Run the demo code

python run_demo.py --verbose
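
As a quick sanity check that the pre-trained model was placed correctly, the checkpoint can also be inspected directly in Python. The filename below is an assumption; adjust it to match the file you downloaded.

import torch

ckpt = torch.load('checkpoint/model_best.pth.tar', map_location='cpu')  # assumed filename
# PyTorch training checkpoints are typically dicts holding the model weights
# under a key such as 'state_dict', plus metadata like the epoch number.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))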

Training

  1. Prepare the training and evaluation datasets
    • Download 300W-LP and AFLW2000-3D
    • Create soft links to the dataset directories
      ln -s /path/to/your/300W_LP data/300wLP/images
      ln -s /path/to/your/aflw2000 data/aflw2000/images
    • Download .json annotation files from here and put them into data/300wLP and data/aflw2000 respectively
  2. Run the training code
    python train.py --gpus 0 -j 4
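
For orientation, the sketch below condenses the adversarial learning idea from the paper into a toy training step: a regressor predicts a semantic volume from the image, a discriminator learns to tell ground-truth volumes from predicted ones, and the regressor is trained both to match the target volume and to fool the discriminator. The tiny module definitions, tensor shapes, and loss weights are illustrative assumptions and do not mirror the actual train.py implementation.

import torch
import torch.nn as nn

# Stand-in networks: the image-to-volume regressor and the volume discriminator.
regressor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 16, 3, padding=1))   # 16 depth slices as channels
discriminator = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

opt_g = torch.optim.Adam(regressor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

img = torch.randn(2, 3, 64, 64)          # dummy input images
gt_volume = torch.randn(2, 16, 64, 64)   # dummy ground-truth volumes

# Discriminator step: ground-truth volumes should score 1, predictions 0.
pred_volume = regressor(img).detach()
d_loss = bce(discriminator(gt_volume), torch.ones(2, 1)) + \
         bce(discriminator(pred_volume), torch.zeros(2, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Regressor step: match the ground-truth volume and fool the discriminator.
pred_volume = regressor(img)
g_loss = nn.functional.mse_loss(pred_volume, gt_volume) + \
         0.01 * bce(discriminator(pred_volume), torch.ones(2, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()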

Acknowledgment

The code is built upon PyTorch-Pose. Thanks to the original author.

Citation

If the code is helpful in your research, please cite the following paper.

@article{zhang2019adversarial,
  title={Adversarial Learning Semantic Volume for 2D/3D Face Shape Regression in the Wild},
  author={Zhang, Hongwen and Li, Qi and Sun, Zhenan},
  journal={IEEE Transactions on Image Processing},
  volume={28},
  number={9},
  pages={4526--4540},
  year={2019},
  publisher={IEEE}
}