Maclory / Deep-Iterative-Collaboration

Pytorch implementation of Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation (CVPR 2020)
MIT License

Creating Annotations for Training on Other Datasets #10

Closed coloneldbugger closed 4 years ago

coloneldbugger commented 4 years ago

I am attempting to train a new model using higher-resolution HR and LR datasets; CelebA-HQ and FFHQ are both at 1024×1024 resolution. I am wondering what type of annotation is expected for these datasets, and whether a custom annotation was used for the CelebA set provided here.

I notice the CelebA homepage mentions 5 landmarks and 40 binary attributes in its annotation, but this repo appears to use 68-point dlib landmarks. Does this code already include something I can leverage to detect and generate the expected landmark annotations for a new image set? I see there are some pretrained detection models, and I am wondering whether I should use those, or use a tool like OpenFace or ImgLab to generate 68-point dlib landmark annotations in pkl format.

I hope these aren't stupid questions. I'd greatly appreciate any tips you can give on preparing a custom dataset for training.
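For reference, one plausible way to store 68-point landmark annotations in a pkl file is a dict mapping image filenames to `(68, 2)` coordinate arrays. This layout is an assumption for illustration only; the actual format expected by this repo should be checked against its dataloader code.

```python
import pickle
import numpy as np

def save_landmark_pkl(annotations, path):
    """Save a {filename: (68, 2) float array} dict to a pkl file.

    `annotations` maps image names to 68 (x, y) landmark coordinates.
    NOTE: the dict-of-arrays layout is a hypothetical convention,
    not confirmed by this repo's code.
    """
    with open(path, "wb") as f:
        pickle.dump(annotations, f)

def load_landmark_pkl(path):
    """Load the landmark dict back from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

A quick round-trip (`save_landmark_pkl({...}, "landmarks.pkl")` followed by `load_landmark_pkl`) is an easy sanity check before pointing the training script at the file.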

Steve-Tod commented 4 years ago

Hi, thanks for your interest. We use OpenFace to generate 68-point landmarks for the CelebA faces, and use the official 5-landmark annotation to check the results (we only keep 68-point annotations whose errors are smaller than a threshold).