This is the official implementation of "Semi-Supervised Wide-Angle Portraits Correction by Multi-Scale Transformer" (CVPR 2022).
We propose a semi-supervised network for wide-angle portraits correction. Wide-angle images often suffer from skew and distortion induced by perspective projection, which is especially noticeable in face regions. Previous deep-learning-based approaches require ground-truth correction flow maps as training guidance. However, such labels are expensive, as they can only be obtained manually. In this work, we design a semi-supervised scheme and build a high-quality unlabeled dataset with rich scenarios, allowing us to use labeled and unlabeled data simultaneously to improve performance. Specifically, our semi-supervised scheme takes advantage of a consistency mechanism, with several novel components such as direction and range consistency (DRC) and regression consistency (RC). Furthermore, unlike existing methods, we propose the Multi-Scale Swin-Unet (MS-Unet) built on the multi-scale Swin Transformer block (MSTB), which can learn short-range and long-range information simultaneously to avoid artifacts. Extensive experiments demonstrate that the proposed method is superior to state-of-the-art methods and other representative baselines.
The pipeline of the semi-supervised wide-angle portraits correction framework with the surrogate task (segmentation).
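For intuition, here is a minimal, hypothetical PyTorch-style sketch of one consistency-regularized training step: a supervised flow loss on labeled images combined with an agreement term between predictions on two views of an unlabeled image. This is only an illustration of the general semi-supervised idea, not the released MegDL code; the model, the flip-based augmentation, and the loss weight `lam` are placeholders standing in for the paper's actual DRC and RC terms.

```python
# Illustrative sketch only (assumed names/shapes); the official release uses MegDL
# and the DRC/RC consistency terms described in the paper.
import torch
import torch.nn.functional as F

def train_step(model, labeled_img, gt_flow, unlabeled_img, optimizer, lam=0.1):
    """One optimization step: supervised flow regression on labeled data
    plus a simple prediction-consistency loss on unlabeled data."""
    model.train()

    # Supervised branch: regress the correction flow map on labeled images.
    pred_flow = model(labeled_img)              # assumed output shape (B, 2, H, W)
    sup_loss = F.l1_loss(pred_flow, gt_flow)

    # Unsupervised branch: predictions on two views of the same unlabeled image
    # should agree (a toy stand-in for the paper's consistency mechanism).
    view_a = unlabeled_img
    view_b = torch.flip(unlabeled_img, dims=[-1])    # horizontal flip as augmentation
    flow_a = model(view_a)
    flow_b = torch.flip(model(view_b), dims=[-1])
    # Undo the flip on the horizontal flow component so both views are comparable.
    flow_b = torch.cat([-flow_b[:, :1], flow_b[:, 1:]], dim=1)
    cons_loss = F.l1_loss(flow_a, flow_b)

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```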
In this repository, we will release the unlabeled dataset and the MegDL implementation of our paper.
All code has been tested on Linux.
If you find this work helpful, please cite:
@inproceedings{zhu2022semi,
  title={Semi-Supervised Wide-Angle Portraits Correction by Multi-Scale Transformer},
  author={Zhu, Fushun and Zhao, Shan and Wang, Peng and Wang, Hao and Yan, Hua and Liu, Shuaicheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19689--19698},
  year={2022}
}