Official PyTorch Implementation
The accompanying videos can be found on YouTube. For more details, please refer to the paper.
Install the requirements:
pip install -r requirements.txt
To train MobileStyleGAN from scratch:
python train.py --cfg configs/mobile_stylegan_ffhq.json --gpus <n_gpus>
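For example, to train on a single GPU:
python train.py --cfg configs/mobile_stylegan_ffhq.json --gpus 1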
Our framework supports the StyleGAN2 checkpoint format from rosinality/stylegan2-pytorch. To convert your own StyleGAN2 checkpoint to our format:
python convert_rosinality_ckpt.py --ckpt <path_to_rosinality_stylegan2_ckpt> --ckpt-mnet <path_to_output_mapping_network_ckpt> --ckpt-snet <path_to_output_synthesis_network_ckpt> --cfg-path <path_to_output_config_json>
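For example (the checkpoint and output file names below are placeholders for illustration):
python convert_rosinality_ckpt.py --ckpt stylegan2-ffhq-config-f.pt --ckpt-mnet mnet.ckpt --ckpt-snet snet.ckpt --cfg-path config.json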
To check that your checkpoint was converted correctly, run the demo visualization with the teacher (StyleGAN2) generator:
python demo.py --cfg <path_to_output_config_json> --ckpt "" --generator teacher
To generate images with a trained model:
python generate.py --cfg configs/mobile_stylegan_ffhq.json --device cuda --ckpt <path_to_ckpt> --output-path <path_to_store_imgs> --batch-size <batch_size> --n-batches <n_batches>
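For example, to generate 50,000 images (a common sample size for FID evaluation) with the pretrained FFHQ checkpoint listed below; the output directory is a placeholder:
python generate.py --cfg configs/mobile_stylegan_ffhq.json --device cuda --ckpt mobilestylegan_ffhq.ckpt --output-path generated_imgs/ --batch-size 16 --n-batches 3125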
To evaluate the FID score, we use a modified version of the pytorch-fid library:
python evaluate_fid.py <path_to_ref_dataset> <path_to_generated_imgs>
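Under the hood, FID is the Fréchet distance between two Gaussians fitted to Inception features of the reference and generated image sets. A minimal sketch of that distance formula (not the modified pytorch-fid implementation itself, which also handles the feature extraction):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```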
Run the demo visualization using MobileStyleGAN:
python demo.py --cfg configs/mobile_stylegan_ffhq.json --ckpt <path_to_ckpt>
Run a visual comparison of StyleGAN2 vs. MobileStyleGAN:
python compare.py --cfg configs/mobile_stylegan_ffhq.json --ckpt <path_to_ckpt>
Export the trained model to ONNX:
python train.py --cfg configs/mobile_stylegan_ffhq.json --ckpt <path_to_ckpt> --export-model onnx --export-dir <output_dir>
Export the trained model to CoreML:
python train.py --cfg configs/mobile_stylegan_ffhq.json --ckpt <path_to_ckpt> --export-model coreml --export-dir <output_dir>
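After the ONNX export, you can sanity-check the networks with onnxruntime. This is a minimal sketch: the file names (mnet.onnx, snet.onnx), the 512-dimensional latent, and the single-tensor input/output layout are assumptions, so inspect your exported files to confirm.

```python
import numpy as np
import onnxruntime as ort

# Mapping network: latent code z -> style vector.
mnet = ort.InferenceSession("mnet.onnx")
# Synthesis network: style vector -> image.
snet = ort.InferenceSession("snet.onnx")

z = np.random.randn(1, 512).astype(np.float32)  # StyleGAN2's default latent size
style = mnet.run(None, {mnet.get_inputs()[0].name: z})[0]
img = snet.run(None, {snet.get_inputs()[0].name: style})[0]
print(img.shape)  # e.g. (1, 3, 1024, 1024) for the FFHQ model
```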
We provide the external library random_face as an example of deploying our model on edge devices using the OpenVINO framework.
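A usage sketch of random_face follows; the function names are taken from that library's README and should be treated as assumptions in case its API has changed:

```python
import cv2
import random_face

engine = random_face.get_engine()   # loads the OpenVINO-converted generator
face = engine.get_random_face()     # synthesizes one random face image
cv2.imwrite("face.png", face)       # save the result to disk
```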
| Name | FID |
|---|---|
| mobilestylegan_ffhq.ckpt | 7.75 |
(*) Our framework supports automatic downloading of pretrained models; just use --ckpt <pretrained_model_name>.
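For example, to run the demo with the pretrained FFHQ model (downloaded automatically):
python demo.py --cfg configs/mobile_stylegan_ffhq.json --ckpt mobilestylegan_ffhq.ckpt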
| Code | Source | License |
|---|---|---|
| Custom CUDA kernels | https://github.com/NVlabs/stylegan2 | Nvidia License |
| StyleGAN2 blocks | https://github.com/rosinality/stylegan2-pytorch | MIT |
We want to thank the people whose works contributed to our project.
If you use the results or code of this work, please cite it as:
@misc{belousov2021mobilestylegan,
title={MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis},
author={Sergei Belousov},
year={2021},
eprint={2104.04767},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@article{BELOUSOV2021100115,
title = {MobileStyleGAN.pytorch: PyTorch-based toolkit to compress StyleGAN2 model},
journal = {Software Impacts},
year = {2021},
issn = {2665-9638},
doi = {10.1016/j.simpa.2021.100115},
url = {https://www.sciencedirect.com/science/article/pii/S2665963821000452},
author = {Sergei Belousov},
}