EMI-Group / SoloGAN

Official implementation of SoloGAN
Apache License 2.0

Multimodal image-to-image translation via a single generative adversarial network

Our model architecture is depicted below; please refer to the paper for more details:

Usage Guidance

Dependencies

  1. Python 3.x
  2. PyTorch 0.4+

Testing

The translated samples are stored in the ./checkpoints/edges_shoes&handbags/edges_shoes&handbags_results directory. By default, the script produces 5 random translation outputs per input image.
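The testing command itself is not listed above. Assuming the test script mirrors the flags of train.py shown below (--name and --d_num, with --d_num set to the number of domains), an invocation might look like this; check the repository's actual script for the exact arguments:

```shell
# Hypothetical invocation: test.py and its flags are assumed to mirror
# train.py's interface. The dataset name contains '&', a shell
# metacharacter, so it must be quoted.
python ./test.py --name 'edges_shoes&handbags' --d_num 3
```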

Training

python ./train.py --name horse2zebra --d_num 2

Intermediate image outputs and model binary files are stored in the ./checkpoints/horse2zebra/web directory.
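Since --d_num sets the number of domains in the dataset (2 for horse2zebra), training on a three-domain dataset such as the cat/dog/tiger task from the Results section would presumably look like the following; the dataset name here is illustrative, not the repository's actual directory name:

```shell
# Illustrative only: the --name value is an assumed dataset directory;
# --d_num equals the number of domains translated among (cat, dog, tiger).
python ./train.py --name cat_dog_tiger --d_num 3
```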

Results

Edges ↔ Shoes&handbags:

Horse ↔ Zebra:

Cat ↔ Dog ↔ Tiger:

Leopard ↔ Lion ↔ Tiger:

Photos ↔ Vangogh ↔ Monet ↔ Cezanne:

BibTeX

If this work helps your research, please cite the paper:

@article{huang2022multimodal,
  title={Multimodal image-to-image translation via a single generative adversarial network},
  author={Huang, Shihua and He, Cheng and Cheng, Ran},
  journal={IEEE Transactions on Artificial Intelligence},
  year={2022},
  publisher={IEEE}
}

Acknowledgment

The code used in this research is based on SingleGAN and CycleGAN.