Our model architecture is depicted below; please refer to the paper for more details:
python ./test.py --name "edges_shoes&handbags" --d_num 2
The translated samples are then stored in the ./checkpoints/edges_shoes&handbags/edges_shoes&handbags_results directory. By default, it produces 5 random translation outputs.
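Because the experiment name contains a shell metacharacter (`&`), it must be quoted, or the shell would background the command. The results directory can then be derived from the name; a minimal sketch, with the path layout taken from the text above:

```shell
# Quote the experiment name: an unquoted "&" would background the command.
NAME='edges_shoes&handbags'
# Results directory layout as described above.
RESULTS="./checkpoints/${NAME}/${NAME}_results"
echo "$RESULTS"
```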
├── datasets
│   └── horse2zebra
│       ├── trainA
│       ├── testA
│       ├── trainB
│       └── testB
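The layout above can be created in one command (the `horse2zebra` name comes from the tree above; place your own images in the four split folders). Brace expansion assumes bash or zsh:

```shell
# Create the unpaired-dataset layout shown above:
# domain A and domain B, each with train and test splits.
mkdir -p datasets/horse2zebra/{trainA,testA,trainB,testB}
```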
The Animals With Attributes (AWA) dataset can be downloaded from here.
python ./train.py --name horse2zebra --d_num 2
Intermediate image outputs and model binary files are stored in ./checkpoints/horse2zebra/web.
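To keep an eye on training progress, you can count the intermediate images as they appear. A small sketch: the `web` path comes from the text above, while the `.png` extension is an assumption about the output format:

```shell
# Count intermediate images written so far; train.py normally creates
# this directory itself, so mkdir -p here is only a safeguard.
mkdir -p ./checkpoints/horse2zebra/web
find ./checkpoints/horse2zebra/web -name '*.png' | wc -l
```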
If this work helps your research, please cite this paper:
@article{huang2022multimodal,
  title={Multimodal image-to-image translation via a single generative adversarial network},
  author={Huang, Shihua and He, Cheng and Cheng, Ran},
  journal={IEEE Transactions on Artificial Intelligence},
  year={2022},
  publisher={IEEE}
}
The code used in this research is based on SingleGAN and CycleGAN.