To test our model we use Multimodal CelebA-HQ, which also provides the semantic labels; alternatively, you can use the face-parsing model provided in their GitHub repository to generate custom semantic labels.
Currently, instructions are provided only for gradio_img2img.py, which works only with Multimodal CelebA-HQ.
I'll update the README as soon as possible to clarify how to generate the entire CelebA-HQ test set.
If you need anything else, please don't hesitate to contact us.