Open · justinpinkney opened this issue 3 years ago
Hi Justin, thank you. Your paper looks interesting. Specifically, you propose a method that interpolates images between two models (e.g., the base model and the adapted model), rather than the approach in StyleGAN, which synthesizes new images using a single model (i.e., the base model). The generated images are convincing and amazing. I really like them.
My work is different from (but related to) yours in its use of the pre-trained model. We investigated knowledge transfer for traditional I2I translation, where the user can manipulate the image in the target domain (e.g., content and style), as in CycleGAN, StarGAN, MUNIT, DRIT, etc.
Finally, thank you for your interest. Your work is great.
Best, Yaxing
Hi, I really like the look of the paper. I've only had a chance to skim through it so far, but it looks really nice.
Reading your paragraph in the abstract:
I feel like there is a strong connection to my own research in Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains, where we explore controlled interpolation of the "hierarchical features" (as you describe them) between two models related by transfer learning. In particular, when you note that:
I'd suggest that maybe the above paper is an example of this, as the Toonification results in Section 3 are an image-to-image translation application produced by leveraging pre-trained GANs.
Interested to hear your thoughts! Cheers, Justin
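For readers unfamiliar with the technique being discussed, below is a minimal sketch of resolution-dependent blending between a base model and a fine-tuned (adapted) model. It assumes PyTorch-style state dicts whose parameter names encode the layer resolution (e.g. `synthesis.b64.conv1.weight`); the naming pattern, the `blend_at` threshold, and the hard swap used here are illustrative assumptions, not details taken from either paper.

```python
import re
import torch

def blend_state_dicts(base_sd, tuned_sd, blend_at=32):
    """Blend two state dicts from models with identical architecture.

    Layers below the `blend_at` resolution keep the base model's weights
    (coarse structure); layers at or above it take the fine-tuned model's
    weights (fine texture / style).
    """
    blended = {}
    for name, base_param in base_sd.items():
        match = re.search(r"\.b(\d+)\.", name)    # e.g. '.b64.' -> 64
        res = int(match.group(1)) if match else 0
        alpha = 1.0 if res >= blend_at else 0.0   # 1.0 -> use fine-tuned weights
        blended[name] = (1 - alpha) * base_param + alpha * tuned_sd[name]
    return blended

# Tiny usage example with dummy tensors standing in for real checkpoints.
base = {"synthesis.b16.conv1.weight": torch.zeros(3, 3),
        "synthesis.b64.conv1.weight": torch.zeros(3, 3)}
tuned = {k: torch.ones_like(v) for k, v in base.items()}
out = blend_state_dicts(base, tuned, blend_at=32)
# b16 stays at 0 (base); b64 becomes 1 (fine-tuned)
```

A smoother, resolution-dependent interpolation as described in Justin's paper would replace the hard 0/1 `alpha` with a value that varies gradually with layer resolution.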