HideUnderBush / UI2I_via_StyleGAN2

Unsupervised image-to-image translation method via pre-trained StyleGAN2 network

Cite prior work on layer swapping #2

Closed: justinpinkney closed this issue 3 years ago

justinpinkney commented 3 years ago

Hi, just stumbled across this and it looks great, particularly the anime generation images. It looks like you're essentially using the method I described in some of my blog posts on transfer learning: using a latent code from one model in another, and layer swapping (https://www.justinpinkney.com). I'm glad to see you cite Doron and me for our Toonify work!

We actually have a paper on arXiv that describes this approach, particularly focussing on the idea of layer swapping you're using. It would be really great if you could cite our actual paper: Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains.

Perhaps as prior work where you are describing the "layer swapping" you perform?
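For readers unfamiliar with the layer swapping being discussed, here is a minimal sketch, assuming rosinality-style StyleGAN2 checkpoints (the `g_ema` key, the `convs.{i}` layer naming, and the resolution schedule are assumptions about that implementation, not this repo's actual code): structure layers at low resolutions are taken from one generator and the remaining layers from a fine-tuned copy.

```python
# Hedged sketch of layer swapping between two StyleGAN2 generators.
# Assumes rosinality-style checkpoints ("g_ema" state dict, conv layers named
# "convs.{i}..." with resolution doubling every two convs from 8px upwards);
# the actual repositories may organise their weights differently.
import torch

def swap_layers(base_ckpt, finetuned_ckpt, swap_below_res=32):
    """Blend two generators: layers below `swap_below_res` come from the
    base (e.g. FFHQ) model, the rest from the fine-tuned (e.g. anime) model."""
    base = torch.load(base_ckpt, map_location="cpu")["g_ema"]
    fine = torch.load(finetuned_ckpt, map_location="cpu")["g_ema"]

    blended = dict(fine)                  # start from the fine-tuned weights
    for key in fine:
        if key.startswith("convs."):
            idx = int(key.split(".")[1])
            res = 8 * 2 ** (idx // 2)     # convs.0/1 -> 8px, convs.2/3 -> 16px, ...
            if res < swap_below_res:
                blended[key] = base[key]  # low-resolution structure from base model
    return blended
```

A latent code obtained by inverting a photo with the base model can then be passed through the blended generator to produce the translated result.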

HideUnderBush commented 3 years ago

Hi Justin,

We did not notice there was a formal paper, so we just cited Toonify directly. Yes, we would be glad to cite this paper as prior work on layer swapping. We are still refining our paper and will add the citation in the next revision.

We do love your Toonify work, especially its real-time performance, which can support lots of interesting video-based applications. Different from Toonify, we aim to provide an overall multi-modal, multi-domain solution to the general I2I translation problem. Since the inversion part is currently still an optimization-based method, it takes longer than feed-forward methods; we hope to improve this in future work. Maybe we can cooperate in the future to add new features to Toonify, like user-guided style selection/modification, etc. :)

Many Thanks, Chloe
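To illustrate why the optimization-based inversion Chloe mentions is slower than a feed-forward encoder, here is a minimal, hypothetical sketch assuming a rosinality-style StyleGAN2 generator interface (`mean_latent` and the `input_is_latent=True` call are assumptions): each image needs hundreds of gradient steps on its latent code, whereas an encoder needs a single forward pass.

```python
# Hypothetical sketch of optimization-based GAN inversion: optimise a latent
# code so the generator reproduces a target image. Uses a plain L2 loss for
# brevity; real pipelines typically add a perceptual (e.g. LPIPS) term and
# noise regularisation.
import torch
import torch.nn.functional as F

def invert(generator, target, steps=500, lr=0.01):
    # Start from the average latent (assumes a rosinality-style `mean_latent`).
    latent = generator.mean_latent(4096).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        recon, _ = generator([latent], input_is_latent=True)
        loss = F.mse_loss(recon, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return latent.detach()
```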

justinpinkney commented 3 years ago

Excellent, thank you!

Yes, I guessed you hadn't seen our paper, as it looks like it was only submitted to arXiv the day before yours! I'm excited to see what further work can be done in this area and would be very interested if you wanted to collaborate in any way.

Cheers, Justin