HalbertCH / IEContraAST

This is the official PyTorch implementation of our paper: "Artistic Style Transfer with Internal-external Learning and Contrastive Learning".

Style transfer on faces #3

Closed Otje89 closed 2 years ago

Otje89 commented 2 years ago

Thank you very much for the great results you’ve achieved! It’s very impressive. However, I see that it doesn’t perform well on faces / the results don’t look natural. Either the color of the face stands out strongly (e.g. an orange face, whereas the rest of the picture looks more natural) and/or features such as the nose and eyes disappear. Do you think this could be caused by the content dataset? Would including more faces in the dataset lead to better results?

Another question I have is about the WikiArt dataset. It consists of many different styles. Does it matter that they are all mixed? Or could it help to categorize them first and make it a conditional model? Or would training on a single style lead to even better results for that style?

Thank you!

HalbertCH commented 2 years ago

For the first question, I speculate there are two reasons. One is that since the content structure of faces is very simple and regular, our eyes are very sensitive to changes in their structure. The other is what you mentioned: the content dataset contains few human faces. You could try training the model on a human face dataset. In addition, you can also try increasing the content weight for better content preservation (see the sketch below).
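To make the "increase the content weight" suggestion concrete, here is a minimal sketch of how the content/style weighting typically looks in an AdaIN-style PyTorch pipeline. The function names, the statistics-matching style loss, and the weight values are illustrative assumptions, not this repository's actual code.

```python
import torch
import torch.nn.functional as F

def content_loss(stylized_feats, content_feats):
    # MSE between encoder features of the stylized and content images.
    return F.mse_loss(stylized_feats, content_feats)

def style_loss(stylized_feats, style_feats):
    # Match channel-wise mean/std statistics (AdaIN-style style loss).
    def stats(f):
        return f.mean(dim=(2, 3)), f.std(dim=(2, 3))
    m_out, s_out = stats(stylized_feats)
    m_sty, s_sty = stats(style_feats)
    return F.mse_loss(m_out, m_sty) + F.mse_loss(s_out, s_sty)

# Dummy feature maps standing in for VGG activations.
stylized = torch.randn(1, 512, 32, 32)
content  = torch.randn(1, 512, 32, 32)
style    = torch.randn(1, 512, 32, 32)

# Raising content_weight relative to style_weight trades some
# stylization strength for better structure preservation (e.g. on faces).
content_weight, style_weight = 2.0, 1.0
total = content_weight * content_loss(stylized, content) \
      + style_weight * style_loss(stylized, style)
```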

For the second question, although different images in WikiArt vary greatly in fine details, they share a key commonality: they are all human-created artworks, whose brushstrokes, color distributions, texture patterns, tones, etc., are more consistent with human perception. In other words, they contain some human-aware style information that is lacking in synthesized stylizations. Therefore, we utilized such human-aware style information to improve stylization results; a generic sketch of a contrastive objective follows.
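For reference, below is a generic InfoNCE-style contrastive sketch that pulls a stylization's style embedding toward its target style (the positive) and away from other styles (the negatives). This is only an illustration of the general idea, not the paper's exact loss; the embedding shapes and the temperature `tau` are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.2):
    # anchor, positive: (D,) style embeddings; negatives: (N, D).
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos_sim = torch.dot(anchor, positive) / tau   # similarity to the positive
    neg_sim = negatives @ anchor / tau            # similarities to negatives
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # The positive pair sits at index 0, so the "label" is class 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage with random embeddings (D-dim features, N negatives).
D, N = 128, 8
loss = info_nce(torch.randn(D), torch.randn(D), torch.randn(N, D))
```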

“Could it help to categorize them first and make it a conditional model? Or would learning on one style lead to even better results on that one style?” In fact, some existing style transfer methods [1, 2, 3] have tried to collect related style images from WikiArt (e.g., a single artist’s artworks) to build a style dataset. However, these methods have to train a separate model for each artist based on his/her artworks, while our method aims at arbitrary style transfer.

[1] A Style-Aware Content Loss for Real-time HD Style Transfer. ECCV 2018.
[2] A Content Transformation Block For Image Style Transfer. CVPR 2019.
[3] Content and Style Disentanglement for Artistic Style Transfer. ICCV 2019.

Otje89 commented 2 years ago

Thank you very much for your reply with clear explanations.
