Hi, thanks for your great work.

I'm trying to apply StarGAN to my own custom anime dataset to learn facial expression transfer. The dataset contains 2,000 images per emotion class (2k happy, 2k sad, etc.), and I trained for about 100k iterations.

The problem is that the translated images show only small changes, mostly around the mouth. Do you have any suggestions? How can I make the network more flexible so it modifies other parts of the face as well?
Some sample results: