I'm trying to apply CycleGAN to my own custom anime dataset to learn facial expression transfer.
My dataset includes 2,000 sad and 2,000 happy images.
I ran the training process for about 100k iterations.
The problem is that I only see small changes, mostly in the mouth region. Do you have any suggestions? How can I make the network more flexible so that it changes other parts of the face as well?
I recommend using a smaller lambda for the cycle-consistency loss, removing the identity loss, or increasing the receptive field of the discriminator. You could also experiment with a larger generator.
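To make the first two suggestions concrete, here is a minimal sketch of how the CycleGAN generator objective is weighted and why lowering the cycle weight (and dropping the identity term) gives the generator more freedom. The scalar inputs are illustrative stand-ins for the tensor losses computed during training; the function name and example values are my own, not from any particular codebase:

```python
def cyclegan_g_loss(adv, cyc, idt, lambda_cyc=10.0, lambda_idt=0.5):
    """Weighted CycleGAN generator objective (scalar sketch).

    adv: adversarial term, cyc: cycle-consistency (L1) term,
    idt: identity (L1) term. lambda_cyc defaults to 10 as in the
    CycleGAN paper; the identity term is scaled by lambda_cyc * lambda_idt.
    """
    return adv + lambda_cyc * cyc + lambda_cyc * lambda_idt * idt

# Lowering lambda_cyc and setting lambda_idt=0 reduces the penalty
# for deviating from the input image, so the generator is freer to
# change more than just the mouth region.
default = cyclegan_g_loss(adv=1.0, cyc=0.3, idt=0.2)                           # 1 + 3.0 + 1.0 = 5.0
relaxed = cyclegan_g_loss(adv=1.0, cyc=0.3, idt=0.2,
                          lambda_cyc=5.0, lambda_idt=0.0)                      # 1 + 1.5 + 0.0 = 2.5
```

If you are training with the official pytorch-CycleGAN-and-pix2pix code, these knobs should correspond to options such as `--lambda_A`, `--lambda_B`, and `--lambda_identity 0` (option names assumed from that repository; check its `options/` docs for your version).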