albertpumarola / GANimation

GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch]
http://www.albertpumarola.com/research/GANimation/index.html
GNU General Public License v3.0
1.96k stars · 414 forks

About the increasing of cycle_consistency_loss #92

Open csh589 opened 5 years ago

csh589 commented 5 years ago

Thank you for your great work! We found something that does not behave ideally during training: the generator's cycle_consistency_loss increases. According to the paper, this loss forces the generator to produce reconstructions that resemble the original image, so we expected it to decrease over training, but it actually increased while the other generator losses were decreasing. Here is a visualization of the losses:

[image: loss curves]

To show the problem more concretely, here are three groups of images, in the order original → generated → reconstruction. We found that new features introduced in the generated image fail to be recovered in the reconstruction (e.g. the front hair in the second group), which is clearly not what we want:

[image: original / generated / reconstruction triplets]

So we would like to ask: does the cycle consistency loss also increase during your training? And do you have any suggestions for this phenomenon?
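For reference, the cycle consistency term under discussion is just a mean L1 distance between the input and its round-trip reconstruction. A minimal NumPy stand-in (illustrative only, not the repo's actual PyTorch code):

```python
import numpy as np

def cycle_consistency_loss(original, reconstructed):
    """Mean L1 distance used as the cycle consistency term.

    GANimation maps the input to a target expression and back again;
    this term penalizes any detail (e.g. the front hair above) that
    the round trip fails to recover.
    """
    return np.abs(original - reconstructed).mean()

# a perfect reconstruction gives zero loss; unrecovered details raise it
x = np.random.rand(3, 128, 128)
print(cycle_consistency_loss(x, x))  # 0.0
```

Because the term is a plain pixel-wise average, a rising curve directly means the reconstructions are drifting further from the inputs, as the image triplets above show.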

talengu commented 5 years ago

@csh589 Hello, I use 29,000 pictures and get bad results like this. Any advice? Can you share your parameters? And how many pictures are in your dataset?

[image: test_7]

# train args
parser.add_argument('--face_dir', dest='face_dir', default='../data_prepare/_data/dataFile_all/',
                    help='directory where the face images are stored')
parser.add_argument('--au_pkl_dir', dest='au_pkl_dir', default='../data_prepare/_data/dataFile_all.pkl',
                    help='.pkl file storing the AU labels')

parser.add_argument('--batch_size', dest='batch_size', type=int, default=25, help='training batch size')
parser.add_argument('--epoch', dest='epoch', type=int, default=30, help='number of training epochs')

parser.add_argument('--lambda_D_img', dest='lambda_D_img', type=float, default=1, help='weight of the image adversarial loss')
parser.add_argument('--lambda_D_au', dest='lambda_D_au', type=float, default=4000, help='weight of the AU regression loss')
parser.add_argument('--lambda_D_gp', dest='lambda_D_gp', type=float, default=10, help='weight of the WGAN-GP gradient penalty')
parser.add_argument('--lambda_cyc', dest='lambda_cyc', type=float, default=10, help='weight of the cycle consistency loss')
parser.add_argument('--lambda_mask', dest='lambda_mask', type=float, default=0.1, help='weight of the attention mask sparsity loss')
parser.add_argument('--lambda_mask_smooth', dest='lambda_mask_smooth', type=float, default=1e-5, help='weight of the attention mask smoothness loss')