knazeri / edge-connect

EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019 https://arxiv.org/abs/1901.00212
http://openaccess.thecvf.com/content_ICCVW_2019/html/AIM/Nazeri_EdgeConnect_Structure_Guided_Image_Inpainting_using_Edge_Prediction_ICCVW_2019_paper.html

Why not use L1 loss between the masked parts in output img and input img. #142

Open ChenYutongTHU opened 4 years ago

ChenYutongTHU commented 4 years ago

Thanks for this awesome project.

I noticed that while training the inpainting model, the L1 loss is measured between the complete output image and the complete input image:

```python
gen_l1_loss = self.l1_loss(outputs, images) * self.config.L1_LOSS_WEIGHT / torch.mean(masks)
gen_loss += gen_l1_loss
```

I wonder if it would be more appropriate to measure the error only within the masked region, since the final output image merges the generator's restoration of the masked region with the given (unmasked) context.
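To make the two options concrete, here is a minimal sketch contrasting the repo's loss (full-image L1 rescaled by the fraction of masked pixels) with the masked-region-only alternative suggested above. The tensors are toy stand-ins for `outputs`, `images`, and `masks`; I am assuming the repo's convention that the mask is 1 in the missing region.

```python
import torch
import torch.nn.functional as F

# Toy tensors standing in for the generator output, the ground-truth image,
# and the mask (mask = 1 in the missing region, assuming the repo's convention).
torch.manual_seed(0)
outputs = torch.rand(1, 3, 8, 8)
images = torch.rand(1, 3, 8, 8)
masks = (torch.rand(1, 1, 8, 8) > 0.5).float()

# Loss as used in the repo: mean L1 over the whole image, divided by the
# fraction of masked pixels so the scale does not shrink with small masks.
full_l1 = F.l1_loss(outputs, images) / torch.mean(masks)

# Masked-only alternative from this issue: average the absolute error over
# masked pixels only (the mask broadcasts over the channel dimension).
masked_l1 = torch.sum(torch.abs(outputs - images) * masks) / (
    torch.sum(masks) * outputs.shape[1]
)

print(full_l1.item(), masked_l1.item())
```

The two quantities differ only in whether the unmasked context contributes error terms: if the final result always pastes the ground-truth context back in, that contribution is arguably redundant, which is the point raised in this issue.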

Thanks!