Official PyTorch implementation of "Learnable Gated Temporal Shift Module for Deep Video Inpainting" (Chang et al., BMVC 2019) and the FVI dataset from "Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN" (Chang et al., ICCV 2019).
Hi,
Thanks for sharing your code. I am working on the same FVI dataset. I just wanted to know how long you trained the network. You mentioned that you used 1,940 videos and then applied a few data augmentation techniques. Did you apply data augmentation to all of the videos? Could you please mention the final size of the dataset after augmentation? Also, it would be a great help if you could mention the weights of the different losses (i.e., perceptual, style, reconstruction, and adversarial) that you used during training.
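To be concrete, something like the following weighted combination is what I have in mind; the weight values and variable names below are just placeholders of my own, not your actual settings:

```python
import torch

# Placeholder weights -- these are exactly the values I am asking about,
# not the ones from the paper.
w_rec, w_perc, w_style, w_adv = 1.0, 1.0, 1.0, 1.0

# Dummy scalar tensors standing in for the per-batch loss terms
# (names are hypothetical, just to make the snippet runnable).
reconstruction_loss = torch.tensor(0.5)
perceptual_loss = torch.tensor(0.3)
style_loss = torch.tensor(0.01)
adversarial_loss = torch.tensor(0.8)

# Weighted sum of the four losses, as is common for inpainting models.
total_loss = (w_rec * reconstruction_loss
              + w_perc * perceptual_loss
              + w_style * style_loss
              + w_adv * adversarial_loss)
```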
Looking forward to your help.