Closed XiaoyuShi97 closed 3 years ago
It is officially provided by the authors.
Thx for your prompt reply. I used this model and the pretrained checkpoint on the Vimeo dataset and got a PSNR of 31.85 and an SSIM of 0.956, which is lower than the reported 35.15. Did I misunderstand anything?
Following the paper and code, we apply a preprocessing step that normalizes the inputs with mean = 0.5 and std = 0.5. You have to apply the same normalization to the model's inputs at test time.
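For anyone unsure what this looks like in practice, here is a minimal sketch of a per-channel mean/std = (0.5, 0.5) normalization and its inverse. The function names are illustrative, not taken from the repo; the actual code may use `torchvision.transforms.Normalize` instead.

```python
import numpy as np

def normalize(frame):
    """Map a uint8 HxWx3 frame to roughly [-1, 1] (illustrative helper)."""
    x = frame.astype(np.float32) / 255.0  # scale to [0, 1]
    return (x - 0.5) / 0.5                # subtract mean, divide by std

def denormalize(x):
    """Invert the normalization before computing PSNR/SSIM on uint8 frames."""
    return np.clip((x * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)

frame = np.full((2, 2, 3), 255, dtype=np.uint8)
print(normalize(frame).max())  # 1.0
```

Skipping this step at test time (i.e. feeding [0, 1] or [0, 255] inputs to a model trained on [-1, 1] inputs) would plausibly explain a PSNR drop of the size reported above.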
One more question: during training, t is fixed at 0.5, but at test time t takes the values 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, and 7/8?
For 2x interpolation, t is fixed at 0.5 at both train and test time. For 8x interpolation, at both train and test time, t takes the values [1/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8]. This is because models trained specifically for 2x do not transfer that well to multi-frame interpolation (though they are strong baselines nevertheless). So our QVI models on GoPro and Vimeo are different.
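The time-step schedule described above can be sketched as a one-liner (this is just an illustration of the convention, not code from either repo):

```python
def time_steps(factor):
    """Intermediate t values for `factor`x interpolation (illustrative)."""
    return [i / factor for i in range(1, factor)]

print(time_steps(2))  # [0.5]
print(time_steps(8))  # [0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
```

So a 2x model only ever sees t = 0.5, while an 8x model is trained and evaluated on all seven intermediate positions.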
Hello, I have encountered the same problem. May I know how the preprocessing (normalizing with mean = 0.5, std = 0.5) is specifically set up in the code?
Hi. Thx for your efforts on benchmarking existing models. I wonder which repo you are using for the quadratic video interpolation (QVI) model? Could you please share the link?