Thank you for open-sourcing the YouHQ dataset!

However, I'm confused about the 80×80 crops mentioned in your paper. Did you use the full-size YouHQ videos during the training phase, or only the 80×80 crops? And if the model was trained only on 80×80 videos, how does it adapt to higher-resolution videos at super-resolution inference time?