When running the model on 384×384 images, I get the following error from the VGG discriminator in ESRGAN:
assert x.size(2) == 128 and x.size(3) == 128, (f'Input spatial size must be 128x128, '
AssertionError: Input spatial size must be 128x128, but received torch.Size([4, 3, 384, 384]).
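The VGG-style discriminator ends in fully connected layers, so it only accepts the fixed spatial size it was built for (here 128×128). One workaround is to resize or crop the batch to 128×128 before it reaches the discriminator. A minimal sketch (the helper name `to_discriminator_size` is my own, not part of ESRGAN):

```python
import torch
import torch.nn.functional as F

def to_discriminator_size(x: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Resize a batch of images (N, C, H, W) to the fixed spatial size
    the VGG-style discriminator expects. Assumes bilinear resizing is
    acceptable for the training setup; random 128x128 crops are an
    alternative that preserves the original pixel scale."""
    if x.size(2) == size and x.size(3) == size:
        return x
    return F.interpolate(x, size=(size, size), mode='bilinear', align_corners=False)

# The failing shape from the traceback:
batch = torch.randn(4, 3, 384, 384)
resized = to_discriminator_size(batch)
print(resized.shape)  # torch.Size([4, 3, 128, 128])
```

If resizing is undesirable, another option is a fully convolutional discriminator (e.g. a U-Net-style one), which accepts arbitrary input sizes.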