How did you get 96×96 high-resolution REAL images for comparison?
My understanding was that we take the original dataset (CIFAR-10, consisting of 32×32 images), down-sample each image (by, say, a factor of 4, i.e. to 8×8), and use the down-sampled image together with its original as a training pair.
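To make that concrete, here is roughly the pair construction I have in mind (a minimal sketch, not your code; I'm assuming bicubic as the down-sampling filter, and `make_pair` is just an illustrative helper):

```python
# Sketch of the (low-res, high-res) pair construction I'm describing.
# The bicubic filter is my assumption; any down-sampling filter would do.
import numpy as np
from PIL import Image

def make_pair(hr_array, scale=4):
    """Build one (low-res, high-res) pair from a 32x32 CIFAR-10 image."""
    hr = Image.fromarray(hr_array)            # 32x32 original = high-res target
    w, h = hr.size
    lr = hr.resize((w // scale, h // scale),  # 8x8 down-sampled = network input
                   resample=Image.BICUBIC)
    return np.asarray(lr), np.asarray(hr)

# Example with a random stand-in for one CIFAR-10 image:
dummy = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
lr, hr = make_pair(dummy)
print(lr.shape, hr.shape)  # (8, 8, 3) (32, 32, 3)
```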
At test time we would feed in a down-sampled (8×8) image and expect a 32×32 output that is very close to the corresponding original 32×32 image. How come your outputs are 96×96? It seems you first up-scaled the images and then down-sampled them. Won't that affect the quality of the output?