sczhou / IGNN

[NeurIPS 2020] Cross-Scale Internal Graph Neural Network for Image Super-Resolution

how did you get GraphAgg* #14

Open qiqiing opened 3 years ago

qiqiing commented 3 years ago

When you demonstrate the ‘Effectiveness of Graph Aggregation Module’, how did you obtain GraphAgg*? I implemented it like this: directly compare the low-resolution input image and its down-sampled version in the image domain to obtain idx_k, and then aggregate according to idx_k to obtain z_sr (using the GraphAggregation module from your code). What is wrong with this approach? The PSNR of my aggregated result is lower than what plain bicubic upsampling gives.
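For concreteness, here is a rough sketch of the image-domain procedure described above. It is not the repo's GraphAggregation code: the patch size, k, and bicubic downsampling are placeholder assumptions, the k matches are simply averaged (no learned edge weights), and folding the aggregated patches back into z_sr is omitted.

```python
import torch
import torch.nn.functional as F

def image_domain_graphagg(lr, scale=2, patch=3, k=5):
    """lr: (1, C, H, W) low-resolution input, H and W divisible by `scale`."""
    # Down-sample the LR input to build the cross-scale search space.
    lr_down = F.interpolate(lr, scale_factor=1.0 / scale, mode='bicubic',
                            align_corners=False)

    # Query patches from the LR image and key patches from its down-sampled
    # version, both extracted with stride 1: (1, N, C*patch*patch).
    q = F.unfold(lr, kernel_size=patch, stride=1).transpose(1, 2)
    key = F.unfold(lr_down, kernel_size=patch, stride=1).transpose(1, 2)

    # idx_k: for every query patch, the indices of its k nearest key patches
    # under L2 distance in the image domain.
    dist = torch.cdist(q, key)                           # (1, N_q, N_k)
    idx_k = dist.topk(k, dim=-1, largest=False).indices  # (1, N_q, k)

    # A key patch at grid position n of lr_down corresponds to the
    # (patch*scale)-sized patch at grid position n of this stride-`scale`
    # unfold of lr, so idx_k indexes both grids consistently.
    big = F.unfold(lr, kernel_size=patch * scale, stride=scale).transpose(1, 2)

    # Plain average of the k matched larger patches per query:
    # (1, N_q, C*(patch*scale)**2).
    z_sr_patches = big[:, idx_k[0]].mean(dim=2)
    return idx_k, z_sr_patches
```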

sczhou commented 3 years ago

Hi, compared with the image domain, the VGG feature domain is more robust for patch matching in GraphAgg. In addition, the non-learned version of GraphAgg is heavily affected by the degree of patch recurrence in the input image. You could try obtaining idx_k in the feature domain and testing on an image that contains some cross-scale recurrent patches.
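For reference, a minimal sketch of the matching step moved into the feature domain, along the lines suggested above. Using the full-resolution relu1_2 features of torchvision's pretrained VGG19 is an assumption made here for illustration (it keeps the feature grid aligned with the image-domain patch grid), not necessarily the layer IGNN uses.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Shallow slice of a pretrained VGG19 up to relu1_2; this depth keeps full
# spatial resolution, so feature-domain indices line up with image-domain
# patch grids.
vgg = models.vgg19(pretrained=True).features[:4].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

_mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
_std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def feature_domain_idx_k(lr, scale=2, patch=3, k=5):
    """lr: (1, 3, H, W) in [0, 1], H and W divisible by `scale`."""
    lr_down = F.interpolate(lr, scale_factor=1.0 / scale, mode='bicubic',
                            align_corners=False)
    with torch.no_grad():
        f_lr = vgg((lr - _mean) / _std)         # features of the LR input
        f_down = vgg((lr_down - _mean) / _std)  # features of its down-sampled version

    # Same cross-scale patch matching as before, but distances are now
    # computed on VGG features instead of raw pixels.
    q = F.unfold(f_lr, kernel_size=patch, stride=1).transpose(1, 2)
    key = F.unfold(f_down, kernel_size=patch, stride=1).transpose(1, 2)
    dist = torch.cdist(q, key)
    return dist.topk(k, dim=-1, largest=False).indices  # idx_k, shape (1, N_q, k)
```

Because relu1_2 keeps full spatial resolution, this idx_k indexes the same query/key patch grids as the image-domain sketch above, so the aggregation step there can be reused unchanged.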

qiqiing commented 3 years ago

> Hi, compared with the image domain, the VGG feature domain is more robust for patch matching in GraphAgg. In addition, the non-learned version of GraphAgg is heavily affected by the degree of patch recurrence in the input image. You could try obtaining idx_k in the feature domain and testing on an image that contains some cross-scale recurrent patches.

Thank you. So what you mean is: first feed the input image and its down-sampled version into a pre-trained VGG19, find idx_k in the feature domain, and then aggregate in the image domain (using the original input and its down-sampled version) according to the found idx_k, is that correct?
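Putting the two sketches above together, the pipeline described in this comment would look roughly like this (`feature_domain_idx_k` is the hypothetical helper from the previous sketch, not a function from this repo; reconstructing z_sr from the overlapping patches, e.g. with F.fold and overlap averaging, is again omitted):

```python
import torch
import torch.nn.functional as F

lr = torch.rand(1, 3, 96, 96)   # toy LR input; H and W divisible by `scale`
scale, patch, k = 2, 3, 5

# 1. Match patches in the VGG feature domain to obtain idx_k.
idx_k = feature_domain_idx_k(lr, scale=scale, patch=patch, k=k)

# 2. Aggregate the corresponding larger image-domain patches of the LR input.
big = F.unfold(lr, kernel_size=patch * scale, stride=scale).transpose(1, 2)
z_sr_patches = big[:, idx_k[0]].mean(dim=2)   # (1, N_q, 3*(patch*scale)**2)
```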