Hi! First of all, I have to say that I am quite impressed by the potential your work has shown. However, after downloading the test data linked in your repo (CUFED5) and running the test shell script with both of the pre-trained models made available, I have been quite disappointed by the results:
(The reference images have been scaled down just to fit into the presentation, but I believe they are 500x332.)
I have not looked too closely at the test script you provided, but I did look at the architecture proposed in the paper, and here are some observations/comments/questions:
1. Shouldn't the reference image resolution be 4x that of the LR input? (This does not seem to be the case with the data provided.)
2. Currently, when running the test shell script, the output resolution seems to be the same as the input's, so is the shared implementation more like detail enhancement than SR? (See the sketch after this list for the kind of size check I have in mind.)
3. The input and reference look to be of the same quality, so I would not expect massive improvements in the SR result. However, something like using a DSLR shot as the reference for a phone-camera input would be quite a novel and interesting experiment. (This is the goal I hope to achieve using your methodology.)
4. More theoretically, what kind of results would you expect when the input and reference are completely mismatched?
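For questions 1 and 2, this is roughly the kind of sanity check I have in mind; the file paths are only placeholders, not the actual layout of your repo:

```python
# Quick size check: how do the LR input, reference, and SR output compare?
# The paths below are placeholders (assumed, not the repo's actual layout).
from PIL import Image

paths = {
    "LR input":  "data/CUFED5/001_0.png",
    "reference": "data/CUFED5/001_1.png",
    "SR output": "results/001_0_sr.png",
}

sizes = {name: Image.open(path).size for name, path in paths.items()}
for name, (w, h) in sizes.items():
    print(f"{name:9s}: {w}x{h}")

# For a 4x SR setting I would expect the SR output to be ~4x the LR input
# in each dimension, and the reference to be roughly at the output scale.
in_w, in_h = sizes["LR input"]
out_w, out_h = sizes["SR output"]
print(f"output/input scale: {out_w / in_w:.2f}x, {out_h / in_h:.2f}x")
```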
Extremely sorry for the many naive questions, but I hope to hear your views on them!
Same as for question 2: the input image is actually much smaller than the reference image. Therefore, some of our testing results might not be very satisfying, as you mentioned, especially on human faces (this needs improvement in future work).
The quality of the output image might be similar to that of SISR methods.
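To make the intended 4x setup concrete, here is a minimal sketch of how the LR input relates to the reference; the bicubic downsampling and file paths are illustrative assumptions, not necessarily what the released test script does:

```python
# Minimal sketch of the 4x setup (illustrative; paths and preprocessing
# are assumptions, not necessarily what the released test script does).
from PIL import Image

scale = 4
hr = Image.open("data/CUFED5/001_0.png")    # HR ground truth, roughly 500x332
ref = Image.open("data/CUFED5/001_1.png")   # reference image stays at full size

# The LR input is the HR image downsampled by the scale factor,
# so it is much smaller than the reference.
lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)

print("LR input :", lr.size)   # roughly 125x83
print("reference:", ref.size)  # roughly 500x332
# The network then upsamples the LR input back by 4x, borrowing
# high-frequency detail from the reference where the content matches.
```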