Hi, really loved reading the paper!
I just had one small confusion: what is MemNet lacking that you are trying to resolve?
Your point from the paper: "MemNet interpolates the original LR image to the desired size to form the input. This preprocessing step not only increases computation complexity quadratically but also loses some details of the original LR image."
I have read MemNet; it also has the same initial feature-extraction network as yours, followed by memory blocks and a reconstruction net.
One more minor doubt: do you also keep long-term memory persistence the way MemNet does, i.e., by adding dense connections across RDB blocks?
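To make the quadratic-complexity point above concrete, here is a rough back-of-the-envelope sketch (sizes, channel counts, and kernel size are illustrative assumptions, not values from either paper): a convolution's cost scales with the spatial area of its input, so running the network on an image pre-upsampled by factor s multiplies the per-layer cost by s².

```python
# Rough per-layer cost: pre-upsampled input (MemNet-style) vs. operating
# directly on the LR image. All numbers below are assumed for illustration.

def conv_flops(h, w, c_in, c_out, k=3):
    """Approximate multiply-adds for one k x k convolution layer."""
    return h * w * c_in * c_out * k * k

scale = 4            # upscaling factor s (assumed)
h_lr, w_lr = 64, 64  # LR input size (assumed)
c = 64               # feature channels (assumed)

lr_cost = conv_flops(h_lr, w_lr, c, c)                  # conv on the LR grid
hr_cost = conv_flops(h_lr * scale, w_lr * scale, c, c)  # conv on the pre-upsampled grid

print(hr_cost / lr_cost)  # -> 16.0, i.e. s**2: cost grows quadratically in s
```

So with s = 4, every layer that runs on the interpolated input does 16x the work of the same layer on the original LR image, which is (as I understand it) the motivation for extracting features at LR resolution and upsampling only at the end.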
So what does the above point mean?
Thank you!