I have a question about the implementation of DnCNN. The architecture in this repository uses an input layer of size [50,50,1], yet the trained model can be applied to images of arbitrary sizes at prediction time. Do we need to divide the input image into patches, or replace the input layer? I could not find this information in your paper.
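My current guess is that this works because DnCNN is fully convolutional: every layer uses zero-padded "same" convolutions, so the spatial output size always matches the input size, whatever it is. As a quick sanity check (my own illustration, not code from this repository; `conv2d_same` and the averaging kernel are just placeholders), a single "same"-padded convolution already preserves arbitrary input shapes:

```python
import numpy as np

def conv2d_same(img, kernel):
    # Zero-padded "same" 2-D convolution: the output has the same
    # spatial size as the input, whatever that size is.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0  # placeholder 3x3 filter
# Training patch size vs. a full-size test image: both pass through unchanged.
for shape in [(50, 50), (321, 481)]:
    x = np.random.rand(*shape)
    y = conv2d_same(x, kernel)
    assert y.shape == x.shape
```

If that reasoning is right, the [50,50,1] input size would only matter for the training patches, and prediction would need neither patch division nor a replaced input layer. Could you confirm?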
Thank you for your code.
Thank you in advance.