greatlog closed this issue 2 years ago
Yes, we re-trained the DBSR model using the same training settings as our proposed approach. That is, we trained it on 4 GPUs with a larger batch size (16) and a burst size of 14 during training. Furthermore, the model was trained for 500 epochs. See Section 5.1 in our ICCV paper for details.
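The retraining settings above can be collected into a small config sketch. This is only an illustration of the stated hyperparameters; the dictionary keys and the even per-GPU split are assumptions, not the repository's actual config schema.

```python
# Hypothetical config mirroring the retraining settings described above.
# Key names are illustrative, not the actual repo's configuration keys.
retrain_config = {
    "num_gpus": 4,
    "batch_size": 16,   # larger batch than the original DBSR release
    "burst_size": 14,   # burst size used during training
    "num_epochs": 500,
}

# Assuming the batch is split evenly across GPUs (a common data-parallel setup):
per_gpu_batch = retrain_config["batch_size"] // retrain_config["num_gpus"]
print(per_gpu_batch)  # → 4
```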
The results of DBSR reported in your paper are slightly higher than those reported in the CVPR 2021 paper. Did you re-train the model, or do you calculate the metrics differently?