Hello @sjmoran, Thanks for your great work!
I wanted to raise a question about the pretrained model in the repository. According to Table 6 in the paper, the TED+CURL model achieves a PSNR of 24.04 on the MIT5K DPE dataset. However, the pretrained model available in the repository is named "...testpsnr_23.584083321292365...pt," indicating a PSNR of 23.58. Moreover, when I evaluated that model on the same dataset myself, I got a slightly different result: a test PSNR of 23.62.
I'm curious about the gap of roughly 0.4 dB between the pretrained model and the results reported in the paper. Is there a specific reason for this discrepancy (e.g., a different checkpoint or evaluation setup), or might I be doing something incorrectly during evaluation?
Once again, thank you for your project. Your insights would be greatly appreciated.