caiyuanhao1998 / PNGAN

"Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training" (NeurIPS 2021)
https://arxiv.org/abs/2204.02844
MIT License

Questioning the finetuning experiments of MPRNET/MIRNET #10

Closed wuqi-coder closed 5 months ago

wuqi-coder commented 1 year ago

Dear author,

According to the conclusion of your data-proportion experiment, fine-tuning with only the original SIDD data actually performs best on the SIDD validation set. In other words, if you had used only the original SIDD data for fine-tuning in your quantity-increment experiment, MIRNet could reach a score even higher than 40.07, i.e., an improvement of more than 0.35 dB. However, this does not seem reasonable. After conducting numerous experiments, we found that fine-tuning with only the original SIDD data yields just a slight increase of 0.01–0.02 dB, which is far below such a large improvement. How did you achieve this? I look forward to your response and would greatly appreciate it.

caiyuanhao1998 commented 5 months ago

We fine-tune the model on the SIDD training set with sliced patch training samples, which significantly increases the number of training samples.
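For illustration, slicing each image into overlapping patches multiplies the effective number of training samples. This is a minimal sketch of such patch extraction, assuming images as NumPy arrays; the patch size and stride here are hypothetical, not the authors' actual values:

```python
import numpy as np

def slice_patches(img: np.ndarray, patch: int = 128, stride: int = 64):
    """Slice an H x W x C image into overlapping square patches."""
    h, w = img.shape[:2]
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append(img[top:top + patch, left:left + patch])
    return patches

# With these settings, a single 256 x 256 image yields a 3 x 3 grid
# of patches, i.e. 9 training samples instead of 1.
img = np.zeros((256, 256, 3), dtype=np.uint8)
print(len(slice_patches(img)))  # 9
```

With a stride smaller than the patch size, the patch count grows roughly quadratically with image size, which is how a fixed dataset like SIDD can supply many more finetuning samples.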