TCL-AILab / Abandon_Bayer-Filter_See_in_the_Dark

Source code for CVPR2022 paper "Abandoning the Bayer-Filter to See in the Dark"

Is it fair to compare with previous work when using less testing data? #4

Closed lzg1988 closed 1 year ago

lzg1988 commented 1 year ago

I tested your pretrained model SID_weights_690000. When testing only on the 0.1s short-exposure data, the PSNR and SSIM are 29.69 and 0.7962, better than the metrics in your paper (29.65, 0.797). However, previous SOTA methods (SID and DID) use the whole test set, including the 0.1s, 0.04s, and 0.033s short-exposure data. So, we tested your model on the same test data as SID, and the metrics are much worse: a PSNR of 23.9432 and an SSIM of 0.6831.

Maybe it is not fair to compare methods using different testing data. Could you provide a more solid result?
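For reference, the discrepancy above comes down to which subset of the SID test list the metrics are averaged over. A minimal sketch of that evaluation, assuming hypothetical file names (in SID the exposure time is encoded in the file name, e.g. "_0.1s") and a plain per-image PSNR averaged over the selected subset:

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    """PSNR in dB between a reference image and an estimate, both in [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Hypothetical test list: (filename, exposure_seconds) pairs.
test_set = [
    ("10003_00_0.1s.ARW", 0.1),
    ("10006_00_0.04s.ARW", 0.04),
    ("10011_00_0.033s.ARW", 0.033),
]

# Evaluating on the 0.1s subset only vs. the whole test set:
subset = [name for name, t in test_set if t == 0.1]
full = [name for name, t in test_set]
print(f"{len(subset)} image(s) in the 0.1s subset, {len(full)} in the full set")

# Sanity check of the metric itself on synthetic data: a uniform error of 0.1
# on a [0, 1] image gives MSE = 0.01, i.e. PSNR = 10 * log10(1 / 0.01) = 20 dB.
print(f"PSNR of uniform-0.1 error: {psnr(np.zeros((4, 4)), np.full((4, 4), 0.1)):.1f} dB")
```

Averaging the same per-image PSNR over `subset` versus `full` yields different headline numbers, which is the comparison-fairness point raised above.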

xingbod commented 1 year ago

Applying the model to SID directly is suboptimal, as no gray RAW data is available.

lzg1988 commented 1 year ago

I used your SID pre-trained model to test on the SID data. Of course, when testing on SID data, gray RAW is unnecessary.

However, the main issue is that Table 2 of your paper states:

We also train our model on the modified SID dataset to further validate our method for a fair comparison. The performance results are shown in the SID column in Table 2. As the results suggest, our method also outperforms all its counterparts. Specifically, our method can achieve a PSNR of 29.65dB, which is around 0.1dB higher than LDC, while the SSIM can achieve similar performance. Other methods including SID, DID, SGN, and RED can only achieve a PSNR around 28dB.

I tested your pretrained model with the same test-data setting as the previous SOTA (SID), and the metrics are much worse: a PSNR of 23.9432 and an SSIM of 0.6831. Is your work reproducible?

lzg1988 commented 1 year ago

@xingbod