TCL-AILab / Abandon_Bayer-Filter_See_in_the_Dark

Source code for CVPR2022 paper "Abandoning the Bayer-Filter to See in the Dark"

Is it fair to compare with previous work when using less testing data? #6

Open lzg1988 opened 1 year ago

lzg1988 commented 1 year ago

from https://github.com/TCL-AILab/Abandon_Bayer-Filter_See_in_the_Dark/issues/4

We want to figure out whether your work uses less testing and training data than the previous SOTA methods on the SID dataset.

I tested your pretrained model SID_weights_690000. When testing only on the 0.1s short-exposure data, the PSNR and SSIM are 29.69 and 0.7962, better than the metrics in your paper (29.65, 0.797). However, the previous SOTA methods (SID and DID) use the whole test set, including the 0.1s, 0.04s, and 0.033s short-exposure data. When we test your model on the same test data as SID, the metrics are much worse: the PSNR and SSIM are 23.9432 and 0.6831.

Maybe it is not fair to compare methods while using less testing data.
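To make the two evaluation protocols concrete, here is a minimal sketch of how one might select the full SID Sony test set versus only the 0.1s subset, and compute PSNR over each. The filename pattern (e.g. `10003_00_0.1s.ARW`) follows the public SID dataset convention, but the exact file list and directory layout here are illustrative assumptions; the PSNR formula itself is standard.

```python
import math
import re

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length float image buffers."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)
    if mse == 0:
        return float("inf")
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

def exposure_of(filename):
    """Parse the short-exposure time in seconds from an SID-style filename."""
    m = re.search(r"_(\d+(?:\.\d+)?)s\.", filename)
    return float(m.group(1)) if m else None

# Hypothetical test list; real SID Sony test files follow this naming scheme.
test_files = ["10003_00_0.1s.ARW", "10006_00_0.04s.ARW", "10011_00_0.033s.ARW"]

# Full protocol (SID/DID): keep ALL short exposures (0.1s, 0.04s, 0.033s).
full_set = [f for f in test_files if exposure_of(f) is not None]
# Restricted protocol: only the 0.1s exposures.
subset_01 = [f for f in test_files if exposure_of(f) == 0.1]
```

Averaging `psnr` over `full_set` versus `subset_01` reproduces the gap being discussed: the harder 0.04s and 0.033s frames pull the full-set average down.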

However, the main issue is this passage in Table 2 of your paper:

We also train our model on the modified SID dataset to further validate our method for a fair comparison. The performance results are shown in the SID column in Table 2. As the results suggest, our method also outperforms all its counterparts. Specifically, our method can achieve a PSNR of 29.65dB, which is around 0.1dB higher than LDC, while the SSIM can achieve similar performance. Other methods including SID, DID, SGN, and RED can only achieve a PSNR around 28dB.


lzg1988 commented 1 year ago

More training and testing data should be used. Perhaps extra experiments are required.

Cynicarlos commented 7 months ago

I tested on the whole SID Sony test set with the provided pretrained model 'weights_690000.pth', but the result is not as good as the one obtained on part of the test set. Below are the PSNR and SSIM I got between the pre_rgb and the gt_rgb. For reference only. ABF_Sony