dectrfov / Wavelet-U-net-Dehazing

WAVELET U-NET AND THE CHROMATIC ADAPTATION TRANSFORM FOR SINGLE IMAGE DEHAZING - ICIP 2019

Test results on SOTS differ significantly from the paper's results #1

Closed: lccssr closed this issue 3 years ago

lccssr commented 4 years ago

Hello! First of all, thank you very much for open-sourcing the code. My question is: I evaluated the outdoor part of SOTS with the provided pkl file and computed the PSNR, but the result is only around 17 dB. Could you tell me where I went wrong, or whether there is a problem with the dataset I am using? Looking forward to your reply, many thanks!

dectrfov commented 4 years ago

Thanks for your interest. How many images did you evaluate? In my experimental setup, I randomly select 1400 clear images and the corresponding hazy images for training and use the remaining images for testing. P.S. Please comment in English so that other people can read it. Thanks.

lccssr commented 4 years ago

> Thanks for your interest. How many images did you evaluate? In my experimental setup, I randomly select 1400 clear images and the corresponding hazy images for training and use the remaining images for testing. P.S. Please comment in English so that other people can read it. Thanks.

Thanks for your reply. I used the SOTS dataset for testing; it contains 500 images. My question is about the PSNR calculation in train.py. At line 113, the per-batch loss is used: `loss = criterion(dehaze_image, ori_image)` and `psnr = 10*math.log10(1.0/loss.item())`. But at line 95, the accumulated value is used: `Loss += loss.item()`. Does the PSNR at line 113 only consider a single batch? Using your code as-is I got 23.9 dB for testing, but when I changed `loss` to `Loss`, I got only 17.3 dB on the same testing dataset.
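A minimal runnable sketch of the two quantities being compared in this comment, assuming the loop structure described above; the variable names `loss` and `Loss` follow the issue text and the random tensors stand in for real batches, so this is not the actual train.py:

```python
import math
import torch

# Minimal stand-ins for a few evaluation batches (random data in [0, 1]).
criterion = torch.nn.MSELoss()  # mean L2 loss over a batch
batches = [(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)) for _ in range(5)]

Loss = 0.0
for dehaze_image, ori_image in batches:
    loss = criterion(dehaze_image, ori_image)  # mean L2 loss of this batch only
    Loss += loss.item()                        # running *sum* over all batches

# As quoted from train.py line 113: PSNR computed from the last batch's loss.
psnr_last_batch = 10 * math.log10(1.0 / loss.item())

# Substituting the accumulated Loss plugs a sum over batches into the formula,
# so it is not a valid PSNR unless it is first divided by the number of batches.
psnr_from_sum = 10 * math.log10(1.0 / Loss)

print(psnr_last_batch, psnr_from_sum)
```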

dectrfov commented 4 years ago

In train.py, line 113, `loss` is the mean L2 loss of the batch, so the reported value is `psnr = 10*math.log10(1.0/(average loss))`, and it corresponds to the final batch only. However, the average PSNR over all images is `average(10*math.log10(1.0/loss_i))`, computed per image. That is why, in demo.py, I use a for loop to compute the PSNR of each image and then average them. I believe you can reproduce the results by setting batch_size = 1 and averaging all per-image PSNRs during evaluation.
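A minimal sketch of the distinction described in this reply, assuming MSE is computed per image at batch_size = 1 with random tensors standing in for real data; `per_image_mse` and `pairs` are hypothetical names, not taken from demo.py:

```python
import math
import torch

# Random stand-ins for dehazed / ground-truth image pairs at batch_size = 1.
criterion = torch.nn.MSELoss()
pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)) for _ in range(8)]
per_image_mse = [criterion(x, y).item() for x, y in pairs]

# PSNR of the averaged loss: one log applied to the mean MSE.
psnr_of_mean = 10 * math.log10(1.0 / (sum(per_image_mse) / len(per_image_mse)))

# Average of per-image PSNRs: one log per image, then the mean
# (the demo.py-style loop described above).
mean_of_psnr = sum(10 * math.log10(1.0 / m) for m in per_image_mse) / len(per_image_mse)

# Since -log10 is convex, mean_of_psnr >= psnr_of_mean in general (Jensen's
# inequality), so the two numbers are not interchangeable.
print(psnr_of_mean, mean_of_psnr)
```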

lccssr commented 4 years ago

> In train.py, line 113, `loss` is the mean L2 loss of the batch, so the reported value is `psnr = 10*math.log10(1.0/(average loss))`, and it corresponds to the final batch only. However, the average PSNR over all images is `average(10*math.log10(1.0/loss_i))`, computed per image. That is why, in demo.py, I use a for loop to compute the PSNR of each image and then average them. I believe you can reproduce the results by setting batch_size = 1 and averaging all per-image PSNRs during evaluation.

I ran an experiment: I used demo.py on the SOTS dataset to produce several processed images, then calculated each image's PSNR and averaged the results. The results show that I still can't get a good PSNR: some images reach 24 dB, while others only reach 13 dB. The final average is 17.1 dB.