Algolzw / image-restoration-sde

Image Restoration with Mean-Reverting Stochastic Differential Equations, ICML 2023. Winning solution of the NTIRE 2023 Image Shadow Removal Challenge.
https://algolzw.github.io/ir-sde/index.html
MIT License
524 stars · 38 forks

need dehazing pre-trained model #6

Open azlanqazi2012 opened 1 year ago

azlanqazi2012 commented 1 year ago

Hi,

Can you please share the pre-trained dehazing model? The existing latent-dehazing.pth is not generating the results you have shown in the documentation.

Thanks

Algolzw commented 1 year ago

Hi,

You can find all pre-trained weights of the Refusion model here.

Best.

azlanqazi2012 commented 1 year ago

Great, thanks, let me try it. Your help is much appreciated.

azlanqazi2012 commented 1 year ago

Hi,

When I run python test.py -opt=options/dehazing/test/nasde.yml using codes\config\latent-dehazing\test.py with latent-reffusion-dehazing.pth, I get a bluish picture, and when I use latent-dehazing.pth I get a totally different output, but not like the Non-Homogeneous dehazing results you presented. You can find the results here. I am new to this, so it would be great to get your assistance; the help is much appreciated.

Thanks

Algolzw commented 1 year ago

Hi,

The provided U-Net latent model is only trained for the NTIRE HR Non-Homogeneous dehazing dataset. If you want to use it for other haze datasets (such as SOTS indoor or NH-HAZE), you need to retrain the U-Net model for latent-refusion in this directory.

Or you can just download the images from the NTIRE challenge.

azlanqazi2012 commented 1 year ago

Thanks, but when I tried the U-Net I got multiple errors; each time I fixed one, another appeared. I somehow managed to run it, but couldn't get the expected results. The deraining algorithm works very well, but I can't manage to run the dehazing algorithm (unet-latent). Is the code up-to-date?

Thanks again, and sorry for being a pain.

Algolzw commented 1 year ago

Hi, are you meeting problems in training the U-Net or in testing the latent-Refusion model? I could write a separate paragraph showing how to train and test the latent-Refusion.

Algolzw commented 1 year ago

I have now updated the code for latent-Refusion; I hope it works!

azlanqazi2012 commented 1 year ago

Yes, the code works fine now, thanks, but it is not dehazing the image. Can you please share any image that you have tested, so that I may run it on my system? For unet-latent I am using the pretrained weight latent-dehazing.pth, but the input and output images look identical. Moreover, I noticed that the deraining model works great on many random images downloaded from Google as long as the image has high enough resolution; when I used the same image at a lower resolution, it did not derain.

My main target is to get dehaze model running.

Thanks, really appreciate your help

Algolzw commented 1 year ago

Great! But the unet-latent model is only used to compress the image, so the input and output are almost identical. If you want to test the dehazing results, you should go into the "latent-dehazing" directory and change the dataset path and pre-trained model paths.
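For reference, the paths that usually need changing in the test config look roughly like the following; the key names here are illustrative (BasicSR-style) and may differ from the actual options file in this repo:

```yaml
# Illustrative sketch only -- check the actual keys in the
# latent-dehazing options .yml file before copying this.
datasets:
  test:
    dataroot_GT: /path/to/dehazing/val/GT   # clean reference images (if available)
    dataroot_LQ: /path/to/dehazing/val/LQ   # hazy input images

path:
  pretrain_model_G: /path/to/latent-reffusion-dehazing.pth
```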

Here are some validation images you could have a try.

Best.

Algolzw commented 1 year ago

Moreover, note that the performance of our model depends heavily on the training dataset. We only trained the model on the Rain100H dataset, so it makes sense that it didn't work very well on lower-resolution rainy images. But you can easily retrain the model on your own dataset to get better performance.

azlanqazi2012 commented 1 year ago

Thanks. When I used latent-dehazing, the output is a purple-colored image which does not look like the input image; please check here.

Algolzw commented 1 year ago

Hi, you can test images from the HR dehazing dataset: https://codalab.lisn.upsaclay.fr/my/datasets/download/14df5793-f1c2-4f32-aaa7-d60b7d6dd6be

Algolzw commented 1 year ago

And if you want to test the indoor haze images, I can also provide another IR-SDE code and pretrained model for indoor dehazing.

azlanqazi2012 commented 1 year ago

Wow, works great on these images, thanks.

Just a suggestion: since your code targets Linux, I had to make some minor adjustments to make it work on Windows. If you update lines 10, 11, and 67 of the options.py file and lines 40 and 41 of the test.py file, it will work on both Linux and Windows; you can add a condition to check the OS and use commands accordingly.
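For illustration, the usual culprit for Linux-only behavior is a hard-coded shell command such as `rm -rf`; assuming that is what those lines do (I have not checked the exact code), a portable sketch using only the standard library would be:

```python
import os
import shutil

def remove_dir(path):
    # Portable replacement for os.system("rm -rf " + path):
    # shutil.rmtree works the same way on Linux and Windows,
    # so no OS check is needed at all.
    if os.path.exists(path):
        shutil.rmtree(path)
```

Using `shutil.rmtree` avoids branching on the OS entirely, which is simpler than selecting between `rm -rf` and `rmdir /s /q`.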

That's just a suggestion, I hope you won't mind.

Thanks, you have been a great help.

azlanqazi2012 commented 1 year ago

> And if you want to test the indoor haze images, I can also provide another IR-SDE code and pretrained model for indoor dehazing.

yes please, that will be great, much appreciated

Algolzw commented 1 year ago

Thank you for your suggestions! Since I don't have a Windows computer to test the code, I would be happy to add your comments to the readme file.

I will also provide the code for indoor dehazing later.

Sttrr commented 1 year ago

Hi, did you train the dehazing task using only 40 hazy/haze-free pairs? That is incredible. I would like to reproduce this experiment; can you provide a link to download the training data? Thanks!

Sttrr commented 1 year ago

> Hi, you can test images from the HR dehazing dataset: https://codalab.lisn.upsaclay.fr/my/datasets/download/14df5793-f1c2-4f32-aaa7-d60b7d6dd6be

Hi, did you train the dehazing task using only 40 hazy/haze-free pairs? That is incredible. I would like to reproduce this experiment; can you provide a link to download the training data? Thanks!

Algolzw commented 1 year ago

Sure. Here is the challenge website, from which you can download the dehazing dataset (but you need to register for the challenge first): https://codalab.lisn.upsaclay.fr/competitions/10216.

Sttrr commented 1 year ago

> Sure. Here is the challenge website in which you can download the dehazing dataset (but you need to register the challenge first): https://codalab.lisn.upsaclay.fr/competitions/10216.

I have applied to participate in the challenge but have not been granted permission. Maybe the challenge is over and no one is in charge of it anymore. Can you provide the training set, if possible?

Algolzw commented 1 year ago

Ok, I guess the dataset will be released later. But I can send you the training and testing data by email.

Sttrr commented 1 year ago

> Ok, I guess the dataset would be released later. But I can send you the training and testing data through email.

If you can provide the data that would be great, here's my email: bit_lb@163.com

Sttrr commented 1 year ago

> Ok, I guess the dataset would be released later. But I can send you the training and testing data through email.

Another question: in the validation set of Non-Homogeneous Dehazing there is no haze-free data, so how did you set up the validation dataset? (screenshot attached)

Algolzw commented 1 year ago

> Ok, I guess the dataset would be released later. But I can send you the training and testing data through email.
>
> Another question: in the validation set of Non-Homogeneous Dehazing there is no haze-free data, so how did you set up the validation dataset?

Hi, we simply held out 5 image pairs from the training data as the validation dataset. So we actually used 35 and 5 pairs as the training and validation sets, respectively.
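A minimal sketch of such a split, assuming the 40 pairs are simply a list of filename pairs (the actual selection used in the paper may differ):

```python
import random

def split_pairs(pairs, n_val=5, seed=0):
    # Shuffle deterministically, then hold out n_val pairs
    # for validation and keep the rest for training.
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    return pairs[n_val:], pairs[:n_val]
```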

Sttrr commented 1 year ago

> Ok, I guess the dataset would be released later. But I can send you the training and testing data through email.
>
> Another question: in the validation set of Non-Homogeneous Dehazing there is no haze-free data, so how did you set up the validation dataset?
>
> Hi, we just divided 5 image pairs from the training data as the validation dataset. Thus actually we use 35 and 5 pairs as training and validation datasets, respectively.

When I trained the unet-latent model on the dehazing dataset, I found that training was very slow and GPU utilization was very low because a lot of time was spent loading data. How long did it take you to reach about 300,000 iterations? How did you solve this problem during training?

Algolzw commented 1 year ago

Hi, maybe you can pre-crop the images into a training dataset of smaller crops. Example code can be found in this script.
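The linked script is the authoritative version; as a rough illustration, pre-cropping amounts to tiling each large image into fixed-size patches, e.g.:

```python
def patch_boxes(width, height, patch=512, stride=512):
    # Return (left, top, right, bottom) crop boxes that tile an image
    # of the given size. Pass each box to PIL's Image.crop() and save
    # the result as its own file, so the data loader only ever reads
    # small images instead of decoding huge ones every iteration.
    boxes = []
    for top in range(0, height - patch + 1, stride):
        for left in range(0, width - patch + 1, stride):
            boxes.append((left, top, left + patch, top + patch))
    return boxes
```

Cropping once ahead of time trades disk space for data-loading speed, which is usually what fixes the low GPU utilization described above.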

yeyun122 commented 12 months ago

> Hi, maybe you can pre-crop the images to a cropped training dataset with smaller image sizes. An example code can be found in this script.

Hello, did the author pre-crop the images during training?

Algolzw commented 12 months ago

> Hi, maybe you can pre-crop the images to a cropped training dataset with smaller image sizes. An example code can be found in this script.
>
> Hello, did the author pre-crop the image during training?

Yes.

zhenghaoyes commented 10 months ago

@azlanqazi2012 Sorry to bother you; could you tell me what changes need to be made to the code on Windows? Thanks a lot.

Bulinglife commented 8 months ago

Sorry to bother you, but could you tell me how to solve this problem? (image attached)

Algolzw commented 8 months ago

@Bulinglife You need to make sure the GT images and LR images have the same number of files; check whether the dataset paths in the config file are correct.
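A quick sanity check along those lines (an illustrative helper, not part of the repo):

```python
import os

def check_pair_counts(gt_dir, lq_dir):
    # The paired dataset loader expects one degraded (LR/LQ) image per
    # GT image, so the two folders must contain the same number of files.
    gt = sorted(os.listdir(gt_dir))
    lq = sorted(os.listdir(lq_dir))
    assert len(gt) == len(lq), (
        f"GT has {len(gt)} images but LQ has {len(lq)}; "
        "check the dataset paths in your config file."
    )
    return len(gt)
```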

qwer4296chen commented 7 months ago

Sorry to bother you. I trained on my own underwater dataset with the train.py under latent-dehazing, and the test results look like the attached image. How should I proceed so that the model outputs normal images?

Algolzw commented 7 months ago

If you want to train latent-dehazing on your own dataset, you first need to pretrain a U-Net; it is in the unet-latent folder.

qwer4296chen commented 7 months ago

Thank you, I am training the U-Net as you suggested.

ZHB2333 commented 1 month ago

> Wow, works great on these images, thanks.
>
> Just a suggestion: since your code targets Linux, I had to make some minor adjustments to make it work on Windows. If you update lines 10, 11, and 67 of the options.py file and lines 40 and 41 of the test.py file, it will work on both Linux and Windows; you can add a condition to check the OS and use commands accordingly.
>
> That's just a suggestion, I hope you won't mind.
>
> Thanks, you have been a great help.

Hello, even though I used the HR dehazing dataset, the test result is still purple. May I ask how you solved it?