liuh127 / NTIRE-2021-Dehazing-DWGAN

Official PyTorch implementation of DW-GAN, 1st place solution of NTIRE 2021 NonHomogeneous Dehazing Challenge (CVPR Workshop 2021).
MIT License

file train #2

Closed · butterflyro closed this issue 2 years ago

butterflyro commented 3 years ago

Can you please provide the training file? I am also working in this direction. Thanks if you can help.

liuh127 commented 3 years ago

> Can you please provide the training file? I am also working in this direction. Thanks if you can help.

Hi,

We currently don't have plans to provide a training script. You can refer to our work Two-branch dehazing, which provides a training script.

butterflyro commented 3 years ago

I have re-coded the training part, but for the test part I am wondering whether you augment the data at prediction time, or do anything else special during testing.

butterflyro commented 3 years ago

My English is not very good; if you don't understand, I can clarify.

liuh127 commented 3 years ago

> I have re-coded the training part, but for the test part I am wondering whether you augment the data at prediction time, or do anything else special during testing.

We don't conduct any data augmentation during the testing stage. You can refer to this repo; it gives you a detailed testing script.

butterflyro commented 3 years ago

I mean you have predictions on other datasets, but training uses random 256x256 crops, so I don't know whether prediction only needs the input to fit the network, or needs further processing.

butterflyro commented 3 years ago

[image attached]

liuh127 commented 3 years ago

> I mean you have predictions on other datasets, but training uses random 256x256 crops, so I don't know whether prediction only needs the input to fit the network, or needs further processing.

You just need to make sure the testing image has a size divisible by 64. No other augmentation is needed.
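For reference, a minimal sketch (not the authors' script) of padding a test image so its height and width are divisible by 64, then cropping the network output back to the original resolution; `pad_to_multiple` and the reflection-padding choice are assumptions for illustration:

```python
# Hypothetical helper: pad a test image so H and W are divisible by 64,
# then crop the network output back to the original size.
import torch
import torch.nn.functional as F

def pad_to_multiple(x: torch.Tensor, multiple: int = 64):
    """x: (N, C, H, W) tensor. Returns the padded tensor and the original (H, W)."""
    _, _, h, w = x.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # reflection padding avoids hard black borders at the image edge
    x_padded = F.pad(x, (0, pad_w, 0, pad_h), mode="reflect")
    return x_padded, (h, w)

# usage (net is the dehazing generator):
# x_pad, (h, w) = pad_to_multiple(hazy)
# out = net(x_pad)[:, :, :h, :w]   # crop back to the original resolution
```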

butterflyro commented 3 years ago

I tested with the weights you provided, and the results were not the same as in the paper.

liuh127 commented 3 years ago

> I tested with the weights you provided, and the results were not the same as in the paper.

You should train our network on the training split of the dataset you want to test on. For example, to test on ITS, you should train our network on RESIDE indoor. There is no guarantee that a single pre-trained model will work on every dehazing dataset. By the way, the provided model was only used to produce the final results for the testing stage of NTIRE 2021. If you want the results as listed in our table for NTIRE20 and NTIRE21, you should re-train our network following the data splits of NTIRE20 and NTIRE21 described in our paper.

butterflyro commented 3 years ago

Sorry for the late reply. Did you mean the NTIRE2021 result is trained only on the data of NTIRE2020 and NTIRE2021? And let me ask one more question: what is your method of initializing the weights?

liuh127 commented 3 years ago

> Sorry for the late reply. Did you mean the NTIRE2021 result is trained only on the data of NTIRE2020 and NTIRE2021? And let me ask one more question: what is your method of initializing the weights?

There are two accuracies in our paper that relate to "NTIRE2021". One is the number we report in Table 1 and the other is reported in Section 4.5. The accuracy in Table 1 is obtained with the model trained ONLY on NTIRE2021, while the number in Section 4.5 comes from the model trained on both NTIRE20 and NTIRE21.

You can see how we initialize the model in our code.
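For readers who cannot check the code immediately, a common PyTorch pattern for this (a sketch only, not necessarily the exact scheme used in this repo) is to apply an init function over the modules:

```python
import torch.nn as nn

def init_weights(m):
    # Kaiming init for conv layers, constant init for norm layers;
    # this is a common default, not necessarily what the repo uses.
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d)):
        if m.weight is not None:
            nn.init.ones_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# net.apply(init_weights)
```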

butterflyro commented 3 years ago

Yeah, thank you, I have seen the weight initialization. About the loss function L_total = sum(L_i): for Loss_adv, according to my understanding of GANs, there will be two separate optimizers. First the input image is fed to the generator, then passed through the discriminator; its output is used to update the weights of D with loss_adv, and then the generator is updated with the remaining three loss functions (smooth L1, perceptual, MS-SSIM). Is my understanding correct?

I'm only a 3rd-year student, so there is a lot I don't know yet; I look forward to your advice.

liuh127 commented 3 years ago

> Yeah, thank you, I have seen the weight initialization. About the loss function L_total = sum(L_i): for Loss_adv, according to my understanding of GANs, there will be two separate optimizers. First the input image is fed to the generator, then passed through the discriminator; its output is used to update the weights of D with loss_adv, and then the generator is updated with the remaining three loss functions (smooth L1, perceptual, MS-SSIM). Is my understanding correct?
>
> I'm only a 3rd-year student, so there is a lot I don't know yet; I look forward to your advice.

Yes.
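A hypothetical training step illustrating the two-optimizer setup confirmed above; the loss weights, the BCE-with-logits adversarial formulation, and the `perceptual_loss`/`msssim_loss` callables are placeholders, not the paper's exact values (some implementations also add an adversarial term to the generator loss, consistent with L_total = sum(L_i)):

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, hazy, clear,
               perceptual_loss, msssim_loss, adv_weight=0.005):
    # --- 1. update the discriminator with the adversarial loss ---
    opt_D.zero_grad()
    with torch.no_grad():
        fake = G(hazy)                      # no generator gradients for the D step
    d_real, d_fake = D(clear), D(fake)
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_D.backward()
    opt_D.step()

    # --- 2. update the generator with smooth L1 + perceptual + MS-SSIM ---
    # (plus, in some implementations, a weighted adversarial term)
    opt_G.zero_grad()
    fake = G(hazy)
    d_out = D(fake)
    loss_G = (F.smooth_l1_loss(fake, clear)
              + perceptual_loss(fake, clear)
              + msssim_loss(fake, clear)
              + adv_weight * F.binary_cross_entropy_with_logits(
                    d_out, torch.ones_like(d_out)))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```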

butterflyro commented 3 years ago

Thank you so much, I'm very grateful to you.

butterflyro commented 3 years ago

Can I ask you about the metrics? I used PSNR, but the result is quite weird, and the SSIM from skimage is also strange. First, when testing with the weights you provided on images 21 to 25:

[image attached]

Second, during training the PSNR is usually very high: [image attached]

butterflyro commented 3 years ago

I found my metric problem, but I'm confused: should the two inputs to the metric functions be the restored image and its label in [0, 255], or the generator output in [-1, 1] rescaled to roughly [0, 1] with the label divided by 255?
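Either convention works as long as both arrays are on the same scale and `data_range` matches that scale. A minimal sketch (assuming scikit-image; the `evaluate` helper and the rescaling choice are assumptions, not the repo's code) computing both metrics in [0, 1]:

```python
# The key point: prediction, ground truth, and data_range must use the same scale.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt_uint8):
    """pred: network output as a float HxWx3 array; gt_uint8: ground truth in [0, 255]."""
    # map the prediction into [0, 1]; if the network outputs [-1, 1],
    # rescale with (pred + 1) / 2 before clipping
    pred = np.clip(pred, 0.0, 1.0)
    gt = gt_uint8.astype(np.float64) / 255.0
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    # older scikit-image versions use multichannel=True instead of channel_axis
    ssim = structural_similarity(gt, pred, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```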