cchen156 / Learning-to-See-in-the-Dark

Learning to See in the Dark. CVPR 2018
http://cchen156.web.engr.illinois.edu/SID.html
MIT License

Result Image (Sony Model - The green is scattered) #42

Open ghost0710 opened 6 years ago

ghost0710 commented 6 years ago

I used your Sony pretrained model and ran test_Sony.py. I wonder why green dots are scattered across the output image. [image attached]

cchen156 commented 6 years ago

Our results are not artifact-free, as stated in the paper. If you train a model on the outdoor images only, these artifacts will disappear. The indoor images are probably too different (e.g., light sources). When one network tries to process both indoor and outdoor images, the performance decreases.

ghost0710 commented 6 years ago

@cchen156 How did you find that the artifacts disappear? Do you have a pretrained model for outdoor images only?

cchen156 commented 6 years ago

Yes. I trained a model for outdoor images only. The results are better.

ghost0710 commented 6 years ago

Please share your outdoor-only model.

cchen156 commented 6 years ago

It is easy to train using the provided code. The first part of the data is outdoor images and the second part is indoor images. I did not keep the outdoor model. You can use images 00000 to 00183 to train it.
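
For reference, a minimal sketch of restricting training to that outdoor subset (scene IDs 00000 to 00183, per the comment above). The directory path and the convention that the first five characters of the filename are the scene ID are assumptions about the released dataset layout used by train_Sony.py:

```python
# Sketch: keep only the outdoor scenes (IDs 00000-00183) when building the
# training ID list. Dataset path and filename convention are assumptions.
import glob
import os

gt_dir = './dataset/Sony/long/'

train_fns = glob.glob(gt_dir + '0*.ARW')
train_ids = [int(os.path.basename(fn)[0:5]) for fn in train_fns]

# Outdoor subset mentioned in the thread.
outdoor_ids = [tid for tid in train_ids if tid <= 183]
print('training on %d outdoor scenes' % len(outdoor_ids))
# Pass outdoor_ids to the training loop in place of train_ids.
```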

ghost0710 commented 6 years ago

[image attached]

There is a phenomenon where parts of the Sony results look crushed, as in the image above. What is the reason? It is especially noticeable in very dark environments.

oneTaken commented 6 years ago

@ghost0710 I tried many combinations with different models and different hyperparameters, and this image is always predicted like this. In my opinion, if an input area contains very little information (the environment is very dark), the model has to guess what is there, so some areas come out blurry.
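
To make that concrete, here is a minimal sketch of checking how much real signal a very dark short-exposure frame actually carries before amplification. The black level (512) and 14-bit white level (16383) are assumed Sony values as used in this repo's raw packing, and the file path is a placeholder:

```python
# Sketch: measure the normalized signal in a short-exposure raw file.
# Black level 512 and white level 16383 are assumptions for the Sony sensor.
import numpy as np
import rawpy

raw = rawpy.imread('./dataset/Sony/short/00001_00_0.1s.ARW')  # placeholder path
im = raw.raw_image_visible.astype(np.float32)
im = np.maximum(im - 512, 0) / (16383 - 512)  # normalize above the black level

# Regions whose mean is only a few counts above black are dominated by noise;
# after a large amplification ratio the network has little real signal to
# reconstruct, which is where the blurry / "crushed" areas tend to appear.
print('mean normalized signal: %.5f' % im.mean())
print('fraction of pixels at or below black level: %.3f' % (im == 0).mean())
```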