cchen156 / Learning-to-See-in-the-Dark

Learning to See in the Dark. CVPR 2018
http://cchen156.web.engr.illinois.edu/SID.html
MIT License

Comparison between output of SID (See in the Dark) and Photoshop #14

Open shockjiang opened 6 years ago

shockjiang commented 6 years ago

I tried to compare the output generated by SID (O-sid) and Photoshop (O-ps, with all-auto conversion from DNG to PNG). From what I saw: 1/ when the light is extremely low, O-sid is better than O-ps; 2/ but when the light is low but not extremely so, O-ps is better than O-sid.

Also, O-sid loses quite a bit of texture compared to O-ps. I wonder if this is expected?

cchen156 commented 6 years ago

For 2/, can you let me know which image you used? I can take a look.

shockjiang commented 6 years ago

Thank you for your feedback. I attached the DNG file together with three PNG files generated by SID, PS, and PS+blur. The exposure time of the DNG is 0.0333 s (1/30 s). File link: https://drive.google.com/file/d/1JWngJn9bA9b4PRiR2nMePwfsNPPjtpA7/view?usp=sharing

cchen156 commented 6 years ago

I see. You are testing our model on your own data. This is very possible, as our model is not expected to work on every camera sensor; in this case we only trained it on the Sony set. I do not know which camera you used to capture this image. You may need data from your camera to finetune or retrain the model. This seems to be relevant: #7
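
As a rough illustration of what such data looks like: the Sony training and test scripts pair short (dark) and long (bright reference) exposures of the same scene by filename and read the exposure time out of the name, so finetuning data would normally follow the same layout. The sketch below assumes the `<scene>_<seq>_<exposure>s.ARW` naming convention; the directory paths are just examples:

```python
import glob
import os

# Example layout: short (dark) and long (bright reference) exposures of the
# same scene share a 5-digit scene id, e.g.
#   ./dataset/Sony/short/00001_00_0.1s.ARW
#   ./dataset/Sony/long/00001_00_10s.ARW
input_dir = './dataset/Sony/short/'   # example path
gt_dir = './dataset/Sony/long/'       # example path

scene_ids = sorted({int(os.path.basename(f)[0:5])
                    for f in glob.glob(gt_dir + '0*.ARW')})

for scene_id in scene_ids:
    in_files = glob.glob(input_dir + '%05d_00*.ARW' % scene_id)
    gt_files = glob.glob(gt_dir + '%05d_00*.ARW' % scene_id)
    if not in_files or not gt_files:
        continue
    # Exposure times are encoded in the filenames, e.g. '..._0.1s.ARW' -> 0.1
    # (the slice below assumes the naming convention above)
    in_exposure = float(os.path.basename(in_files[0])[9:-5])
    gt_exposure = float(os.path.basename(gt_files[0])[9:-5])
    print(scene_id, 'amplification ratio:', min(gt_exposure / in_exposure, 300))
```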

shockjiang commented 6 years ago

That's interesting. The sensor we used is the Sony IMX 317, which is quite popular for cameras. As we can see, the output PNG brightens the whole photo at the cost of losing textures and mixing colors. It looks as if the deep network applies a blur filter (not even an edge-preserving one) to the photo.

For your reference: https://www.sony-semicon.co.jp/products_en/IS/sensor2/products/imx377.html

yifanjiang19 commented 6 years ago

@shockjiang Hi, I saw your result, and I also tried the pre-trained model provided by the author. I think your result is reasonably good. Even when the model is trained and tested on the author's own dataset, the results are not good for every photo, especially for colorful photos (the author also mentions this in the paper). But the photo you provided is colorful and the result is good. I think there are two possible explanations: 1) the resolution of the photo you provided is low, so it is hard to distinguish the textures and not obvious when some part is bad; 2) the SNR of the input image is high. Could you please provide the scaled input image, or help explain this?

cchen156 commented 6 years ago

Another reason may be that the input image is too good, i.e., not dark at all. The exposure time is not the only factor that determines whether an image is dark; if you take the image in daytime with enough light, it will not be dark at all. What amplification ratio did you use for this result? If the input is already good, existing methods/software handle it without any problem, and such a case is far from our training range.
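
For reference, the amplification ratio in the Sony test pipeline of this repo is roughly the ratio of the ground-truth exposure time to the input exposure time, capped at 300, and it is multiplied directly into the packed raw input. Below is a minimal sketch of that preprocessing; the black level (512) and white level (16383) are the Sony values, and the example exposure times and file path are placeholders:

```python
import numpy as np
import rawpy

def pack_raw(raw):
    # Pack the Bayer RGGB mosaic into 4 half-resolution channels and
    # normalize to [0, 1] using the Sony black level (512) and
    # white level (16383); other sensors will likely need different values.
    im = raw.raw_image_visible.astype(np.float32)
    im = np.maximum(im - 512, 0) / (16383 - 512)
    im = np.expand_dims(im, axis=2)
    H, W = im.shape[0], im.shape[1]
    return np.concatenate((im[0:H:2, 0:W:2, :],
                           im[0:H:2, 1:W:2, :],
                           im[1:H:2, 1:W:2, :],
                           im[1:H:2, 0:W:2, :]), axis=2)

in_exposure = 0.1   # exposure time of the dark input (seconds, example value)
gt_exposure = 10.0  # exposure time of the bright reference (seconds, example value)
ratio = min(gt_exposure / in_exposure, 300)  # amplification ratio, capped at 300

raw = rawpy.imread('input.dng')  # placeholder path
net_input = np.expand_dims(pack_raw(raw), axis=0) * ratio  # fed to the network
```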

shockjiang commented 6 years ago

@yueruchen The resolution of the photo is 3000x3000, which is not very low compared to the 4200x2800 photos provided by the author. This photo is noisy.

@yueruchen @cchen156 Here I also attach the JPG output generated together with this DNG. As you can see, the environment is quite dark. The ground-truth photo was taken at a 0.5 s exposure time; I attach it here as well.

[test-camera-jpg_0.0333s] test photo (above), exposure time: 1/30 sec

[gt-camera-jpg-0.5s] ground truth photo (above), exposure time: 1/2 sec

yifanjiang19 commented 6 years ago

@shockjiang I have tried the pre-trained model and the dataset provided by the author, and some results are not good: https://drive.google.com/open?id=18ilE6dmhXTpVD7ETjbK2_1ih78rxdePU I think the photo you provided the first time is reasonably good. Why did the output image become worse the second time?

shockjiang commented 6 years ago

Which two files are you referring to? @yueruchen

yifanjiang19 commented 6 years ago

@shockjiang You gave several images the first time; that output is noticeably lighter than the one from the second time. First time: https://drive.google.com/file/d/1wOZGKXVsp5WdqW-aeGeBS9XSIrwsceA_/view Second time: [image above]

changjiuy commented 6 years ago

@shockjiang @cchen156 Hi, have you solved the problem of losing textures and details?