cchen156 / Learning-to-See-in-the-Dark

Learning to See in the Dark. CVPR 2018
http://cchen156.web.engr.illinois.edu/SID.html
MIT License

Will the Sony model work for dual camera iPhone? #94

Open JasonVann opened 4 years ago

JasonVann commented 4 years ago

I don't have an iPhone 6s handy, but I have heard some people are getting good results with it. If I try the Sony model with raw images taken on a dual- or triple-camera iPhone (iPhone 8 or iPhone 11), will the results be roughly the same as with the iPhone 6s? Thanks!

gaseosaluz commented 4 years ago

When using the Sony model, I have had good/equivalent results when using iPhone 6s and Xs images. No real differences in the results.

l0stpenguin commented 4 years ago

@gaseosaluz I have tried with RAW images from my iPhone X and changed the black level, but got terrible results. Here are the details: https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/75#issuecomment-508178237

Do you have any idea what the issue might be?

gaseosaluz commented 4 years ago

@mevinDhun I looked at the details in #75 and I can't figure out off the top of my head what your problem could be. I have not worked on this in a while, but I still have my code. Maybe you can find a way to send me the RAW image, and I can try to find time to run it in my setup?

l0stpenguin commented 4 years ago

@gaseosaluz here is a link to the dng sample: https://drive.google.com/drive/folders/1toiGYKQ1WeCiqbXB4rH24-wsH3VvfBT3?usp=sharing

In my tests, I calculated the black level from the file rather than hardcoding it:

import numpy as np
import rawpy

raw = rawpy.imread('sample.dng')  # hypothetical local name for the DNG linked above
im = raw.raw_image_visible.astype(np.float32)
black_level = raw.black_level_per_channel[0]
im = np.maximum(im - black_level, 0) / (16383 - black_level)  # subtract the black level, then normalize
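
One caveat worth checking (an assumption on my part, not something verified in this thread): 16383 is the white level of the 14-bit Sony ARW files used by the repo, but an iPhone DNG may saturate at a different value, which rawpy exposes as metadata:

white_level = raw.white_level  # sensor saturation value from the DNG metadata
im = np.maximum(im - black_level, 0) / (white_level - black_level)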

If you can't find time to try it, please share your modified code instead; that could be helpful. Thanks.

gaseosaluz commented 4 years ago

@mevinDhun I will try to get to this as soon as I can; it may not be until the weekend. Unfortunately I don't have my code in a public repo, but if I need to share it I will see what I can do about it.

gaseosaluz commented 4 years ago

@mevinDhun I found a few minutes to run your image through my code. Unfortunately I did not get good results either. At this point I don't think that the problem is related to the black level. I will keep your picture handy and when I have some time I will try to run the code through a debugger and see if I find anything.

A quick question: when you took the underexposed image, did you make sure you held the camera steady (or had it on a tripod or something else holding it steady)? I ask because I noticed that my corrected image appeared to show signs of the camera being 'shaken' when the picture was taken. I have had problems with this type of image before. I don't know if this is the problem, but I thought I would mention it.

In any case, sorry I was not able to help. If I do figure something out I will post here.

l0stpenguin commented 4 years ago

@gaseosaluz That picture was taken handheld, so it might have some camera shake; let's assume it is a faulty picture. But I have tried with another one where I placed the phone against a wall to ensure minimal shake. Here is the DNG, taken with the Halide camera app: https://drive.google.com/open?id=1x0PrDS0fWtpmXgM4EgdPycbKG5hAgqdd

Running it through the model outputs this, which looks very bad: [attached model output image: 10004_00_0 05s_650]

At this point I'm very confused, since I could not get any decent results even though the paper states it should work on iPhone camera sensors.
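
For reference, the Sony pipeline in this repo packs the Bayer mosaic into four half-resolution channels and multiplies by an exposure ratio before inference. Below is a minimal sketch adapted for a DNG, assuming an RGGB pattern and a made-up ratio of 100 ('sample.dng' is a placeholder); a mismatched CFA layout is one plausible cause of garbage output:

import numpy as np
import rawpy

def pack_raw(raw):
    # Pack a Bayer RGGB mosaic into a 4-channel half-resolution image,
    # mirroring the preprocessing in the Sony training/test code.
    im = raw.raw_image_visible.astype(np.float32)
    black = raw.black_level_per_channel[0]
    im = np.maximum(im - black, 0) / (raw.white_level - black)
    im = np.expand_dims(im, axis=2)
    H, W = im.shape[0], im.shape[1]
    return np.concatenate((im[0:H:2, 0:W:2, :],   # R
                           im[0:H:2, 1:W:2, :],   # G
                           im[1:H:2, 1:W:2, :],   # B
                           im[1:H:2, 0:W:2, :]),  # G
                          axis=2)

raw = rawpy.imread('sample.dng')
print(raw.raw_pattern)  # verify the CFA layout is RGGB before packing
ratio = 100             # exposure amplification; depends on the capture
x = np.minimum(np.expand_dims(pack_raw(raw), axis=0) * ratio, 1.0)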

gaseosaluz commented 4 years ago

@mevinDhun Ok, I will try to run this new picture tonight. I am also puzzled, because I have gotten decent results from pictures from an iPhone 6s and an iPhone Xs. I took those pictures with the VSCO app, but I am not sure the results are app dependent. Again, if I have better news, I will post here.

littleqing0914 commented 4 years ago

> When using the Sony model, I have had good/equivalent results when using iPhone 6s and Xs images. No real differences in the results.

Can I test pictures taken by my camera or mobile phone directly? How should I do that?

gaseosaluz commented 4 years ago

I think it is theoretically possible to use the pictures from your phone. I have done some work on this for the iPhone, and this is what I did to test the idea (though not in an iPhone application):

1. Converted the TF model to a CoreML model. I was successful in this conversion.
2. Created a quick Swift program (following an Apple example) to load the CoreML model in Xcode. I did this to make sure the model would be callable from Swift and usable inside Xcode.
3. Tested the converted CoreML model in a Jupyter Notebook. This was probably the hardest part for me. Since I don't know Swift, I emulated in Python the image manipulations required by the original See in the Dark code; in particular, I needed the NumPy operations from the original repo to do the image slicing for the model, and to this day I do not know how to do that in Swift. Once I had the image sliced appropriately in Python (using the same steps as the original code), I fed it to the CoreML model using a Python API that Apple described in this WWDC presentation: https://developer.apple.com/videos/play/wwdc2018/719/. Unfortunately Apple pulled the Python library, and as far as I know it is no longer available.

Once I had a proof of concept, I stopped this work because I don't know enough Swift to replicate the NumPy functionality (and AFAIK there is no NumPy equivalent in Swift).
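
For anyone who wants to reproduce the Jupyter test with current tooling, here is a rough sketch of driving an already-converted model through coremltools; the model filename, input name, and shape are all assumptions, and prediction only runs on macOS:

import coremltools as ct
import numpy as np

model = ct.models.MLModel('SID_Sony.mlmodel')  # hypothetical converted model
x = np.random.rand(1, 4, 256, 256).astype(np.float32)  # stand-in for a packed input crop
out = model.predict({'input_image': x})  # the input name depends on the conversion
print(out.keys())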

I did not put this in a repository because I am not really 100% sure I did this right, and I am not sure anybody would find it useful.

I hope this helps.

Lalo Silva


l0stpenguin commented 4 years ago

@gaseosaluz It would be really useful if you could provide the CoreML model. I am an iOS developer and I would like to try to run it on mobile. I will probably have to use the Accelerate framework to replicate the NumPy matrix operations in Swift.

edchepen commented 4 years ago

That is a good idea. Maybe you can complete the work that I could not :-) … let me see how I can find a way to share the model. Just be aware that I only tested it with a couple of pictures and only in Python; there are likely to be other problems.


mouryareddy commented 4 years ago

Hi @gaseosaluz, can you post the black_level_per_channel and the maximum pixel value of your raw pictures? For the image posted by @l0stpenguin, black_level_per_channel is 528 and the max pixel value is 609, which is similar to mine, and I also got bad results.
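
For reference, those values can be read straight from the DNG with rawpy (a minimal sketch; 'sample.dng' is a placeholder name):

import rawpy

raw = rawpy.imread('sample.dng')
print(raw.black_level_per_channel)  # e.g. [528, 528, 528, 528]
print(raw.white_level)              # sensor saturation value
print(raw.raw_image_visible.max())  # brightest recorded pixel

If the maximum really is only ~80 counts above the black level, the capture itself contains very little usable signal, which could explain the poor output regardless of the model.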

vivek-varma commented 3 years ago

@l0stpenguin If you resolved the issue, can you share the code for removing the black level in Swift? I also want to port the model to iPhone.

vivek-varma commented 3 years ago

@gaseosaluz How did you test the CoreML model? Do you have a script for that?