guanyingc / DeepHDRVideo

HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset (ICCV 2021)
https://guanyingc.github.io/DeepHDRVideo

Flickering problem in 3-exposure-3stop video #15

Open xliuk opened 2 years ago

xliuk commented 2 years ago

Hi, I'm testing your code on my own datasets, captured with a Canon camera and preprocessed as required. In particular, I'm using sequences of three alternating exposures. With the configuration {EV-2, EV+0, EV+2, ...} your code works fine. However, with the configuration {EV-3, EV+0, EV+3, ...} I get 'periodic' flickering: if H_i is an HDR frame, it doesn't match H_{i+1} and H_{i+2} (flickering), but it does match H_{i+3}. Why do I get this flickering? I attach two HDR videos of a static scene to show this. Below I specify the exposure times (in seconds); aperture and ISO are constant in both (ISO 800, f/3.5). 1- First HDR video: {EV-2, EV+0, EV+2, ...}, exposure times {1/320, 1/80, 1/20} 2- Second HDR video: {EV-3, EV+0, EV+3, ...}, exposure times {1/640, 1/80, 1/10}

https://user-images.githubusercontent.com/114491838/194347916-ccb60538-4562-4dd9-84ec-333cf1938cd3.mov

https://user-images.githubusercontent.com/114491838/194347942-72cdfb11-d243-4c94-8f43-5bbce5860631.mov
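As a quick sanity check on the sequences above (a minimal sketch of my own, not code from the repo): a difference of n EV at fixed aperture and ISO corresponds to a factor of 2^n in exposure time, so the listed times should be consistent with the stated stops.

```python
from math import log2

# Illustrative check (hypothetical helper, not from the DeepHDRVideo code):
# the EV difference implied by two exposure times is log2 of their ratio.
def stops_between(t_short, t_long):
    return log2(t_long / t_short)

# 2-stop sequence {1/320, 1/80, 1/20}
print(stops_between(1/320, 1/80), stops_between(1/80, 1/20))   # 2.0 2.0

# 3-stop sequence {1/640, 1/80, 1/10}
print(stops_between(1/640, 1/80), stops_between(1/80, 1/10))   # 3.0 3.0
```

Both sequences check out, so the flickering is unlikely to come from mislabelled exposure times.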

guanyingc commented 2 years ago

Hi, may I ask the following questions?

  1. What is the configuration for the three-stop exposure? Are you using {EV-3, EV+0, EV+3, EV-3, EV+0...} or {EV-3, EV+0, EV+3, EV+0, EV-3...}? Make sure you are using the first one.
  2. What is the testing command? Please make sure you are using the three-exposure model, and you should also set the exposure order in the exposures.txt in the dataset directory. You may follow the data organization of the TOG13 dataset or the introduced dataset in https://github.com/guanyingc/DeepHDRVideo-Dataset/.

I think this problem might come from incorrect data organization. Hope this helps.

xliuk commented 2 years ago

(Sorry, I closed this issue accidentally.) Thank you for your reply! I rechecked the data organization and it seems fine. For {EV-3, EV+0, EV+3, EV-3, EV+0...} I used the same data organization as for {EV-2, EV+0, EV+2, EV-2, EV+0...}, but the former still doesn't work. I also tried {EV-1, EV+0, EV+1, EV-1, EV+0...}, and that seems to work fine. To answer your questions:

  1. The configuration is {EV-3, EV+0, EV+3, EV-3, EV+0...}. I attach a screen recording to show this. I also attach the img_list.txt and Exposures.txt.

https://user-images.githubusercontent.com/114491838/194509003-3df0bf55-b914-4ec6-9269-42df0d16c47f.mov

(attached: img_list.txt, Exposures.txt)
  2. This is the testing command I used:

python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark Liuk_Dataset --bm_dir data/Liuk_Dataset \
    --test_scene lamp_3expo_3stop \
    --mnet_name weight_net \
    --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth \
    --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth \
    --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth

As you can see, I created a dataloader for my own dataset (Liuk_Dataset); it is the same as the TOG13 dataloader, except for the part where the LDRs are loaded:

img = iutils.apply_gamma(iutils.read_16bit_tif(img_paths[i], crf=self.crf), gamma=2.2)

I didn't need to apply the CRF and gamma, because I preprocessed the raw frames as you did for your real benchmark dataset (white balance, demosaicing, gamma correction). So I used this part of the code from the real benchmark dataset loader:

for i in range(ldr_start, ldr_end):
    if img_paths[i][-4:] == '.tif':
        img = iutils.read_16bit_tif(img_paths[i])
    else:
        img = imread(img_paths[i]) / 255.0
    item['ldr%d' % i] = img
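For context, here is a minimal sketch (my own illustration with hypothetical names, not code from the repo) of the usual way gamma-encoded LDR frames are mapped back to a common linear radiance domain before merging. On a static scene, frames taken at different exposures should then agree wherever both are well exposed:

```python
import numpy as np

# Hypothetical helper (not from the repo): map a gamma-encoded LDR frame
# with values in [0, 1] to linear radiance, so that frames taken at
# different exposure times become directly comparable.
def ldr_to_radiance(ldr, exposure_time, gamma=2.2):
    linear = np.power(ldr, gamma)   # undo the display gamma
    return linear / exposure_time   # normalize by exposure time

# Simulate a static scene captured at EV-3 and EV+0 (times from the issue).
scene = np.full((4, 4), 0.02)                     # linear scene radiance
ldr_short = np.power(scene * (1 / 640), 1 / 2.2)  # short (EV-3) frame
ldr_mid = np.power(scene * (1 / 80), 1 / 2.2)     # middle (EV+0) frame

r_short = ldr_to_radiance(ldr_short, 1 / 640)
r_mid = ldr_to_radiance(ldr_mid, 1 / 80)
print(np.allclose(r_short, r_mid))  # True: radiance agrees across exposures
```

If this mapping is applied consistently, a static scene should produce flicker-free HDR frames, which is why a mismatch in where gamma is applied (loader vs. preprocessing) is a natural first suspect.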

guanyingc commented 2 years ago

Hi, I understand your question now. When tested with 3-exposure-2-stop or 3-exposure-1-stop videos, the results are reasonable. But when tested with a 3-exposure-3-stop video, the results flicker.

This observation is interesting, assuming you have confirmed the data organization. In fact, I have not tested with 3-exposure-3-stop videos.

Could you also check the coarse reconstruction results? Let's first see which stage goes wrong.

xliuk commented 2 years ago

Yes, that's exactly what I meant. I think the data organization is fine, because I used the same organization for the 3-exposure-3-stop, 3-exposure-2-stop, and 3-exposure-1-stop videos. Honestly, I don't know how the code works internally, so I don't know how to check each part of the network; I just tested its results on several datasets. If you want, I can share my 3-exposure-3-stop datasets with you, so we can see which stage goes wrong.

guanyingc commented 2 years ago

OK, you could send me the dataset (e.g., a Google Drive link to chenguanying@cuhk.edu.cn). I will take a look when I have time. Thanks.

xliuk commented 2 years ago

OK, I sent you a Google Drive link. Let me know. Thanks!

guanyingc commented 2 years ago

Hi, it seems that I haven't received the link.

xliuk commented 2 years ago

Yes, they are gamma corrected (^(1/2.2)). I don't understand: in your article (Preprocessing section) and in the supplementary material, you ask for gamma-corrected images as input (L_i). That's why I applied gamma in my raw processing. I also checked your raw processing: you apply gamma in the last stage. That's also why I used my own dataloader, which is similar to your real benchmark dataloader. Indeed, your real benchmark dataloader, unlike the TOG13 dataloader, doesn't apply gamma correction, because it has already been applied.

On 10 Oct 2022, at 10:13, Guanying Chen wrote:

I checked your data. I found that the images you sent me might be gamma corrected instead of linear.

guanyingc commented 2 years ago

Hi, I have tested the code and the results are similar to yours. I think there is a bug in the code, which might be in the thresholds used for determining the masks. Previously, we did not experiment with such a large stop in the three-exposure setting.

Since I haven't played with this code for a long time and I am busy with other matters, I am afraid I have no time to look into this problem. Sorry.
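For intuition on why fixed mask thresholds could break at larger stops (my own illustration of the suspected failure mode, not a confirmed diagnosis of the actual bug): the range of intensities that is well exposed in two neighbouring frames simultaneously shrinks roughly by half for every extra stop, so thresholds tuned on 2-stop data can leave very little valid overlap at 3 stops.

```python
# Hypothetical "well exposed" band in linear LDR values (illustrative only).
LOW, HIGH = 0.05, 0.95

# Width of the range of short-frame values that stay well exposed in BOTH
# the short frame and the 2**stop-times-brighter neighbouring frame.
def joint_well_exposed_width(stop):
    ratio = 2.0 ** stop
    return max(0.0, min(HIGH, HIGH / ratio) - max(LOW, LOW / ratio))

for stop in (1, 2, 3):
    print(stop, joint_well_exposed_width(stop))
# The jointly well-exposed band shrinks sharply as the stop grows:
# roughly 0.425 at 1 stop, 0.1875 at 2 stops, 0.069 at 3 stops.
```

If the masks rely on such a band, a 3-stop sequence would leave far fewer reliable pixels for alignment and blending than a 2-stop one, which is consistent with the periodic flicker observed.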

xliuk commented 2 years ago

Hi, no problem. Can you give me more precise details on where I can find the bug in the code and how to fix it? I can try to fix it, if you don't have time. Thank you so much.


syujung commented 2 years ago

Dear xliuk, I also have a Canon camera, and I wonder how you adjusted it to capture a scene with alternating exposures. Did you manually change the exposure time while capturing the scene? Can you share how to capture a scene with alternating exposures on a Canon camera? Looking forward to your reply!

syujung commented 2 years ago

@xliuk