Open xliuk opened 2 years ago
Hi, may I ask the following questions?
I think this problem might come from inaccurate data organization. Hope this helps.
(Sorry, I closed this issue accidentally.) Thank you for your reply! I rechecked the data organization and it seems fine. I used the same data organization for {EV-3, EV+0, EV+3, EV-3, EV+0, ...} as for {EV-2, EV+0, EV+2, EV-2, EV+0, ...}, but the first one still doesn't work. I also tried {EV-1, EV+0, EV+1, EV-1, EV+0, ...}, and that seems to work fine. To answer your questions:
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model --benchmark Liuk_Dataset --bm_dir data/Liuk_Dataset --test_scene lamp_3expo_3stop --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth
As you can see, I created a dataloader for my own dataset (Liuk_Dataset) that is the same as the TOG13 dataloader, except for the part where the LDRs are loaded:
img = iutils.apply_gamma(iutils.read_16bit_tif(img_paths[i], crf=self.crf), gamma=2.2)
I didn't need to apply the CRF and gamma, because I preprocessed the raw frames as you did for your real benchmark dataset (white balance, demosaicing, gamma correction). So I used the corresponding code from the real_benchmark dataset loader:
for i in range(ldr_start, ldr_end):
    if img_paths[i][-4:] == '.tif':
        img = iutils.read_16bit_tif(img_paths[i])
    else:
        img = imread(img_paths[i]) / 255.0
    item['ldr%d' % i] = img
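For reference, the gamma step that the TOG13 loader applies (and that the snippet above skips, since the frames are already gamma-corrected) is presumably a simple power law. This is only a sketch of what `iutils.apply_gamma` likely does; the real helper may also bake in a camera response function:

```python
import numpy as np

def apply_gamma(lin, gamma=2.2):
    """Map a linear-intensity image in [0, 1] to gamma-encoded values.

    A minimal sketch assuming a plain power-law encoding; the repo's
    iutils.apply_gamma may differ (e.g. it also accepts a CRF argument).
    """
    return np.clip(lin, 0.0, 1.0) ** (1.0 / gamma)

# Inverting the encoding recovers the linear signal:
lin = np.array([0.0, 0.25, 1.0])
assert np.allclose(apply_gamma(lin) ** 2.2, lin)
```

Applying this twice (once in raw preprocessing and once in the loader) would visibly wash out the images, which is why skipping it for pre-corrected frames is the right call.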
Hi, I understand your question now. When using 3-exposure 2-stop or 3-exposure 1-stop videos, the results are reasonable, but when tested with a 3-exposure 3-stop video, the results flicker.
This observation is interesting, assuming the data organization is confirmed. In fact, I have not tested with a 3-exposure 3-stop video.
Could you also check the coarse reconstruction results? Let's first see which stage goes wrong.
Yes, that's exactly what I meant. I think the data organization is fine, because I used the same organization for the 3-exposure 3-stop, 2-stop, and 1-stop videos. Honestly, I don't know how the code works internally, so I don't know how to check each part of the network; I just tested its results on several datasets. If you want, I can share my 3-exposure 3-stop datasets with you, so we can see which stage goes wrong.
Ok, you could send me the dataset (e.g., a Google Drive link to chenguanying@cuhk.edu.cn). I will check it when I have time. Thanks.
Ok, I sent you a Google Drive link. Let me know. Thanks!
Hi, it seems that I haven't received the link.
Yes, they are gamma-corrected (^1/2.2). I'm confused: in your article (Preprocessing section) and in the supplementary material, you ask for gamma-corrected images as input (L_i). That's why I applied gamma in my raw processing. I also checked your raw processing: you apply gamma at the last stage. That's also why I used my own dataloader, which is similar to your real benchmark dataloader. Indeed, your real benchmark dataloader, unlike the TOG13 dataloader, doesn't apply gamma correction, because it has already been applied.
On 10 Oct 2022, at 10:13, Guanying Chen @.***> wrote:
I checked your data. I found that the images you sent me might be gamma-corrected rather than linear.
Hi, I have tested the code and the results are similar to yours. I think there is a bug in the code, possibly in the thresholds used for determining the masks. Previously, we didn't experiment with such a large stop in the three-exposure setting.
Since I haven't played with this code for a long time and I am busy with other matters, I'm afraid I have no time to look into this problem. Sorry.
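To illustrate why fixed mask thresholds could break down at 3 stops: when a neighbor frame is re-exposed to the reference exposure, its linear values scale by 2^stop, and a threshold band tuned for stops of 1 or 2 can leave almost no valid pixels at a factor of 8. This is purely a hypothetical sketch; the function names and threshold values below are assumptions, not the repository's actual code:

```python
import numpy as np

# Hypothetical well-exposed mask: a pixel is trusted if its value lies
# inside a fixed band [tau_low, tau_high]. (Illustrative values only.)
def well_exposed_mask(ldr, tau_low=0.05, tau_high=0.95):
    return (ldr > tau_low) & (ldr < tau_high)

# Re-exposing a low-exposure frame to the reference exposure multiplies
# linear values by 2**stop; with stop=3 that is a factor of 8, so most
# of the range saturates after clipping.
def reexpose(ldr_lin, stop):
    return np.clip(ldr_lin * (2.0 ** stop), 0.0, 1.0)

lin = np.linspace(0.0, 1.0, 11)
print(well_exposed_mask(reexpose(lin, 2)).sum())  # → 2 valid samples
print(well_exposed_mask(reexpose(lin, 3)).sum())  # → 1 valid sample
```

If the real masks shrink like this at 3 stops, different frames in the 3-frame cycle would be fused from very different valid regions, which would be consistent with the period-3 flickering described above.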
Hi, no problem. Could you give me precise details on where I can find the bug in the code and how to fix it? I can try to fix it myself if you don't have time. Thank you so much.
Dear xliuk, I also have a Canon camera, and I wonder how you adjusted it to capture a scene with alternating exposures. Did you manually change the exposure time during capture? Could you share how you captured alternating-exposure sequences with a Canon camera? Looking forward to your reply!
@xliuk
Hi, I'm testing your code on my own datasets, captured with a Canon and preprocessed as required. In particular, I'm using sequences of three alternating exposures. With the configuration {EV-2, EV+0, EV+2, ...} your code works fine. However, with the configuration {EV-3, EV+0, EV+3, ...} I get 'periodic' flickering: i.e., if H_i is an HDR frame, it doesn't match H_{i+1} and H_{i+2} (flickering), but it does match H_{i+3}. Why do I get this flickering? I attach two HDR videos of a static scene to show this. Below I list the exposure times (in seconds); aperture and ISO are constant in both (ISO 800, f/3.5).
1. First HDR video: {EV-2, EV+0, EV+2, ...}, exposure times {1/320, 1/80, 1/20}
2. Second HDR video: {EV-3, EV+0, EV+3, ...}, exposure times {1/640, 1/80, 1/10}
https://user-images.githubusercontent.com/114491838/194347916-ccb60538-4562-4dd9-84ec-333cf1938cd3.mov
https://user-images.githubusercontent.com/114491838/194347942-72cdfb11-d243-4c94-8f43-5bbce5860631.mov
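For what it's worth, the exposure times listed in the question are internally consistent with the stated EV steps (each stop doubles the exposure time, with ISO and aperture fixed). A quick check using those values:

```python
import math

# Exposure times (seconds) from the question: low / mid / high exposure.
two_stop   = (1/320, 1/80, 1/20)  # {EV-2, EV+0, EV+2}
three_stop = (1/640, 1/80, 1/10)  # {EV-3, EV+0, EV+3}

def stops_between(t_short, t_long):
    """EV difference implied by two exposure times (ISO/aperture fixed)."""
    return math.log2(t_long / t_short)

print(stops_between(*two_stop[:2]))    # → 2.0
print(stops_between(*three_stop[:2]))  # → 3.0
```

So the data itself matches a true 3-stop spacing, which supports the conclusion that the flickering comes from the code's handling of large stops rather than from the capture settings.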