sooyekim / Deep-SR-ITM

Official repository of Deep SR-ITM (oral at ICCV 2019)
101 stars, 16 forks

About source code. #4

Closed liulizhou closed 4 years ago

liulizhou commented 5 years ago

Hi, thanks for your great work. I want to implement it in PyTorch and I have several questions:

  1. About the GuidedFiltering and Div_Elemwise layers: I implemented the GF layer with OpenCV and can get the right result compared with your GuidedFiltering layer.

    gd = cv2.ximgproc.guidedFilter(guide=yuv_sdr, src=yuv_sdr, radius=3, eps=0.01, dDepth=-1)

    Then, for the Div_Elemwise layer, the result is incorrect:

    yuv_gd = yuv_sdr / (gd + np.finfo(np.float32).eps)
    I also tried the Matlab eps value: 2.2204e-16.

    When I test Div_Elemwise in Matlab, I get the right result. The Matlab code is like this:

    % pred01 is the guidedFilter result
    pred01 = sdr_yuv_cpu./(pred01 + eps);

    My Python result: my_result. Your Matlab result: your_result.

  2. You first train base_net and then train full_net. Can I just train once, directly with the full net?

  3. Can you provide the network code for the Multi-purpose CNN?
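
For reference, the base/detail decomposition in question 1 can be sanity-checked without OpenCV or Matlab. Below is a minimal, hedged NumPy sketch of a single-channel, self-guided guided filter built from box-filter means, followed by the element-wise division. It is a simplified stand-in: OpenCV's color-guide implementation and Matlab's boundary handling differ, so exact values may not match the repository's layers.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window with edge padding, via an integral image."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge").astype(np.float64)
    s = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def guided_filter_gray(I, r, eps):
    """Self-guided, single-channel guided filter (He et al.) using box means."""
    m = box_mean(I, r)
    v = box_mean(I * I, r) - m * m   # local variance
    a = v / (v + eps)                # per-window linear coefficient
    b = m - a * m
    return box_mean(a, r) * I + box_mean(b, r)

def detail_layer(I, r=5, eps=0.01):
    """Detail layer: input divided element-wise by the filtered base (plus machine eps)."""
    base = guided_filter_gray(I, r, eps)
    return I / (base + np.finfo(np.float32).eps)
```

On a flat region the base equals the input, so the detail layer sits at 1, which is why it visualizes as mid-gray rather than as color data.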

sooyekim commented 4 years ago

Hi Liulizhou,

Thanks for your interest in our work.

  1. The Python code seems alright... I don't see why it's giving a different result. One thing: are you converting to RGB for visualization? There is no need for this, as the detail layer values no longer correspond to color values, with the range being around 1. With appropriate normalization, both results should look grayish (not pinkish), with edge/texture information.

  2. Yes, you can train the full_net directly, but there may be a performance drop. Without pre-training, the filters are randomly initialized, so the produced feature maps will not be meaningful enough for the modulation layers to be trained properly as feature-map modulations.

  3. I will get back to you with the network code of Multi-purpose CNN.
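
As a side note on point 2, a PyTorch port would typically realize this pre-training by copying the trained base_net parameters into the full net wherever names and shapes match, leaving the new layers randomly initialized. A framework-agnostic sketch over plain state dicts (parameter names hypothetical, NumPy arrays standing in for tensors):

```python
import numpy as np

def load_matching_weights(full_state, base_state):
    """Copy entries from a pre-trained base-net state dict into the full
    net's state dict wherever the parameter name and shape both match.
    Unmatched full-net parameters keep their (random) initialization."""
    copied = []
    for name, value in base_state.items():
        if name in full_state and full_state[name].shape == value.shape:
            full_state[name] = value
            copied.append(name)
    return copied

# Hypothetical example: the full net shares 'conv1.weight' with the base net,
# while 'mod1.scale' exists only in the full net and stays untouched.
base = {"conv1.weight": np.ones((4, 3, 3, 3))}
full = {"conv1.weight": np.zeros((4, 3, 3, 3)),
        "mod1.scale": np.zeros((4,))}
copied = load_matching_weights(full, base)
```

In actual PyTorch code, the updated dict would then be passed to `model.load_state_dict(...)` before fine-tuning the full net.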

Thank you.

Best, Soo Ye

sooyekim commented 4 years ago

Hi again,

This is the network code for the Multi-purpose CNN, as a text file.

BTW, if you wish to train this network, the derOutputs option should be set as below, since there are multiple losses in this network. (Multiple GTs must also be given.)

opts.train.derOutputs = {'objective', 1, 'objective_HR', 1, 'objective_HDR', 1} ;
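
For anyone porting this, the derOutputs pairs above are simply per-loss weights on the total objective, so the analogue in a Python training loop is a weighted sum of the three losses before backpropagation. A minimal sketch (loss names taken from the option above; the loss values are hypothetical):

```python
def total_objective(losses, weights):
    """Weighted sum of named loss terms, mirroring MatConvNet's
    derOutputs {'name', weight, ...} pairs."""
    return sum(weights[name] * value for name, value in losses.items())

# All three losses weighted by 1, as in the derOutputs setting above.
weights = {"objective": 1.0, "objective_HR": 1.0, "objective_HDR": 1.0}
losses = {"objective": 0.25, "objective_HR": 0.10, "objective_HDR": 0.05}
total = total_objective(losses, weights)  # weighted sum, about 0.40
```

In PyTorch the same scalar would be built from tensor losses and then `total.backward()` would be called once.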

Please tell me if there are any problems or further inquiries.

Best, Soo Ye

liulizhou commented 4 years ago

Hi @sooyekim, thanks for the code.

  1. I get that result with the following Python pipeline:
     YUV -> GF layer -> Div_Elemwise layer -> YUV -> UV sampling -> convert to BGR and display (done with OpenCV)

     I convert to BGR for visualization to make sure it is correct, but the result is different from Matlab. I have now completed the PyTorch network code and training (directly feeding your Matlab GF and Div results into the PyTorch network), and I am still confused about why the result is different. Showing the Y channel directly (YUV -> GF layer -> Div_Elemwise layer -> YUV -> show Y channel):
     Python: python result image
     Matlab: matlab result image
     Can you tell me how you visualize your detail layer result? Here is my Python GF and Div code again:

     yuv_sdr = cv2.merge([y_sdr, u_sdr, v_sdr])
     yuv_sdr = (yuv_sdr / 255).astype(np.float32)

     yuv_hdr = cv2.merge([y_hdr, u_hdr, v_hdr])
     yuv_hdr = (yuv_hdr / 1023).astype(np.float32)

     yuv_gf = cv2.ximgproc.guidedFilter(guide=yuv_sdr, src=yuv_sdr, radius=3, eps=0.01, dDepth=-1)
     yuv_detail = yuv_sdr / (yuv_gf + np.finfo(np.float32).eps)

2. I got some intermediate PyTorch training results; the color and luminance look a little different (left to right: anchor HDR, generated HDR, SDR):
![anchor](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch013_anchor_hdr.png) ![gen hdr](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch013_gen_hdr.png)![sdr](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch013_sdr.png)
![anchor](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch014_anchor_hdr.png) ![gen hdr](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch014_gen_hdr.png)![sdr](https://sdr2hdr-1252410218.cos.ap-chengdu.myqcloud.com/epoch014_sdr.png)

3. I also read your paper `JSI-GAN`, and it is great work. Could you provide the generator network code, if convenient?

sooyekim commented 4 years ago

Hi @liulizhou,

  1. For visualization of the detail layer, I simply visualized yuv_detail/2 without converting to RGB. There is no need for the conversion, as the detail layer values no longer correspond to color values, with the range being around 1. One thing: I set radius = 5 for the Guided Filter parameters, which is the default value in Matlab. Maybe this could be the difference?

  2. What do you mean by anchor HDR? Is this the Matlab result, or the ground truth?

  3. We plan on releasing the code for JSI-GAN in the future (in TensorFlow!). The code is a bit messy now and is not yet ready for public release...
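
The visualization described in point 1 could be sketched as follows. This is a hedged NumPy version that halves the detail values and maps them to 8-bit grayscale, with no YUV-to-RGB conversion (display itself, e.g. via OpenCV, is omitted):

```python
import numpy as np

def visualize_detail(detail):
    """Map a detail layer (values hovering around 1) to 8-bit grayscale
    by halving, so a flat region (detail == 1) renders as mid-gray."""
    vis = np.clip(detail / 2.0, 0.0, 1.0)
    return np.round(vis * 255.0).astype(np.uint8)

# A flat detail layer of 1.0 should come out as uniform mid-gray (~128).
flat = visualize_detail(np.ones((4, 4), dtype=np.float32))
```

Values above 2 clip to white and values at 0 map to black, which keeps edge/texture information visible without interpreting the channels as color.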

Thanks for your interest!

liulizhou commented 4 years ago

Hi, @sooyekim

  1. I have tested both radius = 5 and radius = 3, and the difference still exists, so that is not the problem. I also took your Matlab Guided Filter result and fed it into the Python Div_Elemwise method, and the result was still incorrect. Maybe I have made a mistake somewhere; I will check it. Thanks again.
  2. Anchor HDR means the ground truth.
  3. Thanks, I look forward to it.

liulizhou commented 4 years ago

Solved!