Closed endinblue closed 4 years ago
Hi, do you mean the encoder part?
Yes, the shared-weight layers in the encoder part.
Hi, I used different encoders for different input images. You can check it in the HDR part.
Hi. Thank you for your answer. Maybe I misunderstood the code because I am not used to Keras. Sorry about that.
I think this code means :
```python
X_i_32 = self.encoder_1(X_i, 1)
X_i_64 = self.encoder_1(X_i_32, 2)
X_i_128 = self.encoder_1(X_i_64, 4)
X_i_256 = self.encoder_1(X_i_128, 8)
```
i.e., it applies a shared-weight encoder layer at every level of the same LDR image.
I think it needs to be written as:
```python
X_i_32 = self.encoder_1_1(X_i, 1)
X_i_64 = self.encoder_1_2(X_i_32, 2)
X_i_128 = self.encoder_1_3(X_i_64, 4)
X_i_256 = self.encoder_1_4(X_i_128, 8)
```
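For anyone following along, the two wirings being compared can be sketched in PyTorch like this. This is a toy illustration, not the repository's actual architecture: the channel counts, strides, and the `Encoder` class itself are hypothetical.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy multi-level encoder: one block reused at every level (shared
    weights) vs. a distinct block per level. Layer sizes are hypothetical."""

    def __init__(self, shared=True, ch=32, levels=4):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        # Shared: register a single block. Separate: one block per level.
        n_blocks = 1 if shared else levels
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
            for _ in range(n_blocks)
        )
        self.levels = levels

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for i in range(self.levels):
            # With shared weights the same block is applied at every level.
            blk = self.blocks[0] if len(self.blocks) == 1 else self.blocks[i]
            x = blk(x)
            feats.append(x)
        return feats
```

Both variants produce four feature maps at strides 2, 4, 8, 16; the only difference is whether the four levels read from one set of weights or four.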
Ah,
I got your point. Yeah, I'm using the same encoder (shared-weight, as you say) for each input image.
One reason is that I tried to make the model have the same number of parameters as stated in the paper. The other is that I think sharing the weights across different levels (within one input image) makes sense, because they are just features of the input image.
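The parameter-count point is easy to verify: reusing one layer across levels keeps a single set of weights, while separate layers multiply the count by the number of levels. A minimal sketch, assuming 3x3 convs with 32 channels (illustrative numbers, not the repo's):

```python
import torch.nn as nn

# One 3x3 conv reused at every level: a single set of weights.
shared = nn.Conv2d(32, 32, 3, padding=1)
shared_params = sum(p.numel() for p in shared.parameters())  # 32*32*9 + 32 = 9248

# A separate 3x3 conv per level: four independent sets of weights.
unshared = nn.ModuleList(nn.Conv2d(32, 32, 3, padding=1) for _ in range(4))
unshared_params = sum(p.numel() for p in unshared.parameters())  # 4 * 9248 = 36992
```

So if the paper's stated parameter count matches the shared version, that is evidence for weight sharing.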
Please correct me if I'm wrong, or please share your progress if you try a different encoder for each level of encoding.
Thanks!
Hello. Thank you for your answer. I tried to implement the code in PyTorch, and I actually made it. Then I found your TensorFlow/Keras code, so I just checked whether there was any difference from mine. haha
I mean, my question was just out of curiosity about the code. I also don't know the exact code of the paper.
Again, thank you for your answer.
Have you tried to run some metrics, I mean PSNR or HDR-VDP? Have you reached the results shown in the paper? If so, please share with me ^^
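For the PSNR part of that question, the metric itself is easy to reproduce; a minimal NumPy sketch of PSNR as usually defined (HDR-VDP is a separate, much heavier tool and is not sketched here):

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    # Peak signal-to-noise ratio in dB; `peak` is the maximum possible
    # pixel value (1.0 for images normalized to [0, 1]).
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Note that for HDR comparisons the papers usually state whether PSNR is computed on the linear or the tone-mapped image, so that choice matters when comparing numbers.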
I actually built this code for my paper, but the model is only for an ablation study, not for comparison, so I trained it with my own dataset and didn't compare the results with the paper. Also, the paper says the HDR range is in [0, 1], but I use an HDR format (RGBE) for the ground truth, so my network's ground truth is not in that range; it is different.
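Since RGBE ground truth is not confined to [0, 1], one common way to handle it (an assumption here, not necessarily what either implementation does) is to decode RGBE to linear floats and then compress the range, e.g. with the μ-law used in several HDR-merging papers. The RGBE decoding convention below is itself an assumption:

```python
import numpy as np

def rgbe_to_linear(rgbe):
    # Decode an (H, W, 4) uint8 RGBE array to linear float radiance.
    # Assumed convention: channel = mantissa * 2^(E - 128) / 256, zero when E == 0.
    rgbe = np.asarray(rgbe, dtype=np.float64)
    mantissa, exponent = rgbe[..., :3], rgbe[..., 3:]
    scale = np.where(exponent > 0, 2.0 ** (exponent - 128.0) / 256.0, 0.0)
    return mantissa * scale

def mu_law(h, mu=5000.0):
    # Range compression: maps linear values in [0, 1] to [0, 1] with more
    # resolution in the dark end; losses are then computed in this domain.
    return np.log1p(mu * np.asarray(h, dtype=np.float64)) / np.log1p(mu)
```

With some such normalization the network's targets end up in a bounded range again, which makes numbers comparable across datasets.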
Hope to see your paper soon.
Hello! I have a question: you use shared-weight convolutional layers. Is there any reason you use shared-weight layers? I can't find any mention of this in the paper. Thank you!