lisiping0817 opened this issue 5 years ago
I found some errors in my code, so I will fix them and upload my results as soon as possible. Thank you.
@Junshk , first of all, thanks for your implementation. Could you please post the errors you found in your code? Or do you now have a correct implementation? Sorry for pushing you.
@Junshk By the way, the variables "target", "input", "input_v", and "target_v" confuse me. Are the first two the validation LR and HR images, and the latter two the LR and HR training images?
@d12306 Sorry for the late reply. I've fixed the errors and I am now checking that my implementation is right (training in progress).
For the second question, the first two variables are target-domain images (the HR image domain; "input" is not actually given) and the latter two are input-domain images (the LR image domain produced by an unknown downsampling; "target_v" is not actually given).
@Junshk , Thanks for your attention. I don't understand what you mean by saying that 'input' and 'target_v' are not given. Actually, in 'main.py', Line 157, ''input, target, bicubic, input_v, target_v = batch1[0], batch1[1], batch1[2], batch0[0], batch0[1]'' unpacks all five of those variables from the batches. Could you please help me figure that out? Sorry for bothering you. Thanks.
@d12306 The paper does not use GT/LR image pairs because the topic is unsupervised SR. (input, target) is a pair, and (input_v, target_v) is also a pair, but the first and second pairs are unrelated. In the training process, we do not use the "input" and "target_v" variables, in order to keep the unsupervised learning setting. In other words, they should be treated as not given.
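(For anyone else reading: here is a minimal sketch of that unpaired setting. The toy tensors, loader construction, and which loader maps to batch1/batch0 are my assumptions for illustration, not the repo's exact code; only the variable roles follow the main.py line quoted above.)

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the HR-domain dataset: each item is (input, target, bicubic).
# "input" is only a placeholder, since the true LR counterpart of an HR image is unknown.
hr_set = TensorDataset(torch.zeros(8, 3, 32, 32),    # input   (unused)
                       torch.rand(8, 3, 128, 128),   # target  (clean HR)
                       torch.rand(8, 3, 128, 128))   # bicubic upsample

# Toy stand-in for the LR-domain dataset: each item is (input_v, target_v).
# "target_v" is only a placeholder, since the true HR counterpart of an LR image is unknown.
lr_set = TensorDataset(torch.rand(8, 3, 32, 32),     # input_v (noisy LR)
                       torch.zeros(8, 3, 128, 128))  # target_v (unused)

training_high_loader = DataLoader(hr_set, batch_size=4)
training_data_loader = DataLoader(lr_set, batch_size=4)

for batch1, batch0 in zip(training_high_loader, training_data_loader):
    input, target, bicubic = batch1   # only target / bicubic are actually used
    input_v, target_v = batch0        # only input_v is actually used
    # The unsupervised losses (adversarial, cycle, identity) are built from
    # target and input_v alone; input and target_v never enter the graph.
    print(target.shape, input_v.shape)
```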
@Junshk Thanks, appreciate your help!
@Junshk , hi, actually I am wondering why the identity loss in the high-resolution module is not averaged over every pixel. It is true that the paper formulates the loss that way, but when I train the network, I find that this loss is much larger than the others. Would it make sense to average it over every pixel? Thanks.
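(For concreteness, a minimal sketch of the summed-vs-averaged difference being discussed, in PyTorch; the tensor shapes and the L1 loss call are illustrative assumptions, not the repo's exact code.)

```python
import torch
import torch.nn as nn

sr = torch.rand(4, 3, 128, 128)  # generator output in the HR domain
hr = torch.rand(4, 3, 128, 128)  # clean HR image through the identity path

# Summed over all elements: grows with image size and batch size,
# so it can dwarf the cycle and adversarial terms.
idt_sum = nn.L1Loss(reduction='sum')(sr, hr)

# Averaged over all elements: stays on a scale comparable to the other losses.
idt_mean = nn.L1Loss(reduction='mean')(sr, hr)

print(idt_sum.item(), idt_mean.item())  # the sum is larger by a factor of 4*3*128*128
```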
Hey @d12306,
Were you able to train the network? I'm facing an issue where some of the loss values stay at 0.000:
Epoch0: Loss: idt 0.000000 2.370729 cyc 0.000000 1.735660 D 0.000000 0.973776, G: 0.000000 0.870190, psnr_hr: 5.089412, psnr_lr 5.857366
@Auth0rM0rgan Sorry for the late reply. Actually, the network training and the computation of the loss functions are separated into two stages. At first, only the Noisy LR -> Clean LR mapping is trained; the zero values you see printed are the losses of the second stage, so during the first stage they are 0. After some iterations, the second stage (joint fine-tuning) begins and those losses will no longer be 0. You can wait and see for yourself.
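(A minimal sketch of that stage gating, just to illustrate the idea; the helper function is my assumption rather than the repo's actual training loop, and the example values are simply taken from the log line above.)

```python
def combine_losses(stage1_losses, stage2_losses, joint):
    """Illustrative stage gating: stage-2 terms only contribute once joint
    fine-tuning is enabled, which is why they print as 0.000 before that."""
    total = sum(stage1_losses)
    if joint:
        total += sum(stage2_losses)
    return total

# Stage 1: only the Noisy LR -> Clean LR losses (e.g. D, G) are optimized.
print(combine_losses([0.973776, 0.870190], [2.370729, 1.735660], joint=False))
# Stage 2 (joint fine-tuning): the identity and cycle terms become active too.
print(combine_losses([0.973776, 0.870190], [2.370729, 1.735660], joint=True))
```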
@d12306, Thanks for the explanation. Just a question:
Did you change the boolean variable "joint" to True (in main.py, line 134)? The "joint" variable is False by default, and when I set it to True all the loss functions work together; none of them stays at 0.
train(training_data_loader, training_high_loader, model, optimizer, epoch, False)
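(If it helps, one way to drive both stages from the caller; the epoch threshold and this conditional are my assumptions for illustration, and only the train(...) call itself comes from main.py.)

```python
# Pretrain the Noisy LR -> Clean LR branch first, then enable joint fine-tuning.
# The switch-over epoch (50 here) is an arbitrary illustrative choice.
joint = epoch > 50
train(training_data_loader, training_high_loader, model, optimizer, epoch, joint)
```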
@Auth0rM0rgan , Yeah, you should change it to True. I guess there is some issue with the code version. By the way, did you find it reasonable that the identity loss in the LR to HR module is not averaged? I am a little confused about why the author didn't average it.
@Junshk , Hi, thank you for sharing this code.
Could you please upload your experimental results? I want to compare them with the paper's results.
Hello, I am now running into the same issue: during LR -> HR training the identity loss is very large and the SR result is poor. Could you tell me whether you have solved this problem? Thanks.
I'd like to see the results of the paper reproduction. Could you upload some pictures? Thank you so much.