HyZhu39 opened this issue 3 years ago
Dear Sir, amazing work, congratulations! I have a question: could you kindly tell me the full path where I should place the checkpoint of the trained large-scale model, so that I can use it as the pre-trained model for meta-transfer training? I'm waiting for your reply. Thanks in advance.
Could you kindly explain how these loss weights are calculated?
    def get_loss_weights(self):
        loss_weights = tf.ones(shape=[self.TASK_ITER]) * (1.0 / self.TASK_ITER)
        decay_rate = 1.0 / self.TASK_ITER / (10000 / 3)
        min_value = 0.03 / self.TASK_ITER

        loss_weights_pre = tf.maximum(loss_weights[:-1] - (tf.multiply(tf.to_float(self.global_step), decay_rate)), min_value)
        loss_weight_cur = tf.minimum(loss_weights[-1] + (tf.multiply(tf.to_float(self.global_step), (self.TASK_ITER - 1) * decay_rate)), 1.0 - ((self.TASK_ITER - 1) * min_value))
        loss_weights = tf.concat([[loss_weights_pre], [[loss_weight_cur]]], axis=1)
        return loss_weights
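The schedule above can be traced numerically with a small NumPy sketch (my own rewrite, not from the repo; the `task_iter` value in the example is an assumption). The weights start uniform at `1/task_iter` and, as `global_step` grows, mass is moved linearly from the earlier inner-loop steps to the final step, until the earlier steps bottom out at `min_value`. The weights always sum to 1:

```python
import numpy as np

def get_loss_weights(task_iter, global_step):
    """NumPy version of the decaying per-step loss weights.

    Earlier inner-loop steps decay toward min_value while the last
    step absorbs exactly the mass the others lose, so sum(w) == 1.
    """
    loss_weights = np.ones(task_iter) / task_iter
    decay_rate = 1.0 / task_iter / (10000 / 3)  # earlier steps fully decayed after ~3333 steps
    min_value = 0.03 / task_iter

    pre = np.maximum(loss_weights[:-1] - global_step * decay_rate, min_value)
    cur = np.minimum(loss_weights[-1] + global_step * (task_iter - 1) * decay_rate,
                     1.0 - (task_iter - 1) * min_value)
    return np.append(pre, cur)

# With task_iter = 5 (hypothetical value for illustration):
print(get_loss_weights(5, 0))      # uniform: [0.2, 0.2, 0.2, 0.2, 0.2]
print(get_loss_weights(5, 10000))  # saturated: [0.006, 0.006, 0.006, 0.006, 0.976]
```

Note that the clipping kicks in simultaneously for both branches (at `global_step * decay_rate == 1/task_iter - min_value`), which is what keeps the weights summing to 1 throughout training.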
It seems that the PSNR and SSIM results for the “bicubic” downsampling scenario in the paper cannot be reproduced with the currently released model. Could you please upload the LR images used for the “bicubic” downsampling test, along with the code for bicubic downsampling? In addition, when training the bicubic ×2 model, the PSNR/SSIM results obtained with the existing training code differ considerably from those reported in the paper. Should I adjust any parameters for the bicubic downsampling scenario?
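For what it's worth, here is a minimal sketch of how bicubic LR test images are commonly generated (the function name and paths are my own; Pillow is assumed). Note that Pillow's bicubic resize is generally not bit-identical to MATLAB's `imresize`, which many SR papers use for the “bicubic” setting, so part of a PSNR/SSIM gap can come from this mismatch alone:

```python
from PIL import Image

def make_bicubic_lr(hr_path, lr_path, scale=2):
    """Downsample an HR image by `scale` using bicubic interpolation.

    Caveat: this is Pillow's bicubic filter, not MATLAB's imresize;
    the kernels and boundary handling differ slightly, which can
    shift PSNR/SSIM by a small amount.
    """
    hr = Image.open(hr_path)
    w, h = hr.size
    # Crop so both dimensions are divisible by the scale factor,
    # so the LR/HR pair aligns exactly.
    w, h = w - w % scale, h - h % scale
    hr = hr.crop((0, 0, w, h))
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    lr.save(lr_path)
```

If exact agreement with the paper's numbers is required, generating the LR set with MATLAB's `imresize` (or a faithful reimplementation of it) is the safer route.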