Open wuguolil opened 4 years ago
Dear Sir, amazing work, congratulations! I have a question: could you kindly tell me the full checkpoint path I should point to for the trained large-scale model, so that I can use it as the pre-trained model for meta-transfer training? I'm waiting for your reply. Thanks in advance.
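For concreteness, this is what I am trying to do; a minimal sketch, assuming a standard TF1 checkpoint layout (the directory name and step number below are placeholders on my side):

```python
import tensorflow as tf

# ... build the model graph first, then restore.

# Placeholder directory -- adjust to wherever the large-scale checkpoint lives.
# Note: the restore path is the checkpoint *prefix* (e.g. model-100000), not a
# single file; model-100000.index / .meta / .data-* must sit next to each other.
CHECKPOINT_DIR = './Large-Scale_Training'
ckpt = tf.train.latest_checkpoint(CHECKPOINT_DIR)  # e.g. './Large-Scale_Training/model-100000'

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, ckpt)
```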
Please @wuguolil, I'm facing a problem when I load the pre-trained model, specifically when it reads the checkpoint. This is the error; how did you solve it, please?
```
NotFoundError (see above for traceback): Key MODEL/conv7/kernel/Adam_3 not found in checkpoint
	 [[Node: save/RestoreV2_69 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_69/tensor_names, save/RestoreV2_69/shape_and_slices)]]
```
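In case it helps others: the missing key is an Adam slot variable, so the running graph contains optimizer state that the pre-trained checkpoint never saved. A common workaround (not necessarily the authors' intended fix) is to restore only the model weights:

```python
import tensorflow as tf

# Skip Adam's slot variables (e.g. MODEL/conv7/kernel/Adam_3) and its beta
# power accumulators; they exist in the current graph but not in the checkpoint.
skip = ('Adam', 'beta1_power', 'beta2_power')
model_vars = [v for v in tf.global_variables()
              if not any(s in v.name for s in skip)]
restorer = tf.train.Saver(var_list=model_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initializes the Adam slots too
    restorer.restore(sess, '/path/to/checkpoint-prefix')  # placeholder path
```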
Please, can you kindly explain how this loss weighting is calculated?
```python
def get_loss_weights(self):
    # Start uniform: one weight per inner-loop (task) iteration.
    loss_weights = tf.ones(shape=[self.TASK_ITER]) * (1.0 / self.TASK_ITER)
    decay_rate = 1.0 / self.TASK_ITER / (10000 / 3)
    min_value = 0.03 / self.TASK_ITER

    # All but the last step decay toward min_value as global_step grows...
    loss_weights_pre = tf.maximum(
        loss_weights[:-1] - tf.multiply(tf.to_float(self.global_step), decay_rate),
        min_value)

    # ...while the last step gains the same total mass, capped so the
    # weights keep summing to 1.
    loss_weight_cur = tf.minimum(
        loss_weights[-1] + tf.multiply(tf.to_float(self.global_step),
                                       (self.TASK_ITER - 1) * decay_rate),
        1.0 - (self.TASK_ITER - 1) * min_value)

    loss_weights = tf.concat([[loss_weights_pre], [[loss_weight_cur]]], axis=1)
    return loss_weights
```
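For what it's worth, here is how I understand the calculation; a plain-NumPy re-derivation, assuming TASK_ITER = 5 (my assumption, adjust to your config):

```python
import numpy as np

TASK_ITER = 5  # assumed number of inner-loop steps

def loss_weights_at(step):
    w = np.full(TASK_ITER, 1.0 / TASK_ITER)      # start uniform: 0.2 each
    decay_rate = 1.0 / TASK_ITER / (10000 / 3)   # 6e-5 per training step here
    min_value = 0.03 / TASK_ITER                 # floor of 0.006 per early step
    pre = np.maximum(w[:-1] - step * decay_rate, min_value)
    cur = np.minimum(w[-1] + step * (TASK_ITER - 1) * decay_rate,
                     1.0 - (TASK_ITER - 1) * min_value)
    return np.append(pre, cur)

for step in (0, 1000, 5000):
    print(step, loss_weights_at(step).round(3))
# 0    [0.2   0.2   0.2   0.2   0.2  ]  every inner step weighted equally
# 1000 [0.14  0.14  0.14  0.14  0.44 ]  mass shifting to the final step
# 5000 [0.006 0.006 0.006 0.006 0.976]  almost all weight on the final output
```

So the weights always sum to 1: early in training, the loss from every inner-loop output counts equally; as global_step grows, the loss mass shifts onto the final (fully adapted) inner update.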
Thank you for your admirable and meaningful work! My question is: why is the downsampling operator implemented by a model rather than an algorithm in the meta-test step? Also, I only found the models for bicubic x2, direct x2, direct x4, and multi-scale, but I would like to know how these downsampling operations are implemented. Could you please upload the code? Thank you very much!
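In case it is useful while waiting for the authors: here is my reading of the two degradation types the checkpoints are named after; a sketch under my own assumptions (the Gaussian kernel size and sigma are guesses), not the released code:

```python
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

# "bicubic x2": plain bicubic downscaling, as a standard resizer produces.
def bicubic_down(img_uint8, scale=2):
    h, w = img_uint8.shape[:2]
    return np.array(Image.fromarray(img_uint8)
                    .resize((w // scale, h // scale), Image.BICUBIC))

# "direct xN" (my reading): blur the HR image with a kernel -- here an
# isotropic Gaussian -- then keep every N-th pixel, i.e. decimation
# with no interpolation.
def gaussian_kernel(size=15, sigma=1.3):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def direct_down(img_float2d, scale=2, sigma=1.3):
    blurred = convolve(img_float2d, gaussian_kernel(sigma=sigma), mode='reflect')
    return blurred[::scale, ::scale]  # apply per channel for color images
```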