Thanks for your nice work! I am confused about one point: in your paper, the final objective function is a combination of three loss functions with different weights, but I cannot find that weighting mechanism in the source code.
According to the code, the default loss function and compile function are the following:
```python
def modified_mean_squared_error_2(y_true, y_pred):
    mask = tf.not_equal(y_true, 0.0)
    mask = tf.cast(mask, dtype=tf.float32)
    a = (y_pred - y_true) ** 2
    b = K.sum(mask, axis=-1, keepdims=True)
    c = tf.ones_like(b)
    b = K.tf.where(b > 0.0, b, c)
    return K.sum((a * mask) / b, axis=-1)
```
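To check my reading of the masking logic, here is a pure-NumPy re-implementation (my own sketch, not taken from your repo): entries where `y_true == 0` are masked out, and the squared error is averaged over the observed entries only.

```python
import numpy as np

def masked_mse_numpy(y_true, y_pred):
    # Positions where y_true == 0 are treated as missing and masked out.
    mask = (y_true != 0.0).astype(np.float32)
    sq_err = (y_pred - y_true) ** 2
    # Count of observed entries per sample; replace 0 with 1 to avoid dividing by zero.
    n_obs = mask.sum(axis=-1, keepdims=True)
    n_obs = np.where(n_obs > 0.0, n_obs, np.ones_like(n_obs))
    # Mean squared error over observed entries only.
    return (sq_err * mask / n_obs).sum(axis=-1)

y_true = np.array([[1.0, 0.0, 3.0]])   # middle entry is masked
y_pred = np.array([[2.0, 5.0, 3.0]])
print(masked_mse_numpy(y_true, y_pred))  # [0.5]: (1 + 0) / 2 observed entries
```

If this matches your intent, the function is a per-sample masked MSE, with no per-loss weight applied anywhere inside it.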
```python
def compile(self):
    self.model.compile(optimizer=select_optimiser(self.optimiser, self.learning_rate),
                       loss=select_loss(self.loss))
```

In my understanding, this code combines the sub-losses with an implicit weight of 1 each, so the final loss is their unweighted sum. If I have misunderstood, could you please point out where the weighting happens and how to read the code correctly? Thank you very much.
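For context on what I expected to find: a weighted combination of three sub-losses, as described in the paper, would typically look like the following (the weights `w1`, `w2`, `w3` here are hypothetical, not taken from the repo).

```python
# Hypothetical weighted objective: total = sum_i w_i * L_i.
# Sub-loss values and weights below are illustrative only, not from the repo.
def weighted_total_loss(losses, weights):
    return sum(w * l for w, l in zip(weights, losses))

sub_losses = [0.4, 1.2, 0.1]      # e.g. three sub-loss values
w1, w2, w3 = 1.0, 0.5, 2.0       # e.g. the weights from the paper
print(weighted_total_loss(sub_losses, [w1, w2, w3]))  # 0.4 + 0.6 + 0.2 = 1.2
```

In Keras this kind of weighting is often expressed through the `loss_weights` argument of `model.compile`, which is why I expected to see either explicit weights or `loss_weights` somewhere in the compile call.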