eralvc opened this issue 6 years ago
Nice work.
https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)
If the loss here were built only from the parameters actually touched by the current mini-batch, back-propagation would be much faster.
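For reference, a rough sketch of what that could look like (assuming the same variable names as DeepFM.py, where feat_ids feeds the embedding lookups): regularize only the rows of FM_W and FM_V gathered for the current batch, rather than the full tables.

# sketch only -- inside DeepFM.py's model_fn, where tf is already imported;
# feat_wgts / embeddings are the batch-local lookups the model already builds
feat_wgts = tf.nn.embedding_lookup(FM_W, feat_ids)
embeddings = tf.nn.embedding_lookup(FM_V, feat_ids)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(feat_wgts) + \
       l2_reg * tf.nn.l2_loss(embeddings)

With sparse features only a small fraction of the embedding rows receive gradients each step, so penalizing just those rows keeps the regularization gradient as sparse as the data gradient.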
How would this actually be set up?
I tested it and, strangely, the results with and without it are almost identical.
When using a batch-normalization layer, don't you also need to add these ops?

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
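For context, a minimal sketch of where those update ops come from, assuming the deep part uses a layer such as tf.layers.batch_normalization (illustrative only; DeepFM.py may wire this differently):

# sketch only -- batch norm registers its moving-average updates in UPDATE_OPS,
# and minimize() will not run them unless they are made a dependency of train_op
deep_inputs = tf.layers.batch_normalization(deep_inputs, training=(mode == tf.estimator.ModeKeys.TRAIN))
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())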
Around line 165, when the deep fully-connected layer is built, every weight gets an L2 regularizer:

y_deep = tf.contrib.layers.fully_connected(inputs=deep_inputs, num_outputs=1, activation_fn=tf.identity, \
         weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg), scope='deep_out')

Then around line 189 the loss is defined as:

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)
As I understand it, this loss never pulls in the terms created by that weights_regularizer, so it should be changed to:

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
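An equivalent, slightly shorter form (a sketch, not verified against this repo) uses the TF1 helper that sums everything registered in the REGULARIZATION_LOSSES collection:

# sketch only -- tf.losses.get_regularization_loss() returns the sum of all terms
# added via weights_regularizer and similar arguments
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       tf.losses.get_regularization_loss()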