If you are referring to these lines:
```python
with tf.variable_scope('model'):
    # Compute the embeddings with the model
    embeddings = build_model(is_training, images, params)
embedding_mean_norm = tf.reduce_mean(tf.norm(embeddings, axis=1))
tf.summary.scalar("embedding_mean_norm", embedding_mean_norm)
```
The mean embedding norm is just the average L2 norm of the embeddings, i.e. how "big" the embedding vectors are on average.
I use it to monitor the size of the embeddings during training, to make sure they don't collapse to 0 or blow up.
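For intuition, here is a minimal, self-contained sketch (assuming TensorFlow 1.x, as used in this repo, with a dummy batch of embeddings standing in for the output of `build_model`) showing that the logged scalar is simply the batch-average L2 norm of the embedding vectors:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Hypothetical batch of 4 embeddings of dimension 64 (stand-in for build_model output)
dummy_embeddings = tf.constant(np.random.randn(4, 64).astype(np.float32))

# L2 norm of each embedding vector, then the mean over the batch --
# the same quantity logged as "embedding_mean_norm" above
embedding_mean_norm = tf.reduce_mean(tf.norm(dummy_embeddings, axis=1))

with tf.Session() as sess:
    print(sess.run(embedding_mean_norm))  # ~8 for unit-variance 64-d vectors (about sqrt(64))
```

During training you would instead watch this scalar in TensorBoard and check that it stays in a reasonable range.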
Thanks. I have seen claims that embedding_mean_norm reflects the degree of ambiguity of a face, and that some people use it for image quality evaluation.
Could you give me some clues or papers? Thanks.