se7oluti0n (issue closed 7 years ago)
I have implemented a Mean IoU metric based on `calculate_iou` in `evaluate.py`. Could someone check it out?
```python
import tensorflow as tf
from keras import backend as K


def sparse_iou_ignoring_last_label(y_true, y_pred):  # mIoU, ignoring labels >= nb_classes
    nb_classes = K.int_shape(y_pred)[-1]
    # Flatten predictions to (n_pixels, nb_classes) and take the argmax class.
    y_pred = K.reshape(y_pred, (-1, nb_classes))
    pred = K.argmax(y_pred, axis=1)
    # Flatten the sparse ground-truth labels to (n_pixels,).
    gt = tf.reshape(y_true, [-1])
    # Zero weight for all labels >= nb_classes, so those pixels are ignored.
    weights = tf.cast(tf.less_equal(gt, nb_classes - 1), tf.int32)
    # tf.confusion_matrix asserts labels < num_classes; clip the ignored
    # labels into range, since their zero weight keeps them out of the counts.
    gt = tf.clip_by_value(gt, 0, nb_classes - 1)
    confusion_matrix = tf.confusion_matrix(gt, pred, num_classes=nb_classes,
                                           weights=weights, dtype=tf.float32)
    # Intersection is the diagonal; union is row sum + column sum - diagonal.
    I = tf.diag_part(confusion_matrix)
    U = (tf.reduce_sum(confusion_matrix, axis=0) +
         tf.reduce_sum(confusion_matrix, axis=1) - I)
    IoU = I / (U + tf.convert_to_tensor(K.epsilon(), dtype=tf.float32))
    return tf.reduce_mean(IoU)
```
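In case it helps the review, here is a minimal sanity-check sketch (not from the original code; the shapes and class count are made up, and it assumes a TF 1.x session):

```python
import numpy as np

# 4 real classes; label 4 plays the role of the ignored 'void' label.
y_true = np.random.randint(0, 5, size=(2, 8, 8, 1)).astype('float32')
y_pred = np.random.rand(2, 8, 8, 4).astype('float32')

with tf.Session() as sess:
    miou = sess.run(sparse_iou_ignoring_last_label(
        tf.constant(y_true), tf.constant(y_pred)))
    print('mean IoU:', miou)

# As a Keras metric it would be passed at compile time, e.g.:
# model.compile(optimizer='adam',
#               loss='sparse_categorical_crossentropy',
#               metrics=[sparse_iou_ignoring_last_label])
```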
If it is correct, I would like to extend it to the case where batch_size > 1.
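One note and a possible sketch (my own, untested, so treat it as an assumption rather than a fix): because the metric flattens every dimension with `reshape(-1, ...)`, it already pools the pixels of the whole batch into a single confusion matrix. If what you want instead is the IoU computed per image and then averaged over the batch, something along these lines might work, assuming `y_true` has shape `(batch, H, W, 1)` and `y_pred` has shape `(batch, H, W, nb_classes)`:

```python
def batch_mean_iou(y_true, y_pred):
    # Per-image mean IoU, averaged over the batch (sketch, untested).
    def single_image_iou(args):
        t, p = args
        # Reuse the flattened metric above on one image at a time.
        return sparse_iou_ignoring_last_label(t, p)

    per_image = tf.map_fn(single_image_iou, (y_true, y_pred), dtype=tf.float32)
    return tf.reduce_mean(per_image)
```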