liutinglt / CE2P


confusion about metric #2

Open qinhaifangpku opened 5 years ago

qinhaifangpku commented 5 years ago

Hi, thanks for sharing this great work!

I borrowed your evaluate.py to compute the mean IoU of my predictions on another dataset. As a sanity check I fed the ground truth in as the prediction, but it only gets mean IU: 0.50232 instead of the expected 1.0.

Thank you very much in advance!

liutinglt commented 5 years ago

@qinhaifangpku You can check the type of the ground truth in your code (uint8 may cause this problem).
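A minimal sketch of why uint8 can do this (the exact failure mode is an assumption, not stated in the thread): confusion-matrix code of this kind typically flattens each (gt, pred) pair into a single bin index, roughly gt * num_classes + pred, and uint8 arithmetic wraps around at 256, so pixels land in the wrong bins:

```python
import numpy as np

num_classes = 20  # hypothetical class count for illustration

# uint8 labels, as typically loaded straight from a ground-truth PNG
gt_uint8 = np.array([15, 19], dtype=np.uint8)
pred = np.array([15, 19], dtype=np.uint8)

# uint8 arithmetic wraps modulo 256, so the flat bin index is wrong
bad_index = gt_uint8 * num_classes + pred
# casting the labels to a wider int first gives the intended index
good_index = gt_uint8.astype(np.int64) * num_classes + pred

print(bad_index)   # [ 59 143], scrambled confusion-matrix cells
print(good_index)  # [315 399], the correct (gt, pred) cells
```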

qinhaifangpku commented 5 years ago

Okay, so what type should it be?

liutinglt commented 5 years ago

@qinhaifangpku You should cast it to int, as in the code I provided: "seg_gt = np.asarray(label[0].numpy(), dtype=np.int)". You can check the metric directly by modifying evaluate.py and replacing seg_pred with seg_gt in "confusion_matrix += get_confusion_matrix(seg_gt, seg_pred, args.num_classes)".
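Putting that together, here is a minimal self-contained sketch of the suggested sanity check (the get_confusion_matrix below is a simplified stand-in for the repo's helper, and the pos/res/tp IoU computation follows the usual pattern of such evaluate.py scripts, so treat the details as assumptions): feeding the ground truth in as the prediction should give a mean IU of exactly 1.0.

```python
import numpy as np

def get_confusion_matrix(gt_label, pred_label, num_classes):
    # Simplified stand-in for the repo's helper: histogram the flattened
    # (gt, pred) pairs into a num_classes x num_classes matrix. Casting to
    # int64 first avoids the uint8 wrap-around described above.
    gt = gt_label.astype(np.int64).ravel()
    pred = pred_label.astype(np.int64).ravel()
    counts = np.bincount(gt * num_classes + pred,
                         minlength=num_classes * num_classes)
    return counts.reshape(num_classes, num_classes)

num_classes = 20  # hypothetical class count
seg_gt = np.random.randint(0, num_classes, size=(512, 512), dtype=np.int64)

# Sanity check: compare the ground truth against itself. The confusion
# matrix is then purely diagonal, so every per-class IoU is 1.0.
confusion_matrix = get_confusion_matrix(seg_gt, seg_gt, num_classes)

pos = confusion_matrix.sum(1)        # pixel count per ground-truth class
res = confusion_matrix.sum(0)        # pixel count per predicted class
tp = np.diag(confusion_matrix)       # correctly labelled pixels
IU_array = tp / np.maximum(1.0, pos + res - tp)
print('mean IU:', IU_array.mean())   # 1.0 when the dtypes are handled correctly
```

If the same check with real uint8 labels drops well below 1.0, the dtype cast is the first thing to fix.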