jsherrah closed this issue 10 years ago
I've updated the evalPredictions.py file to include per-class stats. I wasn't sure whether you wanted a per-class breakdown or the average over all classes contained in the ground truth - either way, the stats are computed in the script; we just need to decide what to return!
Great. Here's what I would like the output to be, old chap:
- global accuracy
- average per-class accuracy (a single number)
- an n x n confusion matrix for the n classes (just google it for details)
ta
G'day! Added a function that returns an n x n confusion matrix for input ground truth and predicted image labels. Check it out - if it looks good, I'll include it in the overall evaluation function that operates over all test images.
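For reference, here's a minimal sketch of such a confusion-matrix function (not the actual code from evalPredictions.py; it assumes class labels are integer arrays with values in 0..n-1):

```python
import numpy as np

def confusion_matrix(gt, pred, n_classes):
    """Return an n x n matrix where entry [i, j] counts pixels
    with ground-truth class i that were predicted as class j."""
    gt = np.asarray(gt).ravel()
    pred = np.asarray(pred).ravel()
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    # np.add.at accumulates counts even for repeated (i, j) index pairs
    np.add.at(cm, (gt, pred), 1)
    return cm
```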
Crikey! I threw me boomerang over your code and now it's fair dinkum.
At the moment this outputs the raw % of pixels that are correctly classified (global).
It would be good to also quote the average per-class accuracy (averaged over classes). This is more informative, since just guessing "grass" or "sky" would probably already get you a reasonable default accuracy due to the ubiquity of these classes.
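Both numbers fall straight out of the confusion matrix above. A hedged sketch of how they might be derived (the function name and the choice to average only over classes present in the ground truth are assumptions, matching the discussion earlier in this thread):

```python
import numpy as np

def accuracies(cm):
    """Derive global and average per-class accuracy from a confusion matrix."""
    cm = cm.astype(np.float64)
    # Global accuracy: fraction of all pixels on the diagonal.
    global_acc = np.diag(cm).sum() / cm.sum()
    # Per-class accuracy: correct pixels of class i / total pixels of class i.
    # Guard against classes absent from the ground truth (zero row sums).
    row_sums = cm.sum(axis=1)
    present = row_sums > 0
    per_class = np.diag(cm)[present] / row_sums[present]
    return global_acc, per_class.mean()
```

Averaging over classes means a classifier that only ever guesses "grass" scores near 1/n here, even if "grass" dominates the pixel count, which is exactly why it's the more informative number.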