frankkramer-lab / aucmedi
A framework for Automated Classification of Medical Images
https://frankkramer-lab.github.io/aucmedi/
GNU General Public License v3.0 · 38 stars · 13 forks
Implement evaluation stuff #88
Closed · muellerdo closed this issue 2 years ago

muellerdo commented 2 years ago
- Training evaluation via `csv_logger` or Keras history
- Performance evaluation with the `preds` matrix from `predict()` and `label_ohe`
- Prediction comparison function (`pred_list=[pred_a, pred_b, pred_c]`, `labels_ohe`)
- Dataset analysis (`input_interface`)
- Model complexity
Training Evaluation
- epoch vs. loss: all models in one plot
- epoch vs. loss: `facet_grid` for each model (with all metrics)
- epoch vs. loss: smoothed over models
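The "smoothed over models" idea can be sketched outside of AUCMEDI with plain NumPy: stack per-model loss histories, average them per epoch, and apply a moving average. The model names and loss values below are hypothetical placeholders, not output of the framework.

```python
import numpy as np

def smooth_curve(values, window=3):
    """Simple moving-average smoothing of a per-epoch loss curve."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# hypothetical per-epoch validation losses for three models
histories = {
    "model_a": np.array([0.90, 0.70, 0.55, 0.50, 0.48]),
    "model_b": np.array([1.00, 0.80, 0.60, 0.52, 0.50]),
    "model_c": np.array([0.95, 0.75, 0.58, 0.51, 0.49]),
}

# "all in one plot": the stacked curves; "smoothed over models":
# the epoch-wise mean curve, additionally smoothed over epochs
stacked = np.stack(list(histories.values()))
mean_curve = stacked.mean(axis=0)
smoothed = smooth_curve(mean_curve, window=3)
```

The resulting arrays are what an epoch-vs-loss plot would consume; actual figure rendering (e.g. via matplotlib) is left out here.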
Performance Evaluation
- ROC curve plot
- Bar plots with all kinds of metrics
- Heatmap for the confusion matrix
- CSV table with computed metrics
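The core of the performance evaluation is deriving metrics from the `preds` matrix and one-hot labels. A minimal NumPy-only sketch (the sample data is invented for illustration; it does not reflect AUCMEDI's internal implementation):

```python
import numpy as np

def confusion_matrix(labels_ohe, preds):
    """Confusion matrix from one-hot labels and a softmax prediction matrix."""
    n_classes = labels_ohe.shape[1]
    truth = labels_ohe.argmax(axis=1)
    pred = preds.argmax(axis=1)
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1
    return cm

# hypothetical outputs: 4 samples, 2 classes
labels_ohe = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
preds = np.array([[0.8, 0.2], [0.4, 0.6], [0.1, 0.9], [0.3, 0.7]])

cm = confusion_matrix(labels_ohe, preds)
accuracy = np.trace(cm) / cm.sum()  # fraction of correctly classified samples
```

The matrix `cm` is the data behind the confusion-matrix heatmap, and per-class rates derived from it would feed the metric bar plots and the CSV table.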
Prediction Comparison
- `facet_grid`: large bar plot, dataset vs. metric
- Performance gain
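"Performance gain" can be read as each model's metric delta against a baseline model. A hedged sketch of that comparison over a `pred_list`-style input, using invented prediction matrices and accuracy as the example metric:

```python
import numpy as np

def accuracy(labels_ohe, preds):
    """Accuracy from one-hot labels and a prediction matrix."""
    return float((labels_ohe.argmax(axis=1) == preds.argmax(axis=1)).mean())

# hypothetical predictions of two models on the same 4 samples
labels_ohe = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])
pred_a = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]])
pred_b = np.array([[0.8, 0.2], [0.1, 0.9], [0.3, 0.7], [0.9, 0.1]])

scores = {name: accuracy(labels_ohe, p)
          for name, p in [("model_a", pred_a), ("model_b", pred_b)]}

# performance gain relative to the first model as baseline
baseline = scores["model_a"]
gain = {name: score - baseline for name, score in scores.items()}
```

The `scores` dictionary (per model, per metric, per dataset) is exactly the long-format data a dataset-vs-metric `facet_grid` bar plot would be drawn from.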
Dataset Analysis
- Class distribution -> bar plot/pie plot & heatmap for multi-label (samples vs. classes)
- Table with class distribution
- Sample showcase -> visualize 4 randomly selected images from each class
- Sampling?
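The data behind the class-distribution plots is straightforward to compute from the annotations an input interface would yield. A small sketch with made-up sample names and classes (not the actual `input_interface` output format):

```python
import numpy as np
from collections import Counter

# hypothetical single-label annotations per sample
classes = ["normal", "pneumonia", "normal", "covid", "normal"]
distribution = Counter(classes)  # data behind the bar plot / pie plot / table

# multi-label case: samples-vs-classes binary matrix behind the heatmap
labels_multi = np.array([[1, 0, 1],
                         [0, 1, 0],
                         [1, 1, 0]])
per_class_counts = labels_multi.sum(axis=0)  # samples per class

# sample showcase: pick up to 4 samples of one class at random
rng = np.random.default_rng(seed=0)
normal_idx = [i for i, c in enumerate(classes) if c == "normal"]
showcase = rng.choice(normal_idx, size=min(4, len(normal_idx)), replace=False)
```

A fixed seed is used so the showcase selection is reproducible across runs.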
muellerdo commented 2 years ago
Current status:
- [x] Training Evaluation
- [x] Performance Evaluation
- [x] Performance comparison between models
- [x] Dataset Analysis
muellerdo commented 2 years ago

- [x] Evaluation submodule description
  - Training evaluation
  - Performance evaluation
  - Prediction comparison function
  - Dataset analysis