`plotcomp_n_getiou` now computes and reports K-L divergence (KLD) for each validation and train pair tested in `train_model.py`. Smaller KLD values are better (smaller divergence).
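For reference, a per-pair score can be computed directly with `tf.keras.losses.KLDivergence`. The sketch below is an assumption about the input shapes (softmax prediction vs. one-hot or soft label), not the exact code inside `plotcomp_n_getiou`:

```python
import numpy as np
import tensorflow as tf

def kld_score(y_true, y_pred):
    """Mean KL divergence between a one-hot (or soft) label image and a
    softmax prediction, both shaped (H, W, n_classes). Smaller is better.
    Keras clips both inputs away from zero internally, so log(0) is safe."""
    y_true = np.asarray(y_true, dtype='float32')
    y_pred = np.asarray(y_pred, dtype='float32')
    return float(tf.keras.losses.KLDivergence()(y_true, y_pred).numpy())
```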
Hatteras stats now:
```
Evaluating model on entire validation set ...
15/15 [==============================] - 2s 164ms/step - loss: 0.0222 - mean_iou: 0.9839 - dice_coef: 0.9778
loss=0.0222, Mean IOU=0.9839, Mean Dice=0.9778
Mean of mean IoUs (validation subset)=0.983
Mean of mean Dice scores (validation subset)=0.977
Mean of mean KLD scores (validation subset)=0.142
Mean of mean IoUs (train subset)=0.988
Mean of mean Dice scores (train subset)=0.982
Mean of mean KLD scores (train subset)=0.100
```
Also added KLD as a model training option:

```python
elif LOSS.startswith('k'):
    # compile with Kullback-Leibler divergence loss; mean_iou and dice_coef
    # are the repo's custom metrics
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.KLDivergence(),
                  metrics=[mean_iou, dice_coef])
```
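As a minimal, self-contained illustration of what that option implies (this is a toy stand-in, not the zoo's ResUnet; `NCLASSES` and the tiny conv model are made up, and the zoo compiles with its custom `mean_iou`/`dice_coef` metrics rather than `accuracy`): KLD treats each pixel's label and prediction as probability distributions, so labels need to be one-hot/soft and the model should end in a softmax (or sigmoid for binary) activation.

```python
import tensorflow as tf

NCLASSES = 4  # placeholder number of classes

# Toy stand-in for a segmentation model: softmax output so predictions are
# per-pixel probability distributions, as KLDivergence expects
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
outputs = tf.keras.layers.Conv2D(NCLASSES, 1, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# equivalent to the LOSS.startswith('k') branch above
model.compile(optimizer='adam',
              loss=tf.keras.losses.KLDivergence(),
              metrics=['accuracy'])
```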
Needs:
- trained ResUnet model with Kullback–Leibler loss on the Klamath L8 C/T imagery, with an 80% validation split and 0.2 dropout
As I suspected, the UNet does converge easily and quickly. Outputs are good, comparable to Dice loss. Perhaps it is a good loss function for a UNet when you want to preserve the overall frequency distribution of classes in model predictions?
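As a toy illustration of that last point (not the training loss itself, which acts per pixel on softmax outputs): the KL divergence between the class-frequency histograms of the label and the prediction penalizes predictions that distort the overall mix of classes, even when the number of mislabeled pixels is similar.

```python
import numpy as np

def class_freq_kld(label, pred, n_classes, eps=1e-7):
    """KL divergence between the class-frequency histograms of an integer
    label image and an integer predicted-label image. Near zero when the
    prediction reproduces the overall proportions of the classes."""
    p = np.bincount(np.asarray(label).ravel(), minlength=n_classes).astype('float64')
    q = np.bincount(np.asarray(pred).ravel(), minlength=n_classes).astype('float64')
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
label = rng.integers(0, 2, size=(64, 64))

balanced = label.copy()
balanced[:8, :] = 1 - balanced[:8, :]  # wrong pixels, but class mix preserved
skewed = label.copy()
skewed[:8, :] = 1                      # wrong pixels, biased toward class 1

print(class_freq_kld(label, balanced, 2))  # ~0
print(class_freq_kld(label, skewed, 2))    # larger
```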
Tested, docs updated. Closing.
It makes sense to explore K-L distance as a loss function or metric; see:
https://github.com/dbuscombe-usgs/segmentation_zoo/issues/32#issuecomment-966740690