netrack / keras-metrics

Metrics for Keras. DEPRECATED since Keras 2.3.0
MIT License

"ValueError: too many values to unpack (expected 2)" on using precision and recall metrics for binary segmentation #22

Closed JennyLouise closed 5 years ago

JennyLouise commented 5 years ago

I'm trying to solve a binary segmentation problem with a highly imbalanced dataset, so tracking the recall and precision is useful. I'm using a keras implementation of a CNN.

import numpy as np
import keras_metrics
from keras import optimizers
from keras.callbacks import ModelCheckpoint, CSVLogger

imgs_train, imgs_mask_train, imgs_test, imgs_validation, mask_validation = self.load_data()
model = self.get_unet()
weighted_loss = weighted_pixelwise_crossentropy(np.array([1, self.loss_weight]))

adam = optimizers.Adam()
precision = keras_metrics.precision()
recall = keras_metrics.recall()
model.compile(loss=weighted_loss, optimizer=adam, metrics=['accuracy', precision, recall])

model_checkpoint = ModelCheckpoint(self.experiment_id + '.hdf5', monitor='loss', verbose=1, save_best_only=True, save_weights_only=True)
csv_logger = CSVLogger(self.augmented_datapath + "_" + self.experiment_id + '.csv', append=True, separator=';')
model.fit(imgs_train, imgs_mask_train, validation_data=[imgs_validation, mask_validation], epochs=20, verbose=1, callbacks=[model_checkpoint, csv_logger])

Here's a snippet of the code I'm using. load_data() returns numpy arrays for the training, testing, and validation data, and the model outputs a (512, 512, 2) array, with a 1 in one of the two channels for each pixel.

Traceback (most recent call last):
  File "job_array_nnet.py", line 74, in <module>
    job_array(args.array_id)
  File "job_array_nnet.py", line 66, in job_array
    nnet.train_neuralnet()
  File "/mainfs/home/jw22g14/DeepSVM/neural_net_model.py", line 172, in train_neuralnet
    model.compile(loss=weighted_loss,optimizer=adam, metrics=['accuracy', precision, recall])
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 440, in compile
    handle_metrics(output_metrics)
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 409, in handle_metrics
    mask=masks[i])
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 403, in weighted
    score_array = fn(y_true, y_pred)
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 218, in __call__
    tp = self.tp(y_true, y_pred)
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 72, in __call__
    y_true, y_pred = self.cast(y_true, y_pred)
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 26, in cast
    return self.cast_strategy(y_true, y_pred, dtype=dtype)
  File "/home/jw22g14/.conda/envs/python3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 44, in _categorical
    _, labels = y_pred.shape
ValueError: too many values to unpack (expected 2)

And here's the stack trace.

JennyLouise commented 5 years ago

I've also tried changing the output to be (512, 512, 1), and I've tried using

precision =keras_metrics.precision(label=1)  
recall = keras_metrics.recall(label=0)

I still get the same error.

ybubnov commented 5 years ago

Hi @JennyLouise, thank you for posting the issue! I didn't quite understand what the (512, 512, 2) notation for the model output means: are these layers of the CNN? Generally speaking, this error is expected behavior that occurs when the model output has more than one label.

The metrics defined in the keras-metrics package assume that the output is single-label, so for a multi-label model you have to explicitly specify the label for which the metric is calculated.

ybubnov commented 5 years ago

If 512, 512, 2 are layers and you want, for the sake of example, to calculate the metrics for the first label, then you can define the metrics like this:

precision = keras_metrics.precision(label=0)
recall = keras_metrics.recall(label=0)

In the example where the model output is (512, 512, 1) and the metrics are defined for labels 1 and 0, the exception is raised because the model produces only a single label while you're trying to calculate metrics for two labels.
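
For illustration, the failure point is visible in the stack trace above: the categorical cast in keras_metrics/metrics.py does _, labels = y_pred.shape, which assumes a 2-D (batch, labels) prediction tensor. A minimal numpy analogue (hypothetical shapes, just to demonstrate the unpacking) reproduces the same error for a 4-D segmentation output:

import numpy as np

# (batch, height, width, classes): four dimensions, but only two names to unpack.
y_pred = np.zeros((4, 512, 512, 2))
_, labels = y_pred.shape  # raises ValueError: too many values to unpack (expected 2)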

JennyLouise commented 5 years ago

Sorry I wasn't clearer about the model output. I'm outputting a segmentation mask for an image; my final layers look like this:

 x = Conv2D(classes, (1, 1), padding='same', name=last_layer_name, activation='softmax')(x)
 x = BilinearUpsampling(output_size=(input_shape[0], input_shape[1]))(x)

I'm trying to get the precision and recall for the whole segmentation mask, rather than just a single label, so in this case there are 512x512 labels and 512x512 predictions for each image.

The code I'm using is based on DeepLabv3: https://github.com/bonlime/keras-deeplab-v3-plus but with only 2 classes, foreground and background.

ybubnov commented 5 years ago

Thank you for the clarification. Following up on my explanation: to use the metrics correctly, you have to calculate them for each predicted output. Having 512^2 predictions implies the same number of metrics.

According to your task description, the prediction can be described as: does a particular pixel belong to the background or the foreground? Measuring precision in this case is applied per pixel, so the task becomes: how correctly is the pixel at position (x, y) predicted?

If, say, your task was to create an image comparison model (a trivial per-pixel equivalence check), where the output is 0 when the input image equals the expected one and 1 when it does not, then measuring the prediction is straightforward, and you can use the metrics from the keras-metrics package as-is.
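
For reference, here is a minimal sketch of that single-output case (a hypothetical toy model, not the segmentation network), where the metrics can be attached as-is:

import keras_metrics
from keras.models import Sequential
from keras.layers import Dense

# One sigmoid output per sample: exactly the single-label shape the package expects.
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(8,)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy', keras_metrics.precision(), keras_metrics.recall()])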

JennyLouise commented 5 years ago

Thanks for your speedy response.

The first option sounds exactly like what I'm trying to do, to get a grasp on how well my model predicts classification at a pixel-by-pixel level. Is there a way to use precision and recall from keras-metrics as metrics when training my model, and to get the average overall precision and recall?

As far as I can tell, option 2 removes the false positive and false negative separation, which I need with such an unbalanced dataset.

ybubnov commented 5 years ago

Unfortunately, there is no option for calculating an average over all outputs. I will think about an API to do this, but that feature will be available (if at all) only in future versions. Right now I'm quite busy with my primary job, so I won't have time until the middle of January. I'm happy to see others' contributions, though.
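
In the meantime, a possible workaround is a hand-rolled metric over the flattened mask. This is only a sketch, not part of keras-metrics: it assumes one-hot (batch, 512, 512, 2) tensors with the foreground in channel 1, and unlike the keras-metrics counters it is averaged per batch rather than accumulated over the whole epoch:

from keras import backend as K

def pixel_precision(y_true, y_pred):
    # Collapse the one-hot channel axis to hard 0/1 foreground labels.
    true_fg = K.cast(K.argmax(y_true, axis=-1), 'float32')
    pred_fg = K.cast(K.argmax(y_pred, axis=-1), 'float32')
    tp = K.sum(true_fg * pred_fg)          # foreground predicted and correct
    fp = K.sum((1. - true_fg) * pred_fg)   # foreground predicted but wrong
    return tp / (tp + fp + K.epsilon())

def pixel_recall(y_true, y_pred):
    true_fg = K.cast(K.argmax(y_true, axis=-1), 'float32')
    pred_fg = K.cast(K.argmax(y_pred, axis=-1), 'float32')
    tp = K.sum(true_fg * pred_fg)          # foreground found
    fn = K.sum(true_fg * (1. - pred_fg))   # foreground missed
    return tp / (tp + fn + K.epsilon())

These can then be passed directly in the metrics list, e.g. metrics=['accuracy', pixel_precision, pixel_recall].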

w5688414 commented 5 years ago

File "train.py", line 82, in <module> metrics=['accuracy',precision,recall]) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 451, in compile handle_metrics(output_metrics) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 420, in handle_metrics mask=masks[i]) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 404, in weighted score_array = fn(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 218, in __call__ tp = self.tp(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 72, in __call__ y_true, y_pred = self.cast(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 26, in cast return self.cast_strategy(y_true, y_pred, dtype=dtype) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 44, in _categorical _, labels = y_pred.shape ValueError: too many values to unpack (expected 2) I met the same error