I've also tried changing the output to be (512, 512, 1), and I've tried using
precision = keras_metrics.precision(label=1)
recall = keras_metrics.recall(label=0)
I still get the same error.
Hi @JennyLouise, thank you for posting the issue!
I didn't quite get what this notation means: (512, 512, 2) for the model output. Are these layers of the CNN? Generally speaking, this error is expected behavior when the model output has more than one label.
The metrics defined in the keras-metrics package assume that the output is single-label, so for a multi-label model you have to explicitly specify the label for which the metric is calculated.
If 512, 512, 2 are layers and you want, for the sake of example, to calculate the metrics for the first label, then you can define the metrics like this:
precision = keras_metrics.precision(label=0)
recall = keras_metrics.recall(label=0)
In the example where the model output is 512, 512, 1 and the metrics are defined for labels 1 and 0, the exception is raised because the model produces only a single label, while you're trying to calculate metrics for two labels.
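To make that concrete, here is a minimal sketch of how per-label metrics are passed to model.compile. The toy model, optimizer, and loss are placeholders for illustration, not taken from this issue:

import keras_metrics
from keras.models import Sequential
from keras.layers import Dense

# Toy two-class model with a flat (batch, 2) output, the shape keras-metrics expects.
model = Sequential([Dense(2, activation="softmax", input_shape=(16,))])

# keras-metrics computes each metric for a single label, so the label index
# has to be given explicitly for a multi-label output.
precision = keras_metrics.precision(label=0)
recall = keras_metrics.recall(label=0)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", precision, recall])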
Sorry I wasn't clearer about the model output. I'm outputting a segmentation mask for an image; my final layers look like this:
x = Conv2D(classes, (1, 1), padding='same', name=last_layer_name, activation='softmax')(x)
x = BilinearUpsampling(output_size=(input_shape[0], input_shape[1]))(x)
I'm trying to get the precision and recall for the whole segmentation mask, rather than just a single label, so in this case there are 512x512 labels and 512x512 predictions for each image.
The code I'm using is based on DeepLabv3: https://github.com/bonlime/keras-deeplab-v3-plus but only for 2 classes, foreground and background
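As an aside, the shape mismatch behind the error can be reproduced outside of Keras: the categorical cast in keras-metrics unpacks the prediction shape into (batch, labels), which cannot work for a 4-D mask. A small numpy illustration (the shapes match this thread; the rest is made up for demonstration):

import numpy as np

# Segmentation output shape: (batch, height, width, classes).
y_pred = np.zeros((4, 512, 512, 2))

# keras-metrics' categorical cast does `_, labels = y_pred.shape`,
# which only works for a 2-D (batch, labels) output.
try:
    _, labels = y_pred.shape
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)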
Thank you for the clarification. Following up on my explanation: to use the metrics correctly, you have to calculate them for each predicted output. Having 512^2 predictions implies the same number of metrics.
According to your task description, the prediction can be described as: which class a particular pixel belongs to, background or foreground. Measuring precision in this case applies to each pixel, so the task now sounds like: how correctly is the pixel at position (x, y) predicted?
If, say, your task was to create an image comparison model (which is trivial per-pixel equivalence), where the output is 0 when the input image equals the expected one and 1 when it does not, then measuring the prediction is straightforward and you can use the metrics from the keras-metrics package as is.
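In that hypothetical single-output case the package's metrics could be wired in directly, for example (the toy model and compile arguments are again only placeholders, and the no-argument calls assume the package's default label):

import keras_metrics
from keras.models import Sequential
from keras.layers import Dense

# One sigmoid unit per sample: 0 means the images match, 1 means they differ.
model = Sequential([Dense(1, activation="sigmoid", input_shape=(32,))])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras_metrics.precision(), keras_metrics.recall()])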
Thanks for your speedy response.
The first option sounds exactly like what I'm trying to do: to get a grasp on how well my model predicts the classification at a pixel-by-pixel level. Is there a way to use precision and recall from keras-metrics as metrics when training my model, and to get the average overall precision and recall?
As far as I can tell, option 2 removes the separation between false positives and false negatives, which I need with such an unbalanced dataset.
Unfortunately, there is no option to calculate an average over all outputs. I will think about an API for this, but that feature will be available (if at all) only in future versions. Right now I'm quite busy with my primary job, so I won't have time until the middle of January. I'm happy to see others' contributions, though.
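Until something like that exists in the package, one rough workaround is a hand-rolled Keras metric that flattens the mask so every pixel counts as one prediction. This is only a sketch, not part of keras-metrics, and it assumes channel 1 of the (512, 512, 2) softmax output is the foreground class:

from keras import backend as K

def pixelwise_precision(y_true, y_pred):
    # Flatten the foreground channel so each pixel is a single prediction.
    true_fg = K.flatten(y_true[..., 1])
    pred_fg = K.flatten(K.round(y_pred[..., 1]))
    true_positives = K.sum(true_fg * pred_fg)
    predicted_positives = K.sum(pred_fg)
    return true_positives / (predicted_positives + K.epsilon())

def pixelwise_recall(y_true, y_pred):
    true_fg = K.flatten(y_true[..., 1])
    pred_fg = K.flatten(K.round(y_pred[..., 1]))
    true_positives = K.sum(true_fg * pred_fg)
    actual_positives = K.sum(true_fg)
    return true_positives / (actual_positives + K.epsilon())

Both can be passed to model.compile(metrics=[...]) like any custom Keras metric; keep in mind the reported values are averaged over batches rather than computed once per epoch.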
File "train.py", line 82, in <module> metrics=['accuracy',precision,recall]) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 451, in compile handle_metrics(output_metrics) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 420, in handle_metrics mask=masks[i]) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 404, in weighted score_array = fn(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 218, in __call__ tp = self.tp(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 72, in __call__ y_true, y_pred = self.cast(y_true, y_pred) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 26, in cast return self.cast_strategy(y_true, y_pred, dtype=dtype) File "/home/eric/anaconda3/lib/python3.6/site-packages/keras_metrics/metrics.py", line 44, in _categorical _, labels = y_pred.shape ValueError: too many values to unpack (expected 2)
I ran into the same error.
I'm trying to solve a binary segmentation problem with a highly imbalanced dataset, so tracking the recall and precision is useful. I'm using a keras implementation of a CNN.
Here's a snippet of the code I'm using. load_data() returns numpy arrays for the training, testing, and validation data, and the model returns a (512, 512, 2) array, with 1s in one of the two columns for each pixel.
And here's the stack trace.