rezazad68 / BCDU-Net

BCDU-Net : Medical Image Segmentation

Two problems, help, Mr. Rezazad #31

Closed. 675492062 closed this issue 3 years ago.

675492062 commented 3 years ago

1) The jaccard_similarity_score function from sklearn.metrics used in your code is outdated, so the reported results may be wrong. Since scikit-learn 0.23 it has been removed and replaced by jaccard_score, and with the new metric the test results become lower. For binary segmentation, the old jaccard_similarity_score is simply equal to accuracy (see the sketch at the end of this comment).

2) The U-Net model is hard to train and there is no validation accuracy improvement when training on my GPUs:

815/1815 [==============================] - 79s 44ms/step - loss: 0.6670 - acc: 0.7923 - val_loss: 0.6779 - val_acc: 0.8619
Epoch 2/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.0440 - acc: 0.7923 - val_loss: 1.6138 - val_acc: 0.8619
Epoch 3/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1744 - acc: 0.7923 - val_loss: 1.6138 - val_acc: 0.8619
Epoch 4/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1744 - acc: 0.7923 - val_loss: 1.6138 - val_acc: 0.8619
Epoch 5/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1742 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 6/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 7/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 8/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619

Epoch 00008: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-06.
Epoch 9/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 10/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 11/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 12/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 13/100
1815/1815 [==============================] - 70s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 14/100
1815/1815 [==============================] - 70s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 15/100
1815/1815 [==============================] - 70s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
.....
Epoch 00029: ReduceLROnPlateau reducing learning rate to 1.0000000116860975e-08.
Epoch 30/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 31/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 32/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 33/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 34/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 35/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 36/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619

Epoch 00036: ReduceLROnPlateau reducing learning rate to 9.999999939225292e-10.
Epoch 37/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 38/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 39/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 40/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 41/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 42/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 43/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619

Epoch 00043: ReduceLROnPlateau reducing learning rate to 9.999999717180686e-11.
Epoch 44/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 45/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 46/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619
Epoch 47/100
1815/1815 [==============================] - 71s 39ms/step - loss: 2.1737 - acc: 0.7923 - val_loss: 1.6127 - val_acc: 0.8619

And finally, the evaluation results are very bad:

Area under the ROC curve: 0.5

Area under Precision-Recall curve: 0.6370608990009015

Confusion matrix (custom threshold of 0.5 for positive):
[[24737000        0]
 [ 9341720        0]]
Global Accuracy: 0.7258782019981971
Specificity: 1.0
Sensitivity: 0.0
Precision: 0

Jaccard similarity score: 0.0

F1 score (F-measure): 0.0
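
As a quick sanity check of my own (not the repository's evaluation script), these numbers are exactly what you get when the network predicts only background for every pixel:

```python
# Recompute the reported metrics from the confusion matrix above.
# TN = 24737000, FP = 0, FN = 9341720, TP = 0 -> every pixel predicted as background.
tn, fp = 24737000, 0
fn, tp = 9341720, 0

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 0.7258782019981971
specificity = tn / (tn + fp)                    # 1.0
sensitivity = tp / (tp + fn)                    # 0.0
jaccard     = tp / (tp + fp + fn)               # 0.0 (positive-class IoU)
f1          = 2 * tp / (2 * tp + fp + fn)       # 0.0
print(accuracy, specificity, sensitivity, jaccard, f1)
```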

So strange!
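
Regarding point 1, here is a minimal sketch (my own example on flattened binary masks, not code from this repository) of why switching to jaccard_score lowers the reported score:

```python
import numpy as np
from sklearn.metrics import accuracy_score, jaccard_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # ground-truth mask, flattened
y_pred = np.array([0, 0, 1, 0, 1, 0, 0, 0])  # predicted mask, flattened

# The removed jaccard_similarity_score averaged per-sample scores, which for
# flattened binary labels reduces to plain pixel accuracy.
print("accuracy (what the old metric reported):", accuracy_score(y_true, y_pred))  # 0.75

# jaccard_score computes the intersection-over-union of the positive class,
# which is the metric segmentation papers usually mean by "Jaccard".
print("IoU of the foreground class:", jaccard_score(y_true, y_pred))  # 0.5
```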

675492062 commented 3 years ago

Very strange. I switched to another model (without changing any other code) and training and prediction became normal.

675492062 commented 3 years ago

I just changed the batch size from 8 to 4, and training and prediction became normal too, which is really hard to understand.

rezazad68 commented 3 years ago

Thanks for your interest in our work. Please note that you need to use appropriate hyperparameters for the specific model; the default values are tuned for the BCDU-Net models. If you want to train U-Net or any other model, try different learning rates, batch sizes, and loss functions. Best
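
For anyone hitting the same collapse, here is a hedged sketch of the kind of changes suggested above (smaller learning rate, smaller batch size, a Dice-style loss). The dice_loss helper, the 1e-4 learning rate, and the build_unet name are my own assumptions, not the repository's training script:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_loss(y_true, y_pred, smooth=1.0):
    # Soft Dice loss: penalizes all-background predictions much harder than
    # plain binary cross-entropy does on class-imbalanced masks.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dice

# model = build_unet(...)  # hypothetical model constructor
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
#               loss=dice_loss,
#               metrics=["accuracy"])
# model.fit(X_train, y_train,
#           batch_size=4, epochs=100,
#           validation_data=(X_val, y_val),
#           callbacks=[tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
#                                                           factor=0.1, patience=5)])
```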