Open luistelmocosta opened 4 years ago
Depending on the exact details of your problem and what your goal is, you can try different modifications to the loss rather than adjusting the network itself.
The original U-Net paper does two things to account for class imbalance. First, it adds a class weight to the loss so that false negatives and false positives do not contribute to the loss function by the same amount. Second, it adds a weight that depends on where a misclassified pixel lies relative to the true segmentation, so that wrong classifications at the borders of cells are weighted more heavily than wrong classifications at the center of a cell. This forces the network to pay more attention to accurately separating the cells.
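The border-emphasis term can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's exact implementation: it measures distance to the nearest pixel of each cell instance rather than to its border, and omits the class-balancing term w_c; the function names and default values of `w0` and `sigma` match the paper's reported settings but the rest is ours.

```python
import numpy as np

def nearest_distance(mask):
    """Brute-force Euclidean distance from every pixel to the nearest
    foreground pixel of `mask` (fine for small illustrative grids)."""
    ys, xs = np.nonzero(mask)
    yy, xx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    d = np.sqrt((yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2)
    return d.min(axis=-1)

def border_weight_map(instance_masks, w0=10.0, sigma=5.0):
    """w(x) = w0 * exp(-(d1 + d2)^2 / (2 * sigma^2)), where d1 and d2
    are the distances to the nearest and second-nearest cell instance,
    so pixels in narrow gaps between cells get the largest weight."""
    dists = np.stack([nearest_distance(m) for m in instance_masks])
    dists.sort(axis=0)  # per-pixel: dists[0] = nearest, dists[1] = second-nearest
    d1, d2 = dists[0], dists[1]
    return w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
```

A pixel in the thin gap between two cells then receives a much larger weight than a pixel far from both, which is what pushes the network to get the separating background ridge right.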
You can pass a `class_weight` argument to `model.fit` in Keras. You can also look up how to write custom loss functions in Keras if you need something more specific.
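As a concrete sketch (plain NumPy, with an illustrative `pos_weight` value and function name), a class-weighted binary cross-entropy of the kind you could port to a Keras custom loss looks like this:

```python
import numpy as np

def weighted_bce(probs, targets, pos_weight=5.0):
    """Pixel-wise binary cross-entropy with the foreground term scaled
    by `pos_weight`, so missing a (rare) foreground pixel costs more
    than falsely flagging a background pixel."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # avoid log(0)
    loss = -(pos_weight * targets * np.log(probs)
             + (1.0 - targets) * np.log(1.0 - probs))
    return loss.mean()
```

With `pos_weight=1` this reduces to the usual cross-entropy; a Keras version would take `(y_true, y_pred)` and use `tf.math.log` in place of `np.log`.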
@luistelmocosta Hi there, I'm in a similar situation right now: I am using "negative examples" to train a U-Net because I know the final model will be passed inputs similar to these negative examples during deployment. I'm more interested in literature that coined, upholds, or discusses the term "negative examples", so I can read and cite it in my work too. Please, did you ever find any such articles or papers? Thanks :)
Hello, I did some research on this topic at the time and I collected some papers/articles that I found interesting.
Paper: "U-Net: Convolutional Networks for Biomedical Image Segmentation" by Olaf Ronneberger, Philipp Fischer, and Thomas Brox. This is the original paper introducing the U-Net architecture, which provides insights into the design choices and showcases its performance on biomedical image segmentation tasks. You can find the paper here: https://arxiv.org/abs/1505.04597
Paper: "A Review of Deep-Learning-Based Medical Image Segmentation Methods" by Xiangbin Liu, Liping Song, Shuai Liu and Yudong Zhang. This paper provides an overview of various deep learning techniques, including U-Net, for medical image segmentation tasks. It discusses the importance of negative examples and explores strategies for handling class imbalance. You can access the paper here: https://www.mdpi.com/2071-1050/13/3/1224
Paper: "Focal Loss for Dense Object Detection" by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Although this paper focuses on object detection, it introduces the focal loss, which addresses the issue of class imbalance and can be adapted for segmentation tasks. The focal loss assigns higher weights to hard examples, which can be useful for training the model to pay more attention to challenging negative examples. You can find the paper here: https://arxiv.org/abs/1708.02002
Paper: "Improving Object Localization with Fitness NMS and Bounded IoU Loss" by Lachlan Tychsen-Smith and Lars Petersson. This paper proposes a method called "Fitness NMS" to improve object localization in images. While not specifically focused on medical image segmentation, it presents an interesting approach that addresses false positives by considering the model's confidence scores. You can access the paper here: https://arxiv.org/abs/1711.00164
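For completeness, the focal loss from the Lin et al. paper above is compact enough to sketch directly. The `gamma` and `alpha` defaults are the paper's; the NumPy framing and function name are ours.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    pixels, so training focuses on the hard (often false-positive)
    ones; with gamma=0 it reduces to alpha-weighted cross-entropy."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

A confidently correct pixel contributes orders of magnitude less loss than a badly wrong one, which is the mechanism that lets it cope with extreme foreground/background imbalance.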
Thanks for the reply and the links. :)
In my case, I'm only looking for literature that addresses negative examples. I have two classes: background and target-class. Here, my "negative examples" are images whose associated ground-truth annotation (binary mask) is empty, so the entire image is labeled "background". It's not really a new class, but a way to teach the model what is not target-class. It does skew the training set toward "background", but so far I'm getting great results anyway :)
Also, I skimmed A Review of Deep-Learning-Based Medical Image Segmentation Methods but I could not find any discussion on negative examples. I'll take a closer look later.
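One side note on empty-mask negatives (an observation from practice, not from the papers above): if you train with a soft Dice loss, an all-background ground truth makes the overlap term zero no matter what the network predicts, so Dice barely distinguishes a clean prediction from one full of false positives, while pixel-wise cross-entropy still penalizes every false-positive pixel. A quick NumPy check (the 100-pixel toy "image" and probability values are illustrative):

```python
import numpy as np

def soft_dice_loss(probs, targets, eps=1e-7):
    # 1 - 2|P.T| / (|P| + |T|), the usual smoothed soft Dice
    inter = (probs * targets).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + targets.sum() + eps)

def bce(probs, targets):
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    return -(targets * np.log(probs)
             + (1 - targets) * np.log(1 - probs)).mean()

empty = np.zeros(100)          # negative example: all-background mask
clean = np.full(100, 0.01)     # almost no false positives
noisy = np.full(100, 0.6)      # lots of false positives
```

Dice returns roughly 1.0 for both predictions, while cross-entropy separates them clearly. A common workaround is a combined Dice + cross-entropy loss, so the negative examples still contribute a useful gradient against false positives.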
Hello, I am working with a U-Net segmentation model for medical image analysis, and I would like to know how important negative examples (images with empty masks) are for teaching my model that some images are 100% negative. I am asking because I took a bunch of negative examples and added them to my dataset, a kind of hard negative mining, and I am still getting a lot of false positives. Does U-Net learn anything from negative examples? Is there any other way to force my model to learn these 'negative' features?
If you could also provide relevant material on this (articles, papers, questions), I would highly appreciate it.
Kind regards