zhixuhao / unet

unet for image segmentation
MIT License

How relevant are negative examples for a Unet segmentation model? #166

Open luistelmocosta opened 4 years ago

luistelmocosta commented 4 years ago

Hello, I am working with a U-Net segmentation model for medical image analysis, and I would like to know how important negative examples (images with empty masks) are for the model to learn that some images are 100% negative. I am asking because I took a bunch of negative examples and added them to my dataset, as a kind of hard negative mining, and I am still getting a lot of false positives. Does U-Net learn anything from negative examples? Is there any other way to force my model to learn these 'negative' features?

If you could also point me to relevant material on this (articles, papers, related questions), I would greatly appreciate it.

Kind regards

huanglau commented 4 years ago

Depending on the exact details of your problem and what your goal is, you can try different modifications to the loss rather than adjusting the network itself.

The original U-Net paper does two things to account for class imbalance. First, it adds a class weight to the loss to compensate for the imbalanced pixel classes, so that false negatives and false positives do not contribute to the loss by the same amount. Second, it adds a weight that depends on the location of a misclassified pixel relative to the true segmentation, so that wrong classifications at the borders of cells are weighted more heavily than wrong classifications at the centers of cells. This forces the network to pay more attention to accurately separating touching cells.
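For concreteness, here is a rough sketch of that weight map, following the formula from the paper, w(x) = w_c(x) + w0 · exp(−(d1(x) + d2(x))² / (2σ²)), where d1 and d2 are the distances to the nearest and second-nearest cell, and w0 = 10, σ ≈ 5 px are the paper's values. The exact class-balancing term and applying the border term on background only are common implementation choices, not something this repo ships:

```python
# Sketch of the per-pixel weight map from Ronneberger et al. (2015).
import numpy as np
from scipy.ndimage import label, distance_transform_edt

def unet_weight_map(mask, w0=10.0, sigma=5.0):
    """mask: 2-D binary array (1 = cell, 0 = background)."""
    # Class-balancing term w_c(x): weight each class by its inverse frequency.
    n_fg = max(int(mask.sum()), 1)
    n_bg = max(int((mask == 0).sum()), 1)
    w_c = np.where(mask > 0, mask.size / (2.0 * n_fg), mask.size / (2.0 * n_bg))

    labeled, n = label(mask)
    if n < 2:
        return w_c  # the border term needs at least two separate objects

    # Distance from every pixel to each labeled object, sorted so that
    # d1 is the nearest object and d2 the second nearest.
    dists = np.stack([distance_transform_edt(labeled != i)
                      for i in range(1, n + 1)], axis=-1)
    dists.sort(axis=-1)
    d1, d2 = dists[..., 0], dists[..., 1]
    border = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    # Following common implementations, apply the border term on background only.
    return w_c + border * (mask == 0)
```

The resulting map can then be fed into training as a per-pixel sample weight.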

In Keras you can pass a `class_weight` dictionary to `model.fit()`. You can also look up how to write custom loss functions in Keras if you need something more specific.
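As an illustration of the custom-loss route, here is a minimal sketch of a weighted binary cross-entropy for Keras, assuming a single-channel sigmoid output like the model in this repo; `pos_weight` is a hypothetical knob, not an existing parameter:

```python
import tensorflow as tf

def weighted_bce(pos_weight=1.0):
    # pos_weight < 1 down-weights the positive class, which penalizes
    # false positives relatively more; pos_weight > 1 does the opposite.
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        bce = -(pos_weight * y_true * tf.math.log(y_pred)
                + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
        return tf.reduce_mean(bce)
    return loss

# Usage (sketch):
# model.compile(optimizer='adam', loss=weighted_bce(pos_weight=0.5))
```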

emmanuel-nwogu commented 1 year ago

@luistelmocosta Hi there, I'm in a similar situation right now: I am using "negative examples" to train a U-Net because I know the final model will be fed inputs similar to these negative examples during deployment. I'm more interested in literature that coined, upholds, or discusses the term "negative examples" so I can read and cite it in my work too. Did you ever find any such articles or papers? Thanks :)

luistelmocosta commented 1 year ago

Hello, I did some research on this topic at the time and I collected some papers/articles that I found interesting.

emmanuel-nwogu commented 1 year ago

Thanks for the reply and the links. :)

In my case, I'm only looking for literature that addresses negative examples. I have two classes: background and target-class. Here, my "negative examples" are examples whose ground-truth annotation (a binary mask) is empty, so the entire image is labeled "background". It's not necessarily a new class but a way to teach the model what is *not* target-class. It does skew the training set toward "background", but so far I'm getting great results anyway :)
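One way to keep that skew under control is to fix the fraction of negative examples per batch. A hypothetical sketch, assuming `positives` and `negatives` are lists of (image, mask) pairs, each larger than the counts drawn:

```python
import random

def mixed_batches(positives, negatives, batch_size=8, neg_fraction=0.25):
    # Draw each batch with a fixed share of all-background examples,
    # so they inform training without dominating it.
    n_neg = int(batch_size * neg_fraction)
    n_pos = batch_size - n_neg
    while True:
        batch = random.sample(positives, n_pos) + random.sample(negatives, n_neg)
        random.shuffle(batch)
        yield batch
```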

Also, I skimmed *A Review of Deep-Learning-Based Medical Image Segmentation Methods* but could not find any discussion of negative examples. I'll take a closer look later.