Closed: foreignsand closed this issue 7 months ago
There are a couple options that might help.
1. You can post-process the predicted masks with a morphological "close" operation to fill in holes (e.g. applied to each mask in `masks_this` from your result):

```python
import cv2
import numpy as np

# Set up morphological filter (change ksize to fill in bigger holes)
ksize = (15, 15)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, ksize)

cleaner_masks_this = []
for mask in masks_this:
    # Convert the boolean mask to uint8, close it, then threshold back to boolean
    mask_uint8 = np.uint8(mask) * 255
    new_bool_mask = cv2.morphologyEx(mask_uint8, cv2.MORPH_CLOSE, kernel) > 127
    cleaner_masks_this.append(new_bool_mask)
```
2. You can try using the mask input to the predictor. This seems very finicky, but there are posts (see #347 or #169) that suggest you can get better results by iteratively feeding the mask predicted by SAM back into it and re-predicting.
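For what it's worth, here's a minimal sketch of that feedback loop. It assumes a `SamPredictor` that already has the image set; the function name and iteration count are my own, and `mask_input` takes the 1x256x256 low-res logits returned by the previous call:

```python
import numpy as np

def refine_with_mask_feedback(predictor, box, n_iters=2):
    """Predict once from a box prompt, then feed the best mask's low-res
    logits back in via `mask_input` and re-predict a few times.

    `predictor` is a SamPredictor with the image already set; `box` is an
    (4,) xyxy array. Returns the final boolean mask.
    """
    masks, scores, logits = predictor.predict(box=box, multimask_output=True)
    best = int(np.argmax(scores))
    for _ in range(n_iters):
        masks, scores, logits = predictor.predict(
            box=box,
            # 1x256x256 logits from the previous pass
            mask_input=logits[best][None, :, :],
            multimask_output=True,
        )
        best = int(np.argmax(scores))
    return masks[best]
```

In my (limited) experience this sometimes helps and sometimes oscillates, so it's worth keeping the per-iteration scores and stopping early if they drop.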
3. If possible, you can try using point prompts instead of the box prompt (i.e. when calling `mask_predictor.predict(...)`). Obviously that doesn't improve the box-based results themselves, but if points are an option for you, they might help. Using a mix of positive and negative points tends to pick up more complex shapes, from what I've seen.
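Concretely, point prompts are just (x, y) pixel coordinates paired with 1/0 labels. The coordinates below are hypothetical stand-ins for clicks inside the target colony (positive) and on the neighbours it's growing into (negative):

```python
import numpy as np

# Hypothetical click locations in (x, y) image pixel coordinates:
# two positive points inside the target colony, two negative points
# on the neighbouring colonies it is growing into.
point_coords = np.array([
    [210, 340], [250, 360],   # positive clicks (label 1)
    [180, 300], [290, 400],   # negative clicks (label 0)
])
point_labels = np.array([1, 1, 0, 0])

# With the image already set on the predictor, the call would look like:
# masks, scores, logits = mask_predictor.predict(
#     point_coords=point_coords,
#     point_labels=point_labels,
#     multimask_output=True,
# )
```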
Here's an example of the result (using the web demo) on what seemed like the toughest segmentation:
![multipoint_example](https://github.com/facebookresearch/segment-anything/assets/32405350/67398006-ac74-473a-95d8-adf9bf5556f1)
4. And lastly, if you're planning to do a lot of this kind of segmentation, then using a variant of SAM that is fine tuned for these kinds of images might help. I don't know anything about this kind of stuff, so I can't be of much help, but a quick search returned [CellSam](https://gist.github.com/sushmanthreddy/618e642d2adfc6b58b6b5df0e9dbd3cd) which seems vaguely related, and might be useful? Fine-tuning your own variant could be a lot of work, so it's only worthwhile if you're going to be working with a lot of these images.
Thank you so much! This is really thorough and helpful!
I've been using positive and negative points to help refine the mask as suggested in option 3. I'll probably need to do something more like 4 in the long run because I will be doing this quite a bit, but for now this is working better!
Again, thanks so much!
Cheers, Emily
Is there a way to improve SamPredictor segmentation when using bounding boxes?
I have something I want to segment (fungal colonies that are growing into each other in a petri dish), and the predictive model doesn't do the greatest job of segmenting it:
This is relatively easy to do with a petri dish with separated colonies like this:
And the following script successfully segments the above petri dish with separated colonies:
If I change the box locations, it does an okay job with the first petri dish, but it is still patchy.
I'm not sure if there are settings that could improve the quality of the segmentation and would love to hear suggestions.
Many thanks, Emily