Closed hermancollin closed 1 year ago
In a now-deleted notebook on the MedSAM repo, this was used to get the full bbox:

```python
# gt is the ground-truth batch of shape (B, C, H, W)
B, _, H, W = gt.shape
boxes = torch.from_numpy(np.array([[0, 0, W, H]] * B)).float().to(device)
```
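For reference, here is the same full-image box construction without the torch dependency, using dummy shapes (in the snippet above, `gt` is the ground-truth batch and `device` the target device — neither is shown here):

```python
import numpy as np

# Dummy ground-truth batch shape (B, C, H, W): 2 masks of height 64, width 128.
B, C, H, W = 2, 1, 64, 128

# One [x_min, y_min, x_max, y_max] box per batch element, covering the whole
# image. Note the (W, H) order at the end: x spans the width, y the height.
boxes = np.array([[0, 0, W, H]] * B, dtype=np.float32)

print(boxes.shape)  # (2, 4)
```

Each row is identical, so the same full-image prompt is fed to every element of the batch.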
Preliminary results from the ac/add_axon_segmentation_training branch, trained for 100 epochs. Not very impressive.
[Figure: GT vs. prediction]
I can see a trend across the 8 validation images: the right part of each image is segmented much more poorly than the left part (as seen above). I suspect there is a problem with the prompt.
This was indeed the case: the validation bboxes were rotated 90 degrees. Fixed in a62a7ecf4424840989167eab46e9fa9752eb5466.
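The commit itself isn't shown here, but a plausible picture of the bug, assuming the boxes were built with the (H, W) axes swapped (a hypothetical reconstruction, not the actual fix):

```python
import numpy as np

# For an image of height H and width W, a full-image box in the correct
# [x_min, y_min, x_max, y_max] order is [0, 0, W, H].
H, W = 64, 128
correct = np.array([0, 0, W, H])

# A "90-degree rotated" box is what you get by swapping the axis order,
# i.e. accidentally emitting [0, 0, H, W] instead.
rotated = np.array([0, 0, H, W])

def unrotate(box):
    """Swap x/y coordinates: [a, b, c, d] -> [b, a, d, c]."""
    return box[[1, 0, 3, 2]]

print(np.array_equal(unrotate(rotated), correct))  # True
```

On a non-square image such a swap clips the prompt along one axis and pads it along the other, which would explain why one side of the image was consistently segmented worse.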
Now it's slightly better.
We would like to know how easily SAM can be fine-tuned for fully automatic segmentation (i.e. prompting with the whole image as the bounding box instead of a specific region of interest).
Axon segmentation is the perfect pretext to try this.