Hi there,
I am new to the cell segmentation task and have two questions about your CVPR paper. Hope you could share your insights if possible :)
I noticed you use U-Net instead of Mask R-CNN in the paper. Is that because you wanted to stay consistent with the baseline models, or because U-Net yields better performance?
Since your annotation set is actually larger than the MICCAI18 dataset, do you think using it as the training data is a feasible way to get better performance on general cell segmentation tasks?
We used U-Net because it is much faster than Mask R-CNN. We need to segment 5k whole-slide images; each one already takes several hours with U-Net, and using Mask R-CNN would make the process much longer.
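For context, running a U-Net over a whole-slide image is usually done tile by tile. Below is a minimal sketch of that idea (my own illustration, not the paper's actual pipeline), assuming PyTorch and that `unet` is any `torch.nn.Module` producing a single-channel logit map:

```python
import numpy as np
import torch

def segment_large_image(image, unet, tile=512, device="cuda"):
    """Run a U-Net tile by tile over a large H x W x 3 uint8 image.

    Assumes H and W are multiples of `tile`; a real pipeline would pad
    the borders and blend overlapping tiles.
    """
    unet = unet.to(device).eval()
    h, w, _ = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = image[y:y + tile, x:x + tile]
                # HWC uint8 -> NCHW float in [0, 1]
                t = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                prob = torch.sigmoid(unet(t.to(device)))[0, 0]
                out[y:y + tile, x:x + tile] = prob.cpu().numpy()
    return out
```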
By "annotation", do you mean the synthetic data? If so, I think it is hard to say definitively whether the large-scale synthetic data is better than the manually annotated MICCAI18 dataset -- it depends on the test set. My impression is that a model trained on the large-scale synthetic data performs reasonably well across a variety of test cases, while a model trained on the manually annotated data performs very well on some cases but fails on others. I think a combination of the two would be best.
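In case it helps, here is a minimal sketch of one way to mix the two sources during training (my own assumption, not something reported in the paper), using PyTorch's `ConcatDataset`; the placeholder tensors stand in for the synthetic and MICCAI18 image/mask pairs:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

# Placeholder tensors standing in for the two sources; in practice these would
# be image/mask pairs loaded from the synthetic annotations and from MICCAI18.
synthetic = TensorDataset(torch.rand(200, 3, 64, 64), torch.zeros(200, 1, 64, 64))  # large, auto-labeled
manual = TensorDataset(torch.rand(30, 3, 64, 64), torch.zeros(30, 1, 64, 64))       # small, hand-labeled

combined = ConcatDataset([synthetic, manual])

# Oversample the small manually annotated set so it is not drowned out by the
# much larger synthetic set.
weights = torch.cat([torch.full((len(synthetic),), 1.0 / len(synthetic)),
                     torch.full((len(manual),), 1.0 / len(manual))])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=8, sampler=sampler)
```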
Thanks!