theodu closed this issue 11 months ago
I would suggest finetuning the region proposal network on the blood cell dataset. The current approach relies on an existing region proposal network trained on COCO / LVIS, so it naturally performs better on everyday scenes than on specialised use cases like biology.
Honestly, though, a domain shift this large may be more than a pretrained model can handle, so I cannot guarantee this method will work. My suggestion is to finetune the RPN first and, if that is still not sufficient, the RCNN head as well.
Given the nature of this dataset, you may even want to try a purely supervised method first to measure the ceiling performance.
Thank you for your answer!
I will indeed try to retrain the RPN and/or the RCNN. After that, what would you suggest as the best mask approach for this dataset?
I think the best mask approach depends on your application requirements; for example, whether each single cell should be detected separately is something the actual use case and users (biologists, in this case?) would have to determine.
@theodu Were you able to fine tune the model?
I have another question regarding the dataset used to build the prototypes of the novel classes. My use case is to annotate new images using already-annotated images, from which I build my prototypes. What would you suggest as the optimal way to build the context image masks?
For example, for this image from the blood-cell-object-detection dataset (3 classes: rbc, wbc and platelets):
I tried many different inputs, such as:
- a few instances of each class
- all instances of a class
- only one instance of a class
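All of the variants above come down to how instance masks are pooled into one prototype per class. A minimal sketch of masked average pooling, assuming a feature map and binary instance masks (function name and shapes are hypothetical, not this repo's API):

```python
import torch

def masked_prototype(features, masks):
    """Build one class prototype by averaging features under binary masks.

    features: (C, H, W) feature map.
    masks:    (N, H, W) binary masks, one per annotated instance.
    Returns a (C,) prototype vector.
    """
    masks = masks.float()
    # Sum features inside each mask, normalise by mask area -> (N, C).
    per_instance = (features[None] * masks[:, None]).sum((-1, -2))
    per_instance = per_instance / masks.sum((-1, -2))[:, None]
    # Average the per-instance vectors into a single class prototype.
    return per_instance.mean(0)

feats = torch.full((8, 16, 16), 2.0)
m = torch.zeros(3, 16, 16)
m[0, :4, :4] = 1
m[1, 8:, 8:] = 1
m[2, 4:8, 4:8] = 1
proto = masked_prototype(feats, m)
print(proto.shape)  # torch.Size([8])
```

With this formulation, using a few instances versus all instances only changes how many vectors get averaged; noisy or atypical instances can drag the prototype away from the class mean, which may explain why more masks does not always help.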
I haven't yet tried cropping the image around its instances to create several subset images, each with one big object inside.
But all of these methods produce results that are not accurate enough.
In your YCB demo, you use images with one big object at the center of the image. Is there a way for me to optimise my dataset ?
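To mimic that YCB-style input (one big object centered in the frame), each instance crop can be resized and placed at the center of a fixed-size square canvas. A dependency-free sketch; the helper name, canvas size, and nearest-neighbour resize are all assumptions, not the repo's preprocessing:

```python
import numpy as np

def center_on_square_canvas(crop, size=224):
    """Resize a crop to fit a square canvas and place it at the center,
    so the object dominates the frame like the YCB demo images."""
    h, w = crop.shape[:2]
    scale = size / max(h, w)
    # Nearest-neighbour resize via index sampling (keeps the sketch dependency-free).
    ys = (np.arange(int(h * scale)) / scale).astype(int)
    xs = (np.arange(int(w * scale)) / scale).astype(int)
    resized = crop[ys][:, xs]
    canvas = np.zeros((size, size, crop.shape[2]), dtype=crop.dtype)
    y0 = (size - resized.shape[0]) // 2
    x0 = (size - resized.shape[1]) // 2
    canvas[y0:y0 + resized.shape[0], x0:x0 + resized.shape[1]] = resized
    return canvas

crop = np.ones((10, 20, 3), dtype=np.uint8)
canvas = center_on_square_canvas(crop, size=64)
print(canvas.shape)  # (64, 64, 3)
```

Combined with per-instance cropping, this would turn each annotated cell into a centered, single-object context image, which may be worth testing against the full-scene masks tried above.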