matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

DETECTION_MAX_INSTANCES - detections of up to 1000 objects per image #1884

velaia opened this issue 4 years ago

velaia commented 4 years ago

Hi all, thanks for this great research and for publishing it here! And thanks to many others for their great questions and answers. ❤

I'm working on a model that detects metal bars in bundles of bars. I'm using the Mask R-CNN implementation that comes with Detectron2 (the parameter there is called DETECTIONS_PER_IMAGE, I think) and am facing the issue that the number of detected bars caps out at a certain threshold. At first the detections stopped at 100. Then I found the above-mentioned parameter and raised it to 1000, but now the number of detections peaks at about 469. I've adjusted SCORE_THRESH_TEST to 0.5, yet the maximum number seems to be fixed. Because the objects to detect can be pretty small, I've also adjusted the ANCHOR_GENERATOR sizes to include [16] in the (RPN) ANCHOR_GENERATOR section of the model. What else can I do? Adjust RESNETS.OUT_FEATURES to not include "res5"? What detection limit (if any) comes with the architecture?

This is an image showing the detections: [image attached]
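For reference, here is a minimal sketch (not from this thread) of where these knobs sit in a standard Detectron2 Mask R-CNN FPN config; the base config file, anchor sizes, and proposal limits shown are illustrative assumptions, not necessarily the exact values used above:

```python
# Sketch only: Detectron2 config knobs relevant to the detection cap.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))

# Cap on final detections kept per image (default 100).
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
# Score threshold applied to box predictions at test time.
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
# Smaller anchors for small objects; one size list per FPN level (assumed values).
cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[16], [32], [64], [128], [256]]
# Proposal limits before/after NMS can also cap crowded scenes (assumed values).
cfg.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000
cfg.MODEL.RPN.POST_NMS_TOPK_TEST = 2000
```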

Master-HM commented 4 years ago

Hi! I don't know the answer to your question, but I have one of my own. Could you please tell me how you changed the object colors after detection? The file provided in this repository gray-scales the whole image and then restores the original color pixels for the detected objects. I want it to return the detected objects in colors I specify, e.g. yellow for objects of one class and orange for another, with the bounding box and class label.

Please help me with this, @velaia.

velaia commented 4 years ago

Hello @Master-HM,

My requirement was somewhat different from yours, as I had only one class of objects and therefore didn't have to change colors per class. Looking at the Detectron2 code, I found that the Visualizer has a function called overlay_instances, which I used (see the second cell from the bottom in my notebook here: https://github.com/velaia/Jupyter-Notebooks/blob/master/20191117%20Detectron2%20Installation-2.ipynb). You will probably have to adjust the following parameter to reflect your colors: assigned_colors=['#008CFF' for i in range(len(outputs["instances"]))].
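A rough sketch of the per-class coloring idea with Detectron2's Visualizer.overlay_instances; the class-to-color mapping, variable names, and the assumption of a two-class model are placeholders, not code from my notebook:

```python
# Sketch only: per-class colors via Detectron2's Visualizer.overlay_instances.
# `outputs` is the predictor result and `image` a BGR numpy array (assumptions).
from detectron2.utils.visualizer import Visualizer

instances = outputs["instances"].to("cpu")
class_to_color = {0: "yellow", 1: "orange"}  # hypothetical class-id -> color map

v = Visualizer(image[:, :, ::-1], scale=1.0)  # Visualizer expects RGB input
vis = v.overlay_instances(
    boxes=instances.pred_boxes,
    masks=instances.pred_masks.numpy(),
    labels=[str(int(c)) for c in instances.pred_classes],
    assigned_colors=[class_to_color[int(c)] for c in instances.pred_classes],
)
result_bgr = vis.get_image()[:, :, ::-1]
```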

Good luck and keep us updated on your progress!

paulhager commented 4 years ago

I can't provide a definitive answer, but we've had similar problems, as we are trying to detect cells, which can easily number in the hundreds per image. What seems to have had the greatest impact for us were the training configs:

```
config.RPN_TRAIN_ANCHORS_PER_IMAGE = 800
config.MAX_GT_INSTANCES = 300
config.PRE_NMS_LIMIT = 12000
config.POST_NMS_ROIS_TRAINING = 6000
```

and the inference configs:

```
config.DETECTION_MAX_INSTANCES = 1000
config.POST_NMS_ROIS_INFERENCE = 8000
```

These configs were based on ~250 cells (objects) per image.
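For anyone using this repo directly, a minimal sketch of how overrides like the ones above are typically expressed as a matterport Mask_RCNN Config subclass; the class name, NAME, GPU settings, and NUM_CLASSES are placeholder assumptions, and the limit values simply mirror those listed above:

```python
# Sketch only: the overrides above written as a matterport Mask_RCNN Config subclass.
from mrcnn.config import Config

class DenseInstancesConfig(Config):
    NAME = "dense_instances"     # hypothetical project name
    GPU_COUNT = 1                # assumed
    IMAGES_PER_GPU = 1           # assumed
    NUM_CLASSES = 1 + 1          # background + one foreground class (assumed)

    # Training-side limits for crowded images
    RPN_TRAIN_ANCHORS_PER_IMAGE = 800
    MAX_GT_INSTANCES = 300
    PRE_NMS_LIMIT = 12000
    POST_NMS_ROIS_TRAINING = 6000

    # Inference-side limits
    DETECTION_MAX_INSTANCES = 1000
    POST_NMS_ROIS_INFERENCE = 8000
    # TRAIN_ROIS_PER_IMAGE = 600  # being tried as well, per the comment below

config = DenseInstancesConfig()
config.display()
```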

For inference we also experimented with raising PRE_NMS_LIMIT, similar to training, but this created markedly more false positives than it removed false negatives, so we left it as it was.

We're doing a run right now that also sets config.TRAIN_ROIS_PER_IMAGE = 600; I can post back with the results.

Hope this helps! I'd also be interested to hear any insights you have in this area, because it definitely seems to be the bottleneck for us as well. The network gives very few false positives but far too many false negatives...