matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

How can I decide/calculate RPN_ANCHOR_SCALES and RPN_ANCHOR_RATIOS from my own dataset? #1556

Open K-M-Ibrahim-Khalilullah opened 5 years ago

K-M-Ibrahim-Khalilullah commented 5 years ago

I trained a model on my own dataset following the documentation, with the default values of RPN_ANCHOR_SCALES and RPN_ANCHOR_RATIOS. My question is: how can I calculate or choose those parameters for my own dataset? Is it OK to increase the parameters' values, and if so, how?

Thanks
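One common heuristic (not something the repo documents, just a sketch) is to derive the anchor parameters from the ground-truth box statistics of the training set: take percentiles of sqrt(box area) as scales and percentiles of the height/width ratio as ratios. Note that in this repo RPN_ANCHOR_SCALES needs one entry per FPN pyramid level (five by default), so `n_scales=5` below matches that; the function name and percentile choices are my own assumptions:

```python
import numpy as np

def suggest_anchor_params(boxes, n_scales=5, n_ratios=3):
    """Suggest anchor scales/ratios from ground-truth boxes.

    boxes: array of (height, width) pairs in pixels for all training
    instances. Scales are percentiles of sqrt(area), so the anchor
    pyramid spans the observed size distribution; ratios are
    percentiles of h/w. Purely a heuristic starting point.
    """
    boxes = np.asarray(boxes, dtype=float)
    sizes = np.sqrt(boxes[:, 0] * boxes[:, 1])   # sqrt of box area
    hw_ratios = boxes[:, 0] / boxes[:, 1]        # height / width
    scales = tuple(
        int(round(s))
        for s in np.percentile(sizes, np.linspace(5, 95, n_scales))
    )
    ratios = [
        round(float(r), 2)
        for r in np.percentile(hw_ratios, np.linspace(10, 90, n_ratios))
    ]
    return scales, ratios

# Demo on synthetic boxes (stand-in for a real dataset's annotations):
rng = np.random.default_rng(0)
hw = rng.uniform(16, 256, size=(1000, 2))
scales, ratios = suggest_anchor_params(hw)
```

The resulting tuples would then be assigned to `RPN_ANCHOR_SCALES` / `RPN_ANCHOR_RATIOS` in a custom `Config` subclass. It is usually worth rounding scales to powers of two, which keeps them aligned with the FPN level strides.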

MathiasKahlen commented 5 years ago

I would like to know the same thing.

My experience so far from experimenting is that reducing RPN_ANCHOR_SCALES helped with detecting smaller instances of my classes. However, reducing them too much seems to make it miss some of the larger instances.
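That trade-off can be checked directly: for each ground-truth box size, compute the best IoU any anchor in a candidate scale/ratio set could achieve. The sketch below ignores anchor striding (anchors are assumed centered on the object) and takes the ratio as height/width, which may differ from the library's convention, so treat it as a rough coverage check only:

```python
import numpy as np

def best_anchor_iou(box_hw, scales, ratios):
    """Best IoU between a box of (h, w) and any centered anchor
    built from the given scales and ratios. An anchor of scale s
    and ratio r has height s*sqrt(r) and width s/sqrt(r), so its
    area is always s**2."""
    h, w = box_hw
    best = 0.0
    for s in scales:
        for r in ratios:
            ah, aw = s * np.sqrt(r), s / np.sqrt(r)
            inter = min(h, ah) * min(w, aw)
            union = h * w + ah * aw - inter
            best = max(best, inter / union)
    return best

# A scale set shrunk for small objects covers a 512x512 instance poorly,
# while a set that keeps large scales still covers it:
small_set = best_anchor_iou((512, 512), (8, 16, 32, 64, 128), [0.5, 1, 2])
wide_set = best_anchor_iou((512, 512), (32, 64, 128, 256, 512), [0.5, 1, 2])
```

If many training boxes fall below the usual RPN positive-match threshold (0.7 IoU in Faster R-CNN) under a candidate set, that set is likely to miss those instances, which matches the behavior described above.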

MathiasKahlen commented 5 years ago

I found this paper while searching for it: http://www.multimedia-computing.de/mediawiki/images/e/ed/ICME2017.pdf

I'm not very good at the math in it, but I guess they give some formulas from which you could calculate it?

The conclusion says:

We have evaluated in detail the behavior of Faster R-CNN for small objects for both the proposal and the classification stage using artificial datasets. In our experiments we have observed that small objects pose a problem for the proposal stage in particular. These difficulties are partially due to the inability of the RPN to accurately localize these objects because of the low resolution of the feature map. Also, we have shown that for small objects the choice of anchor scales is of great importance and have provided a criterion by which to choose anchor scales depending on the desired localization accuracy.
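The point about low feature-map resolution can be illustrated with a small worst-case calculation (my own sketch, not the paper's exact criterion): anchor centers lie on a grid whose spacing is the feature-map stride, so an object center can be up to stride/2 away from the nearest anchor center along each axis. For an object matched by a same-sized square anchor, that offset caps the achievable IoU, and the cap is much tighter for small objects:

```python
def worst_case_iou(obj_size, stride):
    """Worst-case IoU between a square object of side obj_size and a
    same-sized square anchor whose center sits on a grid with the
    given feature-map stride. Maximum center offset per axis is
    stride/2; for two identical squares shifted by d on each axis,
    intersection is (obj_size - d)**2."""
    d = stride / 2.0
    if d >= obj_size:
        return 0.0
    inter = (obj_size - d) ** 2
    union = 2 * obj_size ** 2 - inter
    return inter / union

# With a stride-16 feature map, a 16 px object can drop to ~0.14 IoU
# in the worst case, while a 128 px object stays near 0.78:
small_obj = worst_case_iou(16, 16)
large_obj = worst_case_iou(128, 16)
```

This is why proposals for small objects benefit from anchors on finer-stride (higher-resolution) feature maps, in line with the paper's observation that the RPN struggles to localize small objects on coarse feature maps.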