Arkkienkeli opened this issue 6 years ago
There are probably more ways to do this, but what I did was train only on masks bigger than the desired min_size. I filtered them in my load_mask function:

```python
if mask_width < 10 or mask_height < 10:
    # skip this instance: smaller than the minimum size
    continue
```
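For reference, a fuller sketch of that filtering step. This is an assumption about the surrounding code, not the poster's actual implementation: it supposes each instance mask is a boolean (H, W) array (as in Mask R-CNN-style `load_mask` functions) and uses a hypothetical `MIN_SIZE` of 10.

```python
import numpy as np

MIN_SIZE = 10  # hypothetical minimum side length in pixels


def filter_small_instances(masks):
    """Keep only instance masks whose bounding box is at least
    MIN_SIZE pixels on both sides.

    masks: list of boolean (H, W) arrays, one per instance.
    """
    kept = []
    for mask in masks:
        ys, xs = np.where(mask)
        if ys.size == 0:
            continue  # empty mask: drop it
        mask_height = ys.max() - ys.min() + 1
        mask_width = xs.max() - xs.min() + 1
        if mask_width < MIN_SIZE or mask_height < MIN_SIZE:
            continue  # instance smaller than the minimum size: drop it
        kept.append(mask)
    return kept
```

The bounding box is computed from the mask's nonzero pixels, so the filter works regardless of how the mask was drawn.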
I don't have small masks in the training set, but when I try detection, the trained model detects very small regions with high confidence.
@Arkkienkeli Maybe you already tried this, but there are two configs, one for training and one for inference. Make sure the inference config matches the training config. You can adjust the anchor sizes in the inference config and see how that changes your detection results.
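One common way to keep the two configs in sync is to derive the inference config from the training config, so settings like the anchor scales cannot drift apart. A minimal self-contained sketch of that pattern, using plain classes as stand-ins for the library's config base class (the attribute names mirror matterport-style Mask R-CNN configs; the values here are hypothetical):

```python
# Stand-in for a Mask R-CNN config: class attributes hold the settings.
class TrainingConfig:
    NAME = "my_dataset"                          # hypothetical project name
    RPN_ANCHOR_SCALES = (32, 64, 96, 128, 160)   # anchor sizes used in training
    IMAGES_PER_GPU = 2


class InferenceConfig(TrainingConfig):
    # Inherit everything (including RPN_ANCHOR_SCALES) from training,
    # overriding only what must differ at inference time.
    IMAGES_PER_GPU = 1


# Because InferenceConfig subclasses TrainingConfig, the anchor scales
# are guaranteed to match between training and inference.
assert InferenceConfig.RPN_ANCHOR_SCALES == TrainingConfig.RPN_ANCHOR_SCALES
```

With this structure, a mismatch can only happen if you explicitly override a setting in the subclass.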
@patrick-12sigma I use the same config for inference as for training.
It's just a theory, but it could be that the network detects big objects with your big anchors, then uses the rpn_bbox or mrcnn_bbox regression to shrink those big boxes down to small ones after detection.
If so, one of these network parts is sensitive to small errors inside the box. I don't know how to fix it other than adding more training data and augmentation.
Hi. I have images of size 704x704 and I get a lot of small false positives (the network thinks that something like 10x10 is an object, but it's not). Real objects are usually more than 30px per side, and I want to filter the small detections out via configuration. My anchor sizes were 8, 16, 32, 64, 96, 128, 160; I deleted 8 and 16, but it did not help. How can I configure this?
Thank you for the project.