dmitryshendryk opened this issue 5 years ago
@dmitryshendryk Not an answer to your question, but I had a question regarding your augmentation: when you applied your augmentations, did you also change your annotations so that they match the augmented image? If so, how?
For example, if I wanted to detect a building in the top half of the image and the image is rotated 90 degrees clockwise, then the building will be on the right side of the augmented image. But the annotations will still say it's on the top half of the image. Wouldn't this cause problems? Or is it taken into account and changed internally?
@ash1995 When I rotate the image during augmentation, I also recalculate all the corner points of the bounding box, so there is no problem.
@dmitryshendryk Is the recalculation of the bounding boxes and segmentation masks done by a function you wrote yourself, or is it available within the project, i.e. is there a function in the Mask R-CNN pipeline that does this recalculation?
If possible, could you show me how you do this recalculation?
@ash1995 By myself. It's not difficult: you just need to find the transformation matrix used by the augmentation and then apply it to the bounding box.
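The recalculation described above can be sketched roughly like this. This is a hypothetical helper, not code from this thread or from the Mask R-CNN project; it assumes axis-aligned boxes and a rotation about the image centre:

```python
import math

def rotate_bbox(bbox, angle_deg, img_w, img_h):
    """Recompute an axis-aligned bounding box after rotating the image
    about its centre by angle_deg (illustrative helper only).

    Note: in image coordinates the y-axis points down, so a positive
    angle here corresponds to a clockwise rotation on screen.
    """
    x1, y1, x2, y2 = bbox
    cx, cy = img_w / 2.0, img_h / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # Apply the 2x2 rotation matrix to all four corners of the box.
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    rotated = [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
                cy + (x - cx) * sin_a + (y - cy) * cos_a)
               for x, y in corners]
    # The new axis-aligned box is the extent of the rotated corners.
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return min(xs), min(ys), max(xs), max(ys)
```

The same idea extends to segmentation masks: apply the identical transform to every polygon vertex (or warp the binary mask with the same matrix). Libraries such as imgaug can also transform bounding boxes and masks alongside the image for you.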
Can you show some code or a script for the transformation matrix you apply after image augmentation? Thanks.
Hello everyone.
I have a use case that requires training the model on small images with shape (80, 140). The objects I want to detect in these images are even smaller: characters.
I have already implemented a number of data-augmentation steps such as blur, rotation, and flips. These are the config params I've redefined:
Other params are default.
The model can detect characters, though not very well yet; there should be room for improvement.
So my question is: what would be the best FPN/RPN config for small images and small-object detection? I think I need to adjust these, or maybe other hyperparameters.
Can you share any best practices for the Mask R-CNN config for such a use case?
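For small objects, the parameters most often tuned in the Matterport Mask_RCNN implementation are the RPN anchor scales and the input image size. A minimal sketch, assuming the attribute names from Matterport's `config.py`; the values below are illustrative starting points for ~80x140 inputs, not tuned settings:

```python
# Illustrative overrides only; attribute names follow Matterport's
# Mask_RCNN config.py, values are untested starting points.
SMALL_OBJECT_OVERRIDES = {
    # Smaller anchors so the RPN can match character-sized objects.
    "RPN_ANCHOR_SCALES": (4, 8, 16, 32, 64),
    # Keep upscaling of the small source images modest; IMAGE_MAX_DIM
    # must be divisible by 2**6 for the FPN downsampling to work.
    "IMAGE_MIN_DIM": 128,
    "IMAGE_MAX_DIM": 256,
    # A looser RPN NMS threshold keeps more overlapping proposals,
    # which can help when many small objects sit close together.
    "RPN_NMS_THRESHOLD": 0.9,
    "TRAIN_ROIS_PER_IMAGE": 128,
}
```

In the Matterport codebase these would typically be set as attributes on a subclass of `mrcnn.config.Config` rather than in a dict; the dict form is just to keep the sketch self-contained.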