chengchu88 opened this issue 6 years ago
I have a similar project to yours. What I did was to annotate two different classes: one is the outer shape and the other one is the inner shape. Basically I checked if the bounding box of the inner shape is within the bounding box of the outer one. However, I'm also looking to see if I can find a better approach.
Thanks, I did exactly the same thing! It seems that this Mask R-CNN cannot represent a void in the predicted mask.
I was able to train masks with holes using the existing code.
I was able to do so as well. But the main question is: what is the best approach to detect masks whose holes are also detected, especially when you annotate the holes as a different class?
Yes, I can train it too; however, the detection only produced C-shaped polygons. And what is the area ratio of the void to the outer mask in your case? Mine is quite large, close to 0.8 or 0.9.
Thanks
Masks are generated at the pixel level. In other words, each pixel decides for itself whether it's part of the mask or not, so the mask can take any shape. Your issue might be due to having a small dataset. Try image augmentation, or add more data if possible.
Another test you can do to rule out any technical issues is to test on the training data. If masks look good on the training data but bad on the validation data, then this confirms that you need a bigger dataset.
@dcyoung I have a similar problem to train mask rcnn for objects with holes.
I want the predicted mask to also contain the holes, but it seems the step of converting polygons to masks treats the hole masks as part of the object mask during training. So it fails to predict the holes in the object's mask.
How can I correctly handle this case?
@wangg12 You can achieve the desired behavior by creating an additional polygon representing the hole in your annotations. Then, when constructing your mask, zero out the pixels corresponding to the hole region. In my case, the objects themselves didn't have physical holes, but the pixel masks needed holes wherever another object occluded them. So I subtracted any foreground masks from the background object's mask. In my dataset, the order of the annotations maintains the occlusion order (background -> foreground). So something like this worked:
# Convert polygons to a bitmap mask of shape
# [height, width, instance_count]
info = self.image_info[image_id]
mask = np.zeros([info["height"], info["width"], len(info["polygons"])],
                dtype=np.uint8)
class_ids = []
for i, p in enumerate(info["polygons"]):
    # Get indices of pixels inside the polygon and set them to 1
    rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
    mask[rr, cc, i] = 1
    # Subtract all overlapping mask pixels; annotations are ordered
    # background -> foreground, so later polygons occlude this one
    overlapping_polygons = info["polygons"][i + 1:]
    for op in overlapping_polygons:
        rr, cc = skimage.draw.polygon(
            op['all_points_y'], op['all_points_x'])
        mask[rr, cc, i] = 0
    class_id = self.map_source_class_name(
        "sheets.{}".format(p['name']))
    class_ids.append(class_id)
In addition to @dcyoung 's solution, another idea is to shape your polygon such that you preserve the hole. Instead of annotating a donut as two circles inside each other, think of it as a C but keep the opening of the C very small (practically, you can even overlap the two sides of the opening).
Here is an example. I kept the opening larger than needed for illustration:
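A minimal sketch of that trick (the helper name, radii, and gap size below are my own, not from the thread): trace the outer circle, stop just short of a full turn, then trace the inner circle back in the opposite direction, so the donut becomes one simple "C" polygon whose opening is negligibly thin.

```python
import numpy as np
import skimage.draw

def donut_as_c_polygon(cy, cx, r_outer, r_inner, gap=1e-3, n=100):
    """Approximate a donut with a single polygon: outer circle traced
    counter-clockwise (minus a tiny angular gap), then the inner circle
    traced back clockwise, so the polygon stays simple but encloses a
    ring-shaped region."""
    t_out = np.linspace(gap, 2 * np.pi - gap, n)
    t_in = t_out[::-1]  # inner ring in the opposite direction
    ys = np.concatenate([cy + r_outer * np.sin(t_out),
                         cy + r_inner * np.sin(t_in)])
    xs = np.concatenate([cx + r_outer * np.cos(t_out),
                         cx + r_inner * np.cos(t_in)])
    return ys, xs

ys, xs = donut_as_c_polygon(64, 64, r_outer=50, r_inner=25)
mask = np.zeros((128, 128), dtype=np.uint8)
rr, cc = skimage.draw.polygon(ys, xs, shape=mask.shape)
mask[rr, cc] = 1
# Pixels inside the inner circle (e.g. mask[70, 64]) stay 0, while
# pixels in the ring (e.g. mask[30, 64]) are filled with 1.
```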
@waleedka It is a workaround for the polygon-based mask representation. However, it may require tedious manual re-annotation. When binary masks are provided, OpenCV and scikit-image both find the circle-like contour instead of a C contour. So a better polygon representation (for example, an additional field marking foreground vs. background) or using RLE or binary masks directly could avoid the tedious manual re-annotation.
@wangg12 Mask RCNN takes binary masks. The code that reads the polygons does it in order to convert the polygons to binary masks. If you already have binary masks, then simply use them directly. Skip the binary->polygon->binary conversion.
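A minimal sketch of using binary masks directly (the function and variable names here are mine, not from the library): stack the per-instance binary masks into the [height, width, instance_count] array that a `load_mask()` override is expected to return. Because no polygon conversion is involved, any holes in the masks survive untouched.

```python
import numpy as np

def stack_instance_masks(instance_masks, class_ids):
    """Stack per-instance binary masks into the
    [height, width, instance_count] boolean array plus the
    int32 class-id array that load_mask() returns."""
    mask = np.stack([np.asarray(m) > 0 for m in instance_masks], axis=-1)
    return mask.astype(bool), np.array(class_ids, dtype=np.int32)

# A 4x4 instance with a hole in the middle, plus a second instance
a = np.zeros((4, 4), dtype=np.uint8)
a[0:3, 0:3] = 1
a[1, 1] = 0          # the hole survives: no polygon conversion involved
b = np.zeros((4, 4), dtype=np.uint8)
b[2:4, 2:4] = 1

mask, ids = stack_instance_masks([a, b], [1, 2])
# mask.shape is (4, 4, 2) and mask[1, 1, 0] is False (the hole)
```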
Is there a mask loader available? Many thanks.
Following up on this post with a similar issue: I built a custom COCO-format dataset with object masks in pixel format, so each object mask is a 1D array in x1,y1,x2,y2,... format. I'm not sure how to feed this into Mask R-CNN. Should I transform the masks into polygons, or change something in the existing code? Thanks!!
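One way to sketch that conversion (the helper name is mine; for real COCO annotations, pycocotools' `COCO.annToMask` does this for you): reshape the flat [x1, y1, x2, y2, ...] list into (x, y) pairs and rasterize it with skimage, which yields exactly the kind of binary mask discussed above.

```python
import numpy as np
import skimage.draw

def flat_coords_to_mask(flat, height, width):
    """Rasterize a COCO-style flat polygon [x1, y1, x2, y2, ...]
    into a binary mask of the given size."""
    pts = np.asarray(flat, dtype=float).reshape(-1, 2)   # rows of (x, y)
    mask = np.zeros((height, width), dtype=np.uint8)
    rr, cc = skimage.draw.polygon(pts[:, 1], pts[:, 0],
                                  shape=(height, width))
    mask[rr, cc] = 1
    return mask

m = flat_coords_to_mask([1, 1, 6, 1, 6, 6, 1, 6], 8, 8)
# interior pixels such as m[3, 3] are 1; m[0, 0] stays 0
```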
Hello, can Mask R-CNN produce a donut-shaped mask prediction? I trained my model with donut-shaped masks, but instead of a full donut I got a 'C'-shaped mask plus some small pieces, and I cannot get a full donut. Thank you, C