chandlerbing65nm opened 2 years ago
I load (x1, y1, x2, y2, x3, y3, x4, y4) as a mask and convert it into (x, y, w, h, t) by the Mask2OBB pipeline.
@jbwang1997 Can I ask where specifically the conversion code (Mask2OBB) for the Faster R-CNN and RetinaNet detectors is located? I have tried searching for it, but I cannot find the part where you call the Mask2OBB function.
I load (x1, y1, x2, y2, x3, y3, x4, y4) as a mask and convert it into (x, y, w, h, t) by the Mask2OBB pipeline.

That is the one, right?
The DOTA dataset annotations are specified as corner positions (x1, y1, x2, y2, x3, y3, x4, y4), but why did you define your bounding boxes in center-plus-size (x, y, w, h) form? Does this mean you converted the annotations from corner positions to center-plus-size? If so, where in the code can I find this conversion?
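For reference, the corner-to-(x, y, w, h, theta) conversion being asked about can be sketched in a few lines. This is a hypothetical helper, not the repository's actual Mask2OBB code: it assumes the four corners are given in order and trace out a (possibly rotated) rectangle, taking the center as the mean of the corners, the width and height from two adjacent edges, and the angle from the first edge.

```python
import math

def poly_to_obb(poly):
    """Convert an 8-value corner list [x1, y1, ..., x4, y4] to
    (cx, cy, w, h, theta), with theta in radians.
    Hypothetical sketch; assumes the corners trace a rectangle in order."""
    xs = poly[0::2]
    ys = poly[1::2]
    cx = sum(xs) / 4.0          # center x: mean of the four corner x's
    cy = sum(ys) / 4.0          # center y: mean of the four corner y's
    w = math.hypot(xs[1] - xs[0], ys[1] - ys[0])  # length of edge p1 -> p2
    h = math.hypot(xs[2] - xs[1], ys[2] - ys[1])  # length of edge p2 -> p3
    theta = math.atan2(ys[1] - ys[0], xs[1] - xs[0])  # rotation of edge p1 -> p2
    return cx, cy, w, h, theta

# Axis-aligned 4x2 box with its top-left corner at the origin:
print(poly_to_obb([0, 0, 4, 0, 4, 2, 0, 2]))  # (2.0, 1.0, 4.0, 2.0, 0.0)
```

For arbitrary polygon masks (rather than clean rectangles), libraries typically fit the minimum-area rotated rectangle instead, e.g. with OpenCV's `cv2.minAreaRect`.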