Dear Contributors,
I am working on a detection project for diagonally oriented objects. I annotated the training images with roLabelImg and converted the bounding-box coordinates as per the requirements. Consider this hypothetical tuple of corner coordinates:
(283.0280133060877, 869.4051020615663, 776.1986875509616, 689.3837531474543, 791.9719866939123, 732.5948979384337, 298.8013124490384, 912.6162468525457)
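For reference, here is a minimal Pillow sketch of how I draw such a tuple (the 1000×1000 canvas size is arbitrary, just my choice for this example):

```python
from PIL import Image, ImageDraw

# The hypothetical corner tuple, as (x0, y0, x1, y1, x2, y2, x3, y3)
corners = (283.0280133060877, 869.4051020615663,
           776.1986875509616, 689.3837531474543,
           791.9719866939123, 732.5948979384337,
           298.8013124490384, 912.6162468525457)

img = Image.new("RGB", (1000, 1000), "white")
ImageDraw.Draw(img).polygon(corners, outline="red")
img.save("box.png")  # a thin rectangle tilted toward the upper right
```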
When I visualise it with `ImageDraw.polygon`, it appears as a regular rectangle tilted toward the upper-right (or lower-left) corner, like this:

However, when I pass this corner tuple into the `_corners2rotatedbbox` function provided in #183, the result is `[275.0000065924086, 777.999979899881, 524.9999868151826, 46.00004020023789, -0.3500000369883179]`.
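For what it is worth, those five numbers can be reproduced with the following conversion sketch (my own reconstruction of the logic, not necessarily the exact code from #183): the box seems to be described by the axis-aligned top-left corner of the unrotated rectangle plus the rotation angle of its first edge.

```python
import math

def corners_to_rotated_bbox(corners):
    """Convert four (x, y) corners of a rotated rectangle to
    [xmin, ymin, width, height, theta] -- a reconstruction sketch."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x2) / 2                    # centre from opposite corners
    cy = (y0 + y2) / 2
    w = math.hypot(x1 - x0, y1 - y0)      # length of the first edge
    h = math.hypot(x2 - x1, y2 - y1)      # length of the second edge
    theta = math.atan2(y1 - y0, x1 - x0)  # first-edge angle, pixel coords
    return [cx - w / 2, cy - h / 2, w, h, theta]

corners = [(283.0280133060877, 869.4051020615663),
           (776.1986875509616, 689.3837531474543),
           (791.9719866939123, 732.5948979384337),
           (298.8013124490384, 912.6162468525457)]
print(corners_to_rotated_bbox(corners))
# ≈ [275.0, 778.0, 525.0, 46.0, -0.35]
```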
Putting the resulting xmin/ymin box back into the `calc_box` function provided in #233 reproduces the same corner coordinates.

The negative theta seems to contradict the training.md description ("measured anti-clockwise from the x-axis") and the figure shown on the developer blog. I also tried continuing training with the +/- signs of the thetas flipped; both versions ended with a box loss of ~0.7, but the inference visualisation is completely off the grid.
Thank you for taking the time to read this, and please correct me if I have got anything wrong.