Hi! Sorry that I'm so late with the answer.
The original images are 640x480 pixels, and the vertices of the ground-truth grasp rectangles should be scaled along with the images. For example, if the x-coordinate of a grasp vertex is 200 in the original dataset and the images are scaled to 224x224, the annotated vertex must be scaled too:
x_scaled = 200 * 224/640 = 200 * 0.35
The corresponding y value of the same vertex scales with a factor of 224/480 ≈ 0.47.
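As a quick illustration of the scaling above (the constants and function name here are illustrative, not from the repository):

```python
ORIG_W, ORIG_H = 640, 480   # original Cornell image size
NEW_W, NEW_H = 224, 224     # network input size

def scale_vertex(x, y):
    """Scale one ground-truth grasp vertex to the resized image."""
    return x * NEW_W / ORIG_W, y * NEW_H / ORIG_H

# x scales by 224/640 = 0.35, y by 224/480 ≈ 0.47
x_scaled, y_scaled = scale_vertex(200, 100)
```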
I also uploaded the models. Regards!
Hi author,
In your program there are the values 0.35 and 0.47. What do they mean?
Thank you!
```python
def bboxes_to_grasps(bboxes):
    """Convert and scale bounding boxes into grasps, g = {x, y, tan, h, w}."""
```
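For context, here is a minimal NumPy sketch of what such a conversion could look like. It assumes each bounding box is given as four (x, y) corner vertices in the original 640x480 frame; the 0.35 and 0.47 constants are the x and y scaling ratios discussed above. This is an illustration under those assumptions, not the repository's actual implementation.

```python
import numpy as np

def bboxes_to_grasps(bboxes):
    """Convert 4-vertex grasp rectangles (in 640x480 coordinates) into
    grasps g = {x, y, tan, h, w}, scaled to a 224x224 image.

    bboxes: array-like of shape (N, 4, 2), vertices ordered around
    each rectangle.
    """
    boxes = np.asarray(bboxes, dtype=float).copy()
    boxes[..., 0] *= 224.0 / 640.0   # the 0.35 factor (x)
    boxes[..., 1] *= 224.0 / 480.0   # the ~0.47 factor (y)

    center = boxes.mean(axis=1)      # rectangle centers, shape (N, 2)
    x, y = center[:, 0], center[:, 1]

    # Edge from vertex 0 to vertex 1: assumed to lie along the
    # gripper-opening direction.
    dx = boxes[:, 1, 0] - boxes[:, 0, 0]
    dy = boxes[:, 1, 1] - boxes[:, 0, 1]
    tan = dy / np.where(dx == 0, 1e-8, dx)   # orientation as a slope
    w = np.hypot(dx, dy)                     # gripper opening width

    # Edge from vertex 1 to vertex 2: the jaw height.
    h = np.hypot(boxes[:, 2, 0] - boxes[:, 1, 0],
                 boxes[:, 2, 1] - boxes[:, 1, 1])
    return np.stack([x, y, tan, h, w], axis=1)
```

For an axis-aligned rectangle with corners (100, 100), (300, 100), (300, 200), (100, 200), this yields a grasp centered at (70, 70) in the 224x224 frame with tan = 0, since the opening edge is horizontal.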