sshaoshuai / PointRCNN

PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019.

Get the iou3d in KITTI dataset #194

[Open] a-akram-98 opened this issue 3 years ago

a-akram-98 commented 3 years ago

The KITTI dataset provides computeBox3D.m to help extract the 3D boxes from the label .txt files. Here is part of that code:

l = object.l;
w = object.w;
h = object.h;

% 3D bounding box corners
x_corners = [l/2, l/2, -l/2, -l/2, l/2, l/2, -l/2, -l/2];
y_corners = [0,0,0,0,-h,-h,-h,-h];
z_corners = [w/2, -w/2, -w/2, w/2, w/2, -w/2, -w/2, w/2];
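
For anyone reading this in Python, here is my own NumPy translation of that snippet, including the rotation and translation steps from computeBox3D.m. It is just an illustration of the axis convention, not code from this repo:

import numpy as np

def kitti_box_corners(l, w, h, ry, x, y, z):
    # 8 corners in the object frame: the x extent comes from l, the y extent
    # from h, and the z extent from w (KITTI rect/camera coordinates,
    # with the origin at the bottom of the box)
    x_corners = np.array([l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2])
    y_corners = np.array([0.0,  0.0,  0.0,  0.0,   -h,   -h,   -h,   -h])
    z_corners = np.array([w/2, -w/2, -w/2,  w/2,  w/2, -w/2, -w/2,  w/2])

    # Rotation around the camera y axis, as in computeBox3D.m
    R = np.array([[ np.cos(ry), 0.0, np.sin(ry)],
                  [        0.0, 1.0,        0.0],
                  [-np.sin(ry), 0.0, np.cos(ry)]])

    corners = R @ np.vstack([x_corners, y_corners, z_corners])
    corners += np.array([[x], [y], [z]])   # translate by the box location
    return corners                          # shape (3, 8)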

From that corner computation it appears that the x extent comes from l, the y extent from h, and the z extent from w. However, in iou3d_utils.py the docstring of boxes_iou3d_cpu describes the expected input like this:

    """
    Input (torch):
        boxes_a: (N, 7) [x, y, z, h, w, l, ry], torch tensor with type float32
        boxes_b: (M, 7) [x, y, z, h, w, l, ry], torch tensor with type float32
        rect: True/False means boxes in camera/velodyne coord system.
    Output:
        iou_3d: (N, M)
    """

I'm using labels in rect (camera) coordinates, so in what order should I pass the box parameters? I'm really confused.
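
To make the question concrete, this is roughly how I am building the input at the moment, taking the docstring order literally. label_line_to_box and the file path are just my own placeholders, not anything from the repo:

import torch

def label_line_to_box(line):
    # KITTI label format: type trunc occ alpha bbox(4) h w l x y z ry
    vals = line.strip().split(' ')
    h, w, l = float(vals[8]), float(vals[9]), float(vals[10])
    x, y, z = float(vals[11]), float(vals[12]), float(vals[13])
    ry = float(vals[14])
    # Packed in the docstring order [x, y, z, h, w, l, ry]
    return [x, y, z, h, w, l, ry]

with open('label_2/000001.txt') as f:            # placeholder path
    lines = [ln for ln in f if not ln.startswith('DontCare')]
boxes = torch.tensor([label_line_to_box(ln) for ln in lines],
                     dtype=torch.float32)        # shape (N, 7)

Is this the order boxes_iou3d_cpu expects, or do the dimensions need to be rearranged?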