V2AI / Det3D

World's first general-purpose 3D object detection codebase.
https://arxiv.org/abs/1908.09492
Apache License 2.0

Weird Result of running calculate_iou_partly #75

Closed zwqnju closed 4 years ago

zwqnju commented 4 years ago

Running the following script:

```python
import numpy as np
from det3d.datasets.utils.eval import calculate_iou_partly

fake_gt_annos = []
anno = {}
anno.update(
    {
        "name": np.array(["Car"]),
        "location": np.array([[0.0, 0.0, 0.0]]),
        "dimensions": np.array([[1.0, 1.0, 1.0]]),
        "rotation_y": np.array([np.pi / 8]),
    }
)
fake_gt_annos.append(anno)

rets = calculate_iou_partly(
    fake_gt_annos, fake_gt_annos, metric=1, num_parts=1, z_axis=2, z_center=0.5
)
print(rets)
```

I got:

([array([[7.17528081e-09]])], [array([[7.17528081e-09]])], array([1]), array([1]))

Why? I passed the same annotations for both dt_annos and gt_annos when calling calculate_iou_partly, so I expected the overlap to be 1.

Is this a bug, or am I misunderstanding something?

jhultman commented 4 years ago

This function depends on rotate_iou_gpu_eval, which has known bugs. It uses some geometric routines defined in det3d/ops/nms/nms_gpu.py that appear to have divide-by-zero problems when the boxes are identical. I think the culprit is probably line_segment_intersection or sort_vertex_in_convex_polygon. You can try a more stable IOU function like box_np_ops.riou_cc.

Edit: This issue explains the problem:

The deviation between the input rotation_y angles for gt_annos and dt_annos must be greater than 0.1 percent. Otherwise, the rotated IoU computation will blow up in the function line_segment_intersection.
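
For what it's worth, here is a rough sketch of that workaround applied to the repro above. The ~0.5% nudge is just an arbitrary value above the stated threshold, not a recommended constant:

```python
import numpy as np
from det3d.datasets.utils.eval import calculate_iou_partly

def single_box_anno(yaw):
    # Same single-box annotation layout as the repro above.
    return [{
        "name": np.array(["Car"]),
        "location": np.array([[0.0, 0.0, 0.0]]),
        "dimensions": np.array([[1.0, 1.0, 1.0]]),
        "rotation_y": np.array([yaw]),
    }]

gt_annos = single_box_anno(np.pi / 8)
# Nudge the "detection" yaw by ~0.5% so the two polygons are not exactly
# identical; rotate_iou_gpu_eval then avoids the degenerate case.
dt_annos = single_box_anno(np.pi / 8 * 1.005)

rets = calculate_iou_partly(
    gt_annos, dt_annos, metric=1, num_parts=1, z_axis=2, z_center=0.5
)
print(rets[0])  # overlap should now be close to 1.0 instead of ~0
```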

zwqnju commented 4 years ago

@jhultman Thank you for your reply! I am trying to use Det3D to train 3D object detection models on my own dataset. I only have lidar data, and the ground-truth and detected objects look like [class, x, y, z, w, l, h, yaw] (in the LiDAR coordinate system). I want to compute the evaluation metrics with your code, but it seems too complicated to me. Do you have any suggestions?
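
For reference, this is roughly how I would pack such labels into the annotation dict format used in the repro above; the dimension ordering and the coordinate frame here are guesses on my part and probably need to be adapted to whatever the eval code expects:

```python
import numpy as np

def frame_labels_to_anno(rows):
    """Pack one frame's labels [(cls, x, y, z, w, l, h, yaw), ...] into the
    dict layout used by calculate_iou_partly in the repro above.
    NOTE: dimension ordering and coordinate frame are assumptions; KITTI-style
    eval expects camera-frame boxes, so LiDAR-frame labels may need a transform."""
    return {
        "name": np.array([r[0] for r in rows]),
        "location": np.array([[r[1], r[2], r[3]] for r in rows]),
        "dimensions": np.array([[r[4], r[5], r[6]] for r in rows]),
        "rotation_y": np.array([r[7] for r in rows]),
    }

# One dict per frame, e.g. a single Car label:
gt_annos = [frame_labels_to_anno([("Car", 10.0, 2.0, -0.8, 1.8, 4.2, 1.6, 0.3)])]
```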

jhultman commented 4 years ago

@zwqnju I think it may take a lot of work to get the eval code from this repo working on a custom dataset. Maybe you can consider using another implementation of mean average precision. Unfortunately, most eval codebases available on GitHub make strong assumptions about how your dataset is formatted, and they typically assume axis-aligned 2D IOU.

If you convert the labels and detections to COCO format, you can use the pycocotools API to compute mAP, but you will have to patch the IOU part of the code to use rotated 3D IOU instead (for example with box_np_ops.riou_cc or using shapely).
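
For example, here is a minimal shapely-based sketch of rotated 3D IoU for boxes given as [x, y, z, w, l, h, yaw]; the axis and size conventions are assumptions you would need to match to your labels:

```python
import numpy as np
from shapely.geometry import Polygon

def bev_polygon(box):
    """BEV footprint of a box [x, y, z, w, l, h, yaw] as a shapely Polygon."""
    x, y, _, w, l, _, yaw = box
    dx, dy = l / 2.0, w / 2.0
    corners = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return Polygon(corners @ rot.T + np.array([x, y]))

def rotated_iou_3d(box_a, box_b):
    """Rotated 3D IoU: BEV polygon intersection area times vertical overlap."""
    inter_area = bev_polygon(box_a).intersection(bev_polygon(box_b)).area
    # Vertical overlap, assuming z is the box center and h the full height.
    za0, za1 = box_a[2] - box_a[5] / 2.0, box_a[2] + box_a[5] / 2.0
    zb0, zb1 = box_b[2] - box_b[5] / 2.0, box_b[2] + box_b[5] / 2.0
    inter_vol = inter_area * max(0.0, min(za1, zb1) - max(za0, zb0))
    vol_a = box_a[3] * box_a[4] * box_a[5]
    vol_b = box_b[3] * box_b[4] * box_b[5]
    return inter_vol / (vol_a + vol_b - inter_vol)

# Identical boxes -> IoU of 1.0, with no numerical blow-up.
print(rotated_iou_3d([0, 0, 0, 1, 1, 1, np.pi / 8],
                     [0, 0, 0, 1, 1, 1, np.pi / 8]))
```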

You can also try this implementation, but you will have to change the IOU part. I don't know if that repo is any good, but it seems to be popular.

Good luck!

zwqnju commented 4 years ago

Very helpful. Thanks a lot!