SamsungLabs / fcaf3d

[ECCV2022] FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
MIT License

run pcd_demo.py #36

Closed PengboLi1998 closed 1 year ago

PengboLi1998 commented 2 years ago

I want to use the trained model to test a point cloud, but I get the following error:

Traceback (most recent call last):
  File "/media/ketizu/086C03DE086C03DE/project1/fcaf3d-master/demo/pcd_demo.py", line 102, in <module>
    main()
  File "/media/ketizu/086C03DE086C03DE/project1/fcaf3d-master/demo/pcd_demo.py", line 90, in main
    show_result_meshlab(
  File "/media/ketizu/086C03DE086C03DE/project1/fcaf3d-master/mmdet3d/apis/inference.py", line 481, in show_result_meshlab
    file_name = show_det_result_meshlab(data, result, out_dir, score_thr,
  File "/media/ketizu/086C03DE086C03DE/project1/fcaf3d-master/mmdet3d/apis/inference.py", line 324, in show_det_result_meshlab
    show_result(
TypeError: show_result() missing 2 required positional arguments: 'out_dir' and 'filename'

Process finished with exit code 1
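For context, here is roughly what demo/pcd_demo.py does before it hits the error (a minimal sketch based on the upstream mmdetection3d demo; names such as init_model and the exact arguments are assumptions and may differ slightly in this fork):

from argparse import ArgumentParser

from mmdet3d.apis import inference_detector, init_model, show_result_meshlab


def main():
    parser = ArgumentParser()
    parser.add_argument('pcd', help='Point cloud file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    args = parser.parse_args()

    # build the model from the config and load the trained weights
    model = init_model(args.config, args.checkpoint, device='cuda:0')
    # run inference on a single point cloud file
    result, data = inference_detector(model, args.pcd)
    # show_result_meshlab -> show_det_result_meshlab -> show_result,
    # which is where the TypeError above is raised
    show_result_meshlab(data, result, 'demo', score_thr=0.0, show=True)


if __name__ == '__main__':
    main()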

filaPro commented 2 years ago

Please follow #27.

PengboLi1998 commented 2 years ago

Could you please tell me how to modify it in detail? test.py runs successfully, but when I run pcd_demo.py it throws the error above. Thanks!

joshiaLee commented 1 year ago

Modify show_det_result_meshlab in mmdet3d/apis/inference.py as below:

def show_det_result_meshlab(data,
                            result,
                            out_dir,
                            score_thr=0.0,
                            show=False,
                            snapshot=False):
    """Show 3D detection result by meshlab."""
    points = data['points'][0][0].cpu().numpy()
    pts_filename = data['img_metas'][0][0]['pts_filename']
    file_name = osp.split(pts_filename)[-1].split('.')[0]

    if 'pts_bbox' in result[0].keys():
        pred_bboxes = result[0]['pts_bbox']['boxes_3d'].tensor.numpy()
        pred_scores = result[0]['pts_bbox']['scores_3d'].numpy()
        # keep pred_labels defined in this branch as well
        pred_labels = result[0]['pts_bbox']['labels_3d'].numpy()
    else:
        # use .corners instead of .tensor, otherwise the visualized
        # predicted bounding boxes are distorted
        pred_bboxes = result[0]['boxes_3d'].corners.numpy()
        pred_scores = result[0]['scores_3d'].numpy()
        # pred_labels is required later by _write_oriented_bbox
        pred_labels = result[0]['labels_3d'].numpy()

    # filter out low score bboxes for visualization
    if score_thr > 0:
        inds = pred_scores > score_thr
        pred_bboxes = pred_bboxes[inds]
        pred_labels = pred_labels[inds]  # keep labels aligned with the kept boxes

    # for now we convert points into depth mode
    box_mode = data['img_metas'][0][0]['box_mode_3d']
    if box_mode != Box3DMode.DEPTH:
        points = points[..., [1, 0, 2]]
        points[..., 0] *= -1
        show_bboxes = Box3DMode.convert(pred_bboxes, box_mode, Box3DMode.DEPTH)
    else:
        show_bboxes = deepcopy(pred_bboxes)

    show_result(
        points,
        None,
        None,  # the extra None (gt_labels) is essential here
        show_bboxes,
        pred_labels,  # pred_labels is essential when executing _write_oriented_bbox
        out_dir,
        file_name,
        show=show,
        snapshot=snapshot)

    return file_name
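For reference, the TypeError and the call above suggest that show_result in this fork (defined in mmdet3d/core/visualizer/show_result.py) takes two more positional arguments than the upstream mmdetection3d version. A sketch of the assumed signature, with parameter names guessed from the call site:

def show_result(points,
                gt_bboxes,
                gt_labels,
                pred_bboxes,
                pred_labels,
                out_dir,
                filename,
                show=False,
                snapshot=False):
    """Visualize and save points with GT / predicted boxes."""
    ...

With only five positional arguments, as in the upstream show_det_result_meshlab, out_dir and filename are left unfilled, which is exactly the error reported above.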