facebookresearch / votenet

Deep Hough Voting for 3D Object Detection in Point Clouds
MIT License
1.69k stars 377 forks

The composition of the data in end_point #158

Open FeiDao7943 opened 2 years ago

FeiDao7943 commented 2 years ago

I have finished the training and eval parts, and would like to write a visualization script that converts the net output into a single figure including the box sizes, heading angles and classified types, but I have no idea how to read end_points['……'] from the net output.
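For context, a minimal sketch of what such a script might read out of end_points, assuming the key names and shapes used by VoteNet's proposal module ('center', 'heading_scores', 'heading_residuals', 'size_scores', 'sem_cls_scores', …); the tensors below are random stand-ins for real network output, and the bin/cluster counts are the SUN RGB-D config values:

```python
import numpy as np

# Hypothetical shapes: B=1 batch, K=4 proposals, 12 heading bins,
# 10 size clusters, 10 semantic classes (SUN RGB-D style config).
B, K, NH, NS, NC = 1, 4, 12, 10, 10
rng = np.random.default_rng(0)
end_points = {
    'center':            rng.normal(size=(B, K, 3)),
    'heading_scores':    rng.normal(size=(B, K, NH)),
    'heading_residuals': rng.normal(size=(B, K, NH)),
    'size_scores':       rng.normal(size=(B, K, NS)),
    'size_residuals':    rng.normal(size=(B, K, NS, 3)),
    'sem_cls_scores':    rng.normal(size=(B, K, NC)),
}

def decode_proposal(end_points, b, k):
    """Turn one proposal's raw outputs into (center, heading, size cls, sem cls)."""
    center = end_points['center'][b, k]                       # (3,) box centroid
    h_cls = int(np.argmax(end_points['heading_scores'][b, k]))
    h_res = end_points['heading_residuals'][b, k, h_cls]
    heading = h_cls * (2 * np.pi / NH) + h_res                # bin center + residual
    s_cls = int(np.argmax(end_points['size_scores'][b, k]))
    # Box size = mean size of cluster s_cls + the matching residual; the
    # mean sizes come from the dataset config (DC.mean_size_arr) in the repo.
    sem_cls = int(np.argmax(end_points['sem_cls_scores'][b, k]))
    return center, heading, s_cls, sem_cls

c, h, s, cls_ = decode_proposal(end_points, 0, 0)
print(c.shape, h, s, cls_)
```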

YoElsheikh commented 2 years ago

As I understood your question, you want to give an input and see the inference results on the trained network. What I did is as follows: after training the model and saving the training checkpoint, go to the demo.py file and edit the section where you specify the path to the checkpoint and the point cloud on which you'd like to run inference, particularly this part:

    if FLAGS.dataset == 'sunrgbd':
        sys.path.append(os.path.join(ROOT_DIR, 'sunrgbd'))
        from sunrgbd_detection_dataset import DC # dataset config
        checkpoint_path = os.path.join(demo_dir, 'pretrained_votenet_on_sunrgbd.tar')
        pc_path = os.path.join(demo_dir, 'input_pc_sunrgbd.ply')

Replace the dataset, its path, the saved checkpoint path, as well as pc_path (your input point cloud) with where you saved those (when specifying the training flags, for instance). Afterwards, run the demo.py file with the appropriate flags; a couple of files will be written to the "dump_dir" specified in demo.py. You can view these files with MeshLab. Typically, .ply files are dumped, including the original point cloud, the predicted bounding boxes, votes, etc.; you can open these together in MeshLab with Ctrl + I.

FeiDao7943 commented 2 years ago


I had found the workaround before, and comparing it with yours, both ultimately use the function parse_predictions() (which is in models/ap_helper.py). I think the author put the transform function in this file.
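For anyone else landing here, the core of that conversion is the objectness filtering. A rough sketch of that one step (the real parse_predictions() in models/ap_helper.py additionally applies 3D NMS and a configurable confidence threshold; the 0.5 cutoff and toy logits below are made up for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy objectness logits for 5 proposals: columns = (not-object, object).
logits = np.array([[ 2.0, -1.0],
                   [ 0.1,  3.0],
                   [-2.0,  2.5],
                   [ 1.0,  1.2],
                   [ 4.0, -3.0]])
obj_prob = softmax(logits)[:, 1]   # probability each proposal is a real object
keep = obj_prob > 0.5              # proposals 1-3 survive this toy threshold
print(keep)
```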