open-mmlab / OpenPCDet

OpenPCDet Toolbox for LiDAR-based 3D Object Detection.
Apache License 2.0

How can I use the trained nuscenes model to visualize its detection results? #824

Closed 751565516 closed 2 years ago

751565516 commented 2 years ago

After training on the nuScenes v1.0-mini dataset, how can I use the trained model to visualize its detection results?

When I imitate the visualization command for the KITTI dataset, I get an error:

python demo.py --cfg_file ./cfgs/nuscenes_models/cbgs_pp_multihead.yaml --ckpt /home/juzhen/OpenPCDet/output/cfgs/nuscenes_models/cbgs_pp_multihead/default/ckpt/checkpoint_epoch_105.pth --data_path /home/juzhen/OpenPCDet/data/nuscenes/v1.0-mini/sweeps/LIDAR_TOP/n008-2018-08-01-15-16-36-0400__LIDAR_TOP__1533151604797841.pcd.bin
2022-02-23 21:03:43,197 INFO -----------------Quick Demo of OpenPCDet-------------------------
2022-02-23 21:03:43,197 INFO Total number of samples: 1
2022-02-23 21:03:45,165 INFO ==> Loading parameters from checkpoint /home/juzhen/OpenPCDet/output/cfgs/nuscenes_models/cbgs_pp_multihead/default/ckpt/checkpoint_epoch_105.pth to CPU
2022-02-23 21:03:45,207 INFO ==> Checkpoint trained from version: pcdet+0.5.2+7ce6a2b
2022-02-23 21:03:45,231 INFO ==> Done (loaded 421/421)
Traceback (most recent call last):
  File "demo.py", line 112, in <module>
    main()
  File "demo.py", line 94, in main
    for idx, data_dict in enumerate(demo_dataset):
  File "demo.py", line 59, in __getitem__
    data_dict = self.prepare_data(data_dict=input_dict)
  File "/home/juzhen/OpenPCDet/pcdet/datasets/dataset.py", line 149, in prepare_data
    data_dict=data_dict
  File "/home/juzhen/OpenPCDet/pcdet/datasets/processor/data_processor.py", line 209, in forward
    data_dict = cur_processor(data_dict=data_dict)
  File "/home/juzhen/OpenPCDet/pcdet/datasets/processor/data_processor.py", line 134, in transform_points_to_voxels
    voxel_output = self.voxel_generator.generate(points)
  File "/home/juzhen/OpenPCDet/pcdet/datasets/processor/data_processor.py", line 54, in generate
    voxel_output = self._voxel_generator.point_to_voxel(tv.from_numpy(points))
RuntimeError: /io/build/temp.linux-x86_64-3.7/spconv/build/src/csrc/sparse/all/ops_cpu3d/Point2VoxelCPU/Point2VoxelCPU_point_to_voxel_static.cc(21)
num_features == voxels.dim(2) assert faild. your points num features doesn't equal to voxel.

Can you tell me what command to use to visualize the test results? Thank you!

Rapisurazurite commented 2 years ago

demo.py reads the point cloud file with shape (-1, 4); for nuScenes you can replace line 48 in demo.py with points = np.fromfile(self.sample_file_list[index], dtype=np.float32).reshape(-1, 5)
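
For context, a minimal sketch of the layout difference (`bin_path` is a placeholder for any LIDAR_TOP .bin file):

import numpy as np

# KITTI .bin files store 4 floats per point: x, y, z, intensity.
kitti_points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

# nuScenes .bin files store 5 floats per point (x, y, z, intensity plus a
# fifth channel, the ring index in the raw sweeps), so the nuScenes
# pipeline expects 5 point features and the (-1, 4) reshape makes
# voxelization fail with the error above.
nuscenes_points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 5)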

wayne1627 commented 2 years ago

demo.py reads the point cloud file with shape (-1, 4); for nuScenes you can replace line 48 in demo.py with points = np.fromfile(self.sample_file_list[index], dtype=np.float32).reshape(-1, 5)

But the result doesn't look good.

Rapisurazurite commented 2 years ago

demo.py reads the point cloud file with shape (-1, 4); for nuScenes you can replace line 48 in demo.py with points = np.fromfile(self.sample_file_list[index], dtype=np.float32).reshape(-1, 5)

But the result doesn't look good.

Hi, I also tried to visualize the nuScenes-mini dataset; on my PC the result looks OK. Here is the dataset visualization result [image] and the model visualization result [image]. Both use sample index 12 of the mini test set.

751565516 commented 2 years ago

Thank you, and I was wondering what your visualization command is?

Rapisurazurite commented 2 years ago

I modified demo.py to use multi-sweeps.

import argparse
import glob
from pathlib import Path

try:
    import open3d
    from visual_utils import open3d_vis_utils as V
    OPEN3D_FLAG = True
except:
    import mayavi.mlab as mlab
    from visual_utils import visualize_utils as V
    OPEN3D_FLAG = False

import numpy as np
import torch

from pcdet.config import cfg, cfg_from_yaml_file
from pcdet.datasets import DatasetTemplate, build_dataloader
from pcdet.models import build_network, load_data_to_gpu
from pcdet.utils import common_utils

def parse_config():
    parser = argparse.ArgumentParser(description='arg parser')
    parser.add_argument('--cfg_file', type=str, default='cfgs/nuscenes_models/cbgs_pp_multihead.yaml',
                        help='specify the config for demo')
    parser.add_argument('--ckpt', type=str, default="ckpt/pp_multihead_nds5823_updated.pth", help='specify the pretrained model')

    args = parser.parse_args()
    cfg_from_yaml_file(args.cfg_file, cfg)

    return args, cfg

def main():
    args, cfg = parse_config()
    logger = common_utils.create_logger()
    logger.info('-----------------Quick Demo of OpenPCDet-------------------------')
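    # Build the real nuScenes dataset so multi-sweep merging follows DATA_CONFIG (e.g. MAX_SWEEPS)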
    train_set, train_loader, train_sampler = build_dataloader(
        dataset_cfg=cfg.DATA_CONFIG,
        class_names=cfg.CLASS_NAMES,
        batch_size=1,
        dist=None, workers=4,
        logger=logger,
        training=False,
        merge_all_iters_to_one_epoch=False,
        total_epochs=None
    )

    logger.info(f'Total number of samples: \t{len(train_set)}')

    model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=train_set)
    model.load_params_from_file(filename=args.ckpt, logger=logger, to_cpu=True)
    model.cuda()
    model.eval()
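    # Run inference one sample at a time and visualize each prediction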
    with torch.no_grad():
        for idx, data_dict in enumerate(train_set):
            logger.info(f'Visualized sample index: \t{idx + 1}')
            data_dict = train_set.collate_batch([data_dict])
            load_data_to_gpu(data_dict)
            pred_dicts, _ = model.forward(data_dict)

            V.draw_scenes(
                points=data_dict['points'][:, 1:], ref_boxes=pred_dicts[0]['pred_boxes'],
                ref_scores=pred_dicts[0]['pred_scores'], ref_labels=pred_dicts[0]['pred_labels'],
            )

            if not OPEN3D_FLAG:
                mlab.show(stop=True)

    logger.info('Demo done.')

if __name__ == '__main__':
    main()
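
For reference, one way to launch this modified script from tools/ (the checkpoint path below is just the script's own default; substitute your trained weights). Unlike the stock demo.py, it reads samples through the dataset pipeline, so no --data_path argument is needed:

python demo.py --cfg_file cfgs/nuscenes_models/cbgs_pp_multihead.yaml --ckpt ckpt/pp_multihead_nds5823_updated.pth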

github-actions[bot] commented 2 years ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 2 years ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

AbhishekKaushikCV commented 1 year ago

@Rapisurazurite Hi, thanks for the code snippet.

BBAKY commented 1 year ago

@AbhishekKaushikCV Have you managed to add the ground truth to the visualisation?

  • How can I also add the ground-truth boxes to the visualisation? I tried this:

gt_boxes = data_dict['gt_boxes']
..

and then passed the gt_boxes to V.draw_scenes:
            V.draw_scenes(
                points=data_dict['points'][:, 1:], gt_boxes=gt_boxes, ref_boxes=pred_dicts[0]['pred_boxes'],
                ref_scores=pred_dicts[0]['pred_scores'], ref_labels=pred_dicts[0]['pred_labels'],
            )
  • Error message:
    line 88, in translate_boxes_to_open3d_instance
        axis_angles = np.array([0, 0, gt_boxes[6] + 1e-10])
    IndexError: index 6 is out of bounds for axis 0 with size 2
  • Do you know how I can solve this error? Thanks!!
AbhishekKaushikCV commented 1 year ago

Yes. Add these lines of code in the updated demo.py:

gt_boxes = data_dict['gt_boxes']
gt_boxes = gt_boxes.squeeze(dim=0)

and then pass the gt_boxes to V.draw_scenes:
            V.draw_scenes(
                points=data_dict['points'][:, 1:],gt_boxes=gt_boxes, ref_boxes=pred_dicts[0]['pred_boxes'],
                ref_scores=pred_dicts[0]['pred_scores'], ref_labels=pred_dicts[0]['pred_labels'],
            )
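
For context, a sketch of the shapes involved (based on OpenPCDet's collate and prepare_data behaviour; the inline shape comments are illustrative):

# collate_batch() stacks ground-truth boxes into a batch tensor, and
# prepare_data() appends the class index as the last box column, so:
gt_boxes = data_dict['gt_boxes']      # shape (batch_size, num_boxes, 8)
gt_boxes = gt_boxes.squeeze(dim=0)    # shape (num_boxes, 8) when batch_size == 1

# draw_scenes() treats axis 0 as "one box per row", so passing the
# un-squeezed (1, num_boxes, 8) tensor makes it index box coordinates
# out of a (num_boxes, 8) slice, producing the IndexError above.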
BBAKY commented 1 year ago

data_dict = demo_dataset.collate_batch([data_dict])
load_data_to_gpu(data_dict)
pred_dicts, _ = model.forward(data_dict)

        gt_boxes = data_dict['gt_boxes']
        gt_boxes = gt_boxes.squeeze(dim=0)

        V.draw_scenes(
            points=data_dict['points'][:, 1:], gt_boxes=gt_boxes, ref_boxes=pred_dicts[0]['pred_boxes'],
            ref_scores=pred_dicts[0]['pred_scores'], ref_labels=pred_dicts[0]['pred_labels']
        )

@AbhishekKaushikCV I did what you recommended but I got an error:

Traceback (most recent call last):
  File "/home/turtle/Desktop/OpenPCDet/tools/demo.py", line 119, in <module>
    main()
  File "/home/turtle/Desktop/OpenPCDet/tools/demo.py", line 98, in main
    gt_boxes = data_dict['gt_boxes']
KeyError: 'gt_boxes'

Do you have any recommendations? I have tried the demo data and configuration, and the PV-RCNN++ configs for Waymo and KITTI with a custom dataset. Which dataset and configuration did you try? Have you changed anything else in the code?
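
A likely cause: OpenPCDet only puts 'gt_boxes' into data_dict when the sample actually carries annotations, which is typically not the case for test splits or for custom data without labels. A defensive sketch (the guard is illustrative, not part of the snippet above):

# 'gt_boxes' is absent when the split has no annotations (e.g. test
# splits or unlabeled custom data), so guard before visualizing it.
gt_boxes = data_dict.get('gt_boxes', None)
if gt_boxes is not None:
    gt_boxes = gt_boxes.squeeze(dim=0)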

Liuliuliuyikou commented 5 months ago

Yes. Add these lines of code in the updated demo.py:

gt_boxes = data_dict['gt_boxes']
gt_boxes = gt_boxes.squeeze(dim=0)

and then pass the gt_boxes to V.draw_scenes:
            V.draw_scenes(
                points=data_dict['points'][:, 1:],gt_boxes=gt_boxes, ref_boxes=pred_dicts[0]['pred_boxes'],
                ref_scores=pred_dicts[0]['pred_scores'], ref_labels=pred_dicts[0]['pred_labels'],
            )

Hello, I'm trying to visualize the GT boxes of the nuScenes dataset in different colors by category, which seems to require passing in the GT labels. I am new to OpenPCDet. Is the label included in batch_dict["gt_boxes"]? How do I get it? I hope to get your assistance, thank you!
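
For reference, a sketch of one way to split the labels out (this relies on OpenPCDet appending the class index as the last gt_boxes column; the variable names are illustrative):

# prepare_data() appends the class index as the last column, so batched
# gt_boxes have shape (batch_size, num_boxes, 8):
# [x, y, z, dx, dy, dz, heading, class_index]
gt_boxes = data_dict['gt_boxes'].squeeze(dim=0)   # (num_boxes, 8)
gt_labels = gt_boxes[:, -1].long()                # 1-based class indices
gt_boxes = gt_boxes[:, :7]                        # geometry only

The stock draw_scenes() draws all GT boxes in a single color, so coloring them by category would additionally require a small change to the box-drawing code in open3d_vis_utils, e.g. reusing the color map it already applies to ref_labels.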