open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Failed training with example config: ValueError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `NuScenesDataset` in mmdet3d/datasets/nuscenes_dataset.py: Annotation must have data_list and metainfo keys #2654

Open gaborlegradi opened 1 year ago

gaborlegradi commented 1 year ago

Prerequisite

Task

I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.

Branch

main branch https://github.com/open-mmlab/mmdetection3d

Environment

Installation

I have installed mmdet3d several ways, both from source and with mim/pip. Here I summarize the simplest way:

```shell
conda create -n mmd3d_pl_nobuild python=3.10
conda activate mmd3d_pl_nobuild
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U openmim
mim install 'mmengine==0.8.2'
mim install 'mmcv==2.0.1'
mim install 'mmdet==3.1.0'
mim install 'mmdet3d==1.2.0'
```

Checking installation

```python
>>> import torch
>>> torch.cuda.is_available()
True
>>> import mmengine
>>> import mmcv
>>> import mmdet
>>> import mmdet3d
>>> mmengine.__version__
'0.8.2'
>>> mmcv.__version__
'2.0.1'
>>> mmdet.__version__
'3.1.0'
>>> mmdet3d.__version__
'1.2.0'
>>> torch.__version__
'2.0.1'
```

Config

I downloaded the config file with `mim download mmdet3d --config pointpillars_hv_secfpn_sbn-all_8xb4-2x_nus-3d.py --dest .` and modified only the access to the NuScenes data; check the `data_root` and `ann_file` fields.

```python
voxel_size = [0.25, 0.25, 8]
model = dict(
    type='MVXFasterRCNN',
    data_preprocessor=dict(
        type='Det3DDataPreprocessor',
        voxel=True,
        voxel_layer=dict(
            max_num_points=64,
            point_cloud_range=[-50, -50, -5, 50, 50, 3],
            voxel_size=[0.25, 0.25, 8],
            max_voxels=(30000, 40000))),
    pts_voxel_encoder=dict(
        type='HardVFE',
        in_channels=4,
        feat_channels=[64, 64],
        with_distance=False,
        voxel_size=[0.25, 0.25, 8],
        with_cluster_center=True,
        with_voxel_center=True,
        point_cloud_range=[-50, -50, -5, 50, 50, 3],
        norm_cfg=dict(type='naiveSyncBN1d', eps=0.001, momentum=0.01)),
    pts_middle_encoder=dict(
        type='PointPillarsScatter', in_channels=64, output_shape=[400, 400]),
    pts_backbone=dict(
        type='SECOND',
        in_channels=64,
        norm_cfg=dict(type='naiveSyncBN2d', eps=0.001, momentum=0.01),
        layer_nums=[3, 5, 5],
        layer_strides=[2, 2, 2],
        out_channels=[64, 128, 256]),
    pts_neck=dict(
        type='SECONDFPN',
        norm_cfg=dict(type='naiveSyncBN2d', eps=0.001, momentum=0.01),
        in_channels=[64, 128, 256],
        upsample_strides=[1, 2, 4],
        out_channels=[128, 128, 128]),
    pts_bbox_head=dict(
        type='Anchor3DHead',
        num_classes=10,
        in_channels=384,
        feat_channels=384,
        use_direction_classifier=True,
        anchor_generator=dict(
            type='AlignedAnchor3DRangeGenerator',
            ranges=[
                [-49.6, -49.6, -1.80032795, 49.6, 49.6, -1.80032795],
                [-49.6, -49.6, -1.74440365, 49.6, 49.6, -1.74440365],
                [-49.6, -49.6, -1.68526504, 49.6, 49.6, -1.68526504],
                [-49.6, -49.6, -1.67339111, 49.6, 49.6, -1.67339111],
                [-49.6, -49.6, -1.61785072, 49.6, 49.6, -1.61785072],
                [-49.6, -49.6, -1.80984986, 49.6, 49.6, -1.80984986],
                [-49.6, -49.6, -1.763965, 49.6, 49.6, -1.763965],
            ],
            sizes=[
                [4.60718145, 1.95017717, 1.72270761],
                [6.73778078, 2.4560939, 2.73004906],
                [12.01320693, 2.87427237, 3.81509561],
                [1.68452161, 0.60058911, 1.27192197],
                [0.7256437, 0.66344886, 1.75748069],
                [0.40359262, 0.39694519, 1.06232151],
                [0.48578221, 2.49008838, 0.98297065],
            ],
            custom_values=[0, 0],
            rotations=[0, 1.57],
            reshape_out=True),
        assigner_per_size=False,
        diff_rad_by_sin=True,
        dir_offset=-0.7854,
        bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder', code_size=9),
        loss_cls=dict(
            type='mmdet.FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(
            type='mmdet.SmoothL1Loss',
            beta=0.1111111111111111,
            loss_weight=1.0),
        loss_dir=dict(
            type='mmdet.CrossEntropyLoss', use_sigmoid=False,
            loss_weight=0.2)),
    train_cfg=dict(
        pts=dict(
            assigner=dict(
                type='Max3DIoUAssigner',
                iou_calculator=dict(type='BboxOverlapsNearest3D'),
                pos_iou_thr=0.6,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                ignore_iof_thr=-1),
            allowed_border=0,
            code_weight=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2],
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        pts=dict(
            use_rotate_nms=True,
            nms_across_levels=False,
            nms_pre=1000,
            nms_thr=0.2,
            score_thr=0.05,
            min_bbox_size=0,
            max_num=500)))
point_cloud_range = [-50, -50, -5, 50, 50, 3]
class_names = [
    'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle',
    'motorcycle', 'pedestrian', 'traffic_cone', 'barrier'
]
metainfo = dict(classes=class_names)
dataset_type = 'NuScenesDataset'
data_root = '/data_repo/nuScences/nuScenes/'
input_modality = dict(use_lidar=True, use_camera=False)
data_prefix = dict(pts='samples/LIDAR_TOP', img='', sweeps='sweeps/LIDAR_TOP')
backend_args = None
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=5,
        use_dim=5,
        backend_args=None),
    dict(type='LoadPointsFromMultiSweeps', sweeps_num=10, backend_args=None),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.3925, 0.3925],
        scale_ratio_range=[0.95, 1.05],
        translation_std=[0, 0, 0]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectNameFilter', classes=class_names),
    dict(type='PointShuffle'),
    dict(
        type='Pack3DDetInputs',
        keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']),
]
test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=5,
        use_dim=5,
        backend_args=None),
    dict(
        type='LoadPointsFromMultiSweeps',
        sweeps_num=10,
        test_mode=True,
        backend_args=None),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1333, 800),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(
                type='GlobalRotScaleTrans',
                rot_range=[0, 0],
                scale_ratio_range=[1.0, 1.0],
                translation_std=[0, 0, 0]),
            dict(type='RandomFlip3D'),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=point_cloud_range),
        ]),
    dict(type='Pack3DDetInputs', keys=['points']),
]
eval_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=5,
        use_dim=5,
        backend_args=None),
    dict(
        type='LoadPointsFromMultiSweeps',
        sweeps_num=10,
        test_mode=True,
        backend_args=None),
    dict(type='Pack3DDetInputs', keys=['points']),
]
train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='NuScenesDataset',
        data_root=data_root,
        ann_file='nuscenes_infos_train.pkl',
        pipeline=train_pipeline,
        metainfo=metainfo,
        modality=input_modality,
        test_mode=False,
        data_prefix=data_prefix,
        box_type_3d='LiDAR',
        backend_args=backend_args))
test_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='NuScenesDataset',
        data_root=data_root,
        ann_file='nuscenes_infos_val.pkl',
        pipeline=test_pipeline,
        metainfo=metainfo,
        modality=input_modality,
        data_prefix=data_prefix,
        test_mode=True,
        box_type_3d='LiDAR',
        backend_args=backend_args))
val_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='NuScenesDataset',
        data_root=data_root,
        ann_file='nuscenes_infos_val.pkl',
        pipeline=test_pipeline,
        metainfo=metainfo,
        modality=input_modality,
        test_mode=True,
        data_prefix=data_prefix,
        box_type_3d='LiDAR',
        backend_args=backend_args))
val_evaluator = dict(
    type='NuScenesMetric',
    data_root=data_root,
    ann_file='/data_repo/nuScences/nuScenes/nuscenes_infos_val.pkl',
    metric='bbox',
    backend_args=backend_args)
test_evaluator = dict(
    type='NuScenesMetric',
    data_root=data_root,
    ann_file='/data_repo/nuScences/nuScenes/nuscenes_infos_val.pkl',
    metric='bbox',
    backend_args=backend_args)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='Det3DLocalVisualizer', vis_backends=vis_backends, name='visualizer')
lr = 0.001
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=0.001, weight_decay=0.01),
    clip_grad=dict(max_norm=35, norm_type=2))
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=24, val_interval=24)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
        end=1000),
    dict(
        type='MultiStepLR',
        begin=0,
        end=24,
        by_epoch=True,
        milestones=[20, 23],
        gamma=0.1),
]
auto_scale_lr = dict(enable=False, base_batch_size=32)
default_scope = 'mmdet3d'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=-1),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='Det3DVisualizationHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
```
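As a sanity check on the config values, the BEV grid that `PointPillarsScatter` expects can be derived from `point_cloud_range` and `voxel_size` (a quick arithmetic sketch, not part of the config itself):

```python
# Values copied from the config above.
point_cloud_range = [-50, -50, -5, 50, 50, 3]
voxel_size = [0.25, 0.25, 8]

# BEV grid size = spatial extent divided by voxel size, per axis.
nx = int((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0])
ny = int((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1])
print([ny, nx])  # -> [400, 400], matching output_shape in pts_middle_encoder
```

So if you change the range or voxel size, `output_shape=[400, 400]` has to be updated accordingly.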

Starting train script

I ran `train.py` (copied from `tools/train.py`): `python train.py checkpoints/pointpillars_hv_secfpn_sbn-all_8xb4-2x_nus-3d.py`

train.py:

```python
# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import logging
import os
import os.path as osp

from mmengine.config import Config, DictAction
from mmengine.logging import print_log
from mmengine.registry import RUNNERS
from mmengine.runner import Runner

from mmdet3d.utils import replace_ceph_backend


def parse_args():
    parser = argparse.ArgumentParser(description='Train a 3D detector')
    parser.add_argument('config', help='train config file path')
    parser.add_argument('--work-dir', help='the dir to save logs and models')
    parser.add_argument(
        '--amp',
        action='store_true',
        default=False,
        help='enable automatic-mixed-precision training')
    parser.add_argument(
        '--auto-scale-lr',
        action='store_true',
        help='enable automatically scaling LR.')
    parser.add_argument(
        '--resume',
        nargs='?',
        type=str,
        const='auto',
        help='If specify checkpoint path, resume from it, while if not '
        'specify, try to auto resume from the latest checkpoint '
        'in the work directory.')
    parser.add_argument(
        '--ceph', action='store_true', help='Use ceph as data storage backend')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    # When using PyTorch version >= 2.0.0, the `torch.distributed.launch`
    # will pass the `--local-rank` parameter to `tools/train.py` instead
    # of `--local_rank`.
    parser.add_argument('--local_rank', '--local-rank', type=int, default=0)
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args


def main():
    args = parse_args()

    # load config
    cfg = Config.fromfile(args.config)

    # TODO: We will unify the ceph support approach with other OpenMMLab repos
    if args.ceph:
        cfg = replace_ceph_backend(cfg)

    cfg.launcher = args.launcher
    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)

    # work_dir is determined in this priority: CLI > segment in file > filename
    if args.work_dir is not None:
        # update configs according to CLI args if args.work_dir is not None
        cfg.work_dir = args.work_dir
    elif cfg.get('work_dir', None) is None:
        # use config filename as default work_dir if cfg.work_dir is None
        cfg.work_dir = osp.join('./work_dirs',
                                osp.splitext(osp.basename(args.config))[0])

    # enable automatic-mixed-precision training
    if args.amp is True:
        optim_wrapper = cfg.optim_wrapper.type
        if optim_wrapper == 'AmpOptimWrapper':
            print_log(
                'AMP training is already enabled in your config.',
                logger='current',
                level=logging.WARNING)
        else:
            assert optim_wrapper == 'OptimWrapper', (
                '`--amp` is only supported when the optimizer wrapper type is '
                f'`OptimWrapper` but got {optim_wrapper}.')
            cfg.optim_wrapper.type = 'AmpOptimWrapper'
            cfg.optim_wrapper.loss_scale = 'dynamic'

    # enable automatically scaling LR
    if args.auto_scale_lr:
        if 'auto_scale_lr' in cfg and \
                'enable' in cfg.auto_scale_lr and \
                'base_batch_size' in cfg.auto_scale_lr:
            cfg.auto_scale_lr.enable = True
        else:
            raise RuntimeError('Can not find "auto_scale_lr" or '
                               '"auto_scale_lr.enable" or '
                               '"auto_scale_lr.base_batch_size" in your'
                               ' configuration file.')

    # resume is determined in this priority: resume from > auto_resume
    if args.resume == 'auto':
        cfg.resume = True
        cfg.load_from = None
    elif args.resume is not None:
        cfg.resume = True
        cfg.load_from = args.resume

    # build the runner from config
    if 'runner_type' not in cfg:
        # build the default runner
        runner = Runner.from_cfg(cfg)
    else:
        # build customized runner from the registry
        # if 'runner_type' is set in the cfg
        runner = RUNNERS.build(cfg)

    # start training
    runner.train()


if __name__ == '__main__':
    main()
```

Error MSG


```
Traceback (most recent call last):
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmdet3d/datasets/nuscenes_dataset.py", line 102, in __init__
    super().__init__(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmdet3d/datasets/det3d_dataset.py", line 129, in __init__
    super().__init__(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 245, in __init__
    self.full_init()
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 296, in full_init
    self.data_list = self.load_data_list()
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 438, in load_data_list
    raise ValueError('Annotation must have data_list and metainfo '
ValueError: Annotation must have data_list and metainfo keys

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 122, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/runner/loops.py", line 44, in __init__
    super().__init__(runner, dataloader)
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/runner/base_loop.py", line 26, in __init__
    self.dataloader = runner.build_dataloader(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1351, in build_dataloader
    dataset = DATASETS.build(dataset_cfg)
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 144, in build_from_cfg
    raise type(e)(
ValueError: class `NuScenesDataset` in mmdet3d/datasets/nuscenes_dataset.py: Annotation must have data_list and metainfo keys

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/u29a06/mmdetection3d_pipeline/network/mmdetection3d/tools/train.py", line 135, in <module>
    main()
  File "/home/u29a06/mmdetection3d_pipeline/network/mmdetection3d/tools/train.py", line 131, in main
    runner.train()
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1701, in train
    self._train_loop = self.build_train_loop(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1493, in build_train_loop
    loop = LOOPS.build(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/u29a06/miniconda3/envs/mmd3d_pl_nobuild/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 144, in build_from_cfg
    raise type(e)(
ValueError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `NuScenesDataset` in mmdet3d/datasets/nuscenes_dataset.py: Annotation must have data_list and metainfo keys
```
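The check that fails here is `BaseDataset.load_data_list()`, which requires the loaded annotation file to be a dict with `metainfo` and `data_list` keys. A quick way to test whether an info `.pkl` already has that layout (a minimal sketch; `info_pkl_is_v2` is a hypothetical helper written for this thread, and the description of the old layout is an assumption based on the error above):

```python
import pickle


def info_pkl_is_v2(path):
    """Return True if the info file has the v2 layout that
    BaseDataset.load_data_list() expects: a dict containing
    both 'metainfo' and 'data_list' keys."""
    with open(path, 'rb') as f:
        infos = pickle.load(f)
    return isinstance(infos, dict) and {'metainfo', 'data_list'} <= infos.keys()
```

If this returns False for `nuscenes_infos_train.pkl`, the file needs to be converted before training.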

Reproduces the problem - code sample

I ran `train.py` (copied from `tools/train.py`): `python train.py checkpoints/pointpillars_hv_secfpn_sbn-all_8xb4-2x_nus-3d.py`

Reproduces the problem - command or script

I ran `train.py` (copied from `tools/train.py`): `python train.py checkpoints/pointpillars_hv_secfpn_sbn-all_8xb4-2x_nus-3d.py`

Reproduces the problem - error message


(Identical to the traceback shown in the Error MSG section above.)

Additional information

No response

Xiangxu-0103 commented 1 year ago

It seems that your annotation info files have not been updated to the v2 format. Please use the script `tools/dataset_converters/update_infos_to_v2.py` to update the info files.

gaborlegradi commented 1 year ago

Hello Xiangxu-0103, please check the following error for me. Everything is the same as written above. Below you can see the command and the resulting error:

```
$ python tools/dataset_converters/update_infos_to_v2.py --dataset nuscenes --pkl-path /data_repo/nuScences/nuScenes/nuscenes_infos_train_old.pkl --out-dir /data_repo/nuScences/nuScenes/tmp/
/data_repo/nuScences/nuScenes/nuscenes_infos_train_old.pkl will be modified.
Reading from input file: /data_repo/nuScences/nuScenes/nuscenes_infos_train_old.pkl.
Traceback (most recent call last):
  File "/home/u29a06/git/mmdetection3d_pipeline/network/mmdetection3d/tools/dataset_converters/update_infos_to_v2.py", line 1159, in <module>
    update_pkl_infos(
  File "/home/u29a06/git/mmdetection3d_pipeline/network/mmdetection3d/tools/dataset_converters/update_infos_to_v2.py", line 1148, in update_pkl_infos
    update_nuscenes_infos(pkl_path=pkl_path, out_dir=out_dir)
  File "/home/u29a06/git/mmdetection3d_pipeline/network/mmdetection3d/tools/dataset_converters/update_infos_to_v2.py", line 269, in update_nuscenes_infos
    nusc = NuScenes(
  File "/home/u29a06/miniconda3/envs/mmd3d_pl/lib/python3.10/site-packages/nuscenes/nuscenes.py", line 62, in __init__
    assert osp.exists(self.table_root), 'Database version not found: {}'.format(self.table_root)
AssertionError: Database version not found: ./data/nuscenes/v1.0-trainval
```

In fact, I do have the v1.0-trainval folder, but its path is /data_repo/nuScences/nuScenes/v1.0-trainval/.

Xiangxu-0103 commented 1 year ago

It is indeed a bug that the default path is hard-coded as data/nuscenes; we will check and fix it ASAP. As a workaround, you can symlink your nuScenes directory to data/nuscenes.
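For the paths used in this thread, the suggested workaround could look like this (a sketch; run it from the mmdetection3d repo root, and substitute your own dataset location for `NUSC_ROOT`):

```shell
# The converter looks for ./data/nuscenes, so point that path at the
# actual dataset root (the path below is the one from this thread).
NUSC_ROOT=/data_repo/nuScences/nuScenes
mkdir -p ./data
ln -sfn "$NUSC_ROOT" ./data/nuscenes
```

After that, `./data/nuscenes/v1.0-trainval` resolves to the real `v1.0-trainval` folder and the converter should find the database tables.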