LaCandela opened this issue 3 months ago
Hi @LaCandela! I think you can improve the detection score by scaling the learning rate lr according to your batch size. As a general starting point, I'd scale it linearly from the authors' total batch size of 32 (1 batch per GPU). Since you are setting the batch size to 20 on a single GPU, the lr should be lr = (0.0004 / 32) * 20.
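In code, that linear-scaling rule is just the following (the `scale_lr` helper is my own illustration, not part of the Fast-BEV repo):

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale a reference learning rate linearly with the total batch size,
    keeping lr / total_batch_size constant relative to the reference run."""
    return base_lr * new_batch / base_batch

# Reference run: lr=0.0004 at total batch size 32 (the paper setting).
# Single-GPU run with samples_per_gpu=20 -> total batch size 20.
lr = scale_lr(0.0004, 32, 20)
print(lr)  # 0.00025
```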
I hope you are still working on Fast-BEV and can help me figure out how to set up the environment to get the published inference time. For the M0 model I only get 2 FPS on a single RTX 4090. I haven't compiled the files from the "script/view_transform_cuda" folder and wonder whether that is the issue, or whether a TensorRT implementation is necessary.
I am trying to reproduce the published model quality but haven't succeeded so far. I retrained an M0 model and got only 0.215 NDS and 0.04 mAP instead of the published 0.411 NDS and 0.277 mAP.
I've only made small modifications in the config file:
I started the training without slurm:
CUDA_VISIBLE_DEVICES=0 python tools/train.py ~/Fast-BEV/configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py --work-dir /Fast-BEV/runs/train_repro/
However, I can reproduce the published NDS and mAP values on the validation set with the pre-trained models from this repository, which suggests that the environment and data setup are fine (at least for validation).
Do you have any idea where I can improve the training pipeline? Is there any hyperparameter I could tune?
For reference, here is the config file that I used:
# -*- coding: utf-8 -*-
model = dict(
    type='FastBEV',
    style='v1',
    backbone=dict(
        type='ResNet',
        depth=18,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
    ),
)
# If point cloud range is changed, the models should also change their point
# cloud range accordingly
point_cloud_range = [-50, -50, -5, 50, 50, 3]
# For nuScenes we usually do 10-class detection
class_names = [
    'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle',
    'motorcycle', 'pedestrian', 'traffic_cone', 'barrier'
]
dataset_type = 'NuScenesMultiView_Map_Dataset2'
data_root = './data/nuscenes/'
# Input modality for nuScenes dataset, this is consistent with the submission
# format which requires the information in input_modality.
input_modality = dict(
    use_lidar=False,
    use_camera=True,
    use_radar=False,
    use_map=False,
    use_external=True)
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data_config = {
    'src_size': (900, 1600),
    'input_size': (256, 704),
    # train-aug
}
file_client_args = dict(backend='disk')
# file_client_args = dict(
#     backend='petrel',
#     path_mapping=dict({
#         data_root: 'public-1424:s3://openmmlab/datasets/detection3d/nuscenes/'}))
train_pipeline = [
    dict(type='MultiViewPipeline', sequential=True, n_images=6, n_times=4,
         transforms=[
             dict(type='LoadImageFromFile', file_client_args=file_client_args)]),
    dict(type='LoadAnnotations3D',
         with_bbox=True, with_label=True, with_bev_seg=True),
    dict(type='LoadPointsFromFile', dummy=True,
         coord_type='LIDAR', load_dim=5, use_dim=5),
    dict(type='RandomFlip3D',
         flip_2d=False, sync_2d=False,
         flip_ratio_bev_horizontal=0.5,
         flip_ratio_bev_vertical=0.5,
         update_img2lidar=True),
    dict(type='GlobalRotScaleTrans',
         rot_range=[-0.3925, 0.3925],
         scale_ratio_range=[0.95, 1.05],
         translation_std=[0.05, 0.05, 0.05],
         update_img2lidar=True),
    dict(type='RandomAugImageMultiViewImage', data_config=data_config),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='KittiSetOrigin', point_cloud_range=point_cloud_range),
    dict(type='NormalizeMultiviewImage', **img_norm_cfg),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=[
        'img', 'gt_bboxes', 'gt_labels',
        'gt_bboxes_3d', 'gt_labels_3d', 'gt_bev_seg'])]
test_pipeline = [
    dict(type='MultiViewPipeline', sequential=True, n_images=6, n_times=4,
         transforms=[
             dict(type='LoadImageFromFile', file_client_args=file_client_args)]),
    dict(type='LoadPointsFromFile', dummy=True,
         coord_type='LIDAR', load_dim=5, use_dim=5),
    dict(type='RandomAugImageMultiViewImage',
         data_config=data_config, is_train=False),
    dict(type='TestTimeAugImageMultiViewImage',
         data_config=data_config, is_train=False),
]
data = dict(
    samples_per_gpu=20,
    workers_per_gpu=5,
    train=dict(
        type='CBGSDataset',
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            pipeline=train_pipeline,
            classes=class_names,
            modality=input_modality,
            test_mode=False,
            with_box2d=True,
            box_type_3d='LiDAR',
            ann_file='data/nuscenes/nuscenes_infos_train_4d_interval3_max60.pkl',
            load_interval=1,
            sequential=True,
            n_times=4,
            train_adj_ids=[1, 3, 5],
            speed_mode='abs_velo',
            max_interval=10,
            min_interval=0,
            fix_direction=True,
            prev_only=True,
            test_adj='prev',
            test_adj_ids=[1, 3, 5],
            test_time_id=None,
        )
    ),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        pipeline=test_pipeline,
        classes=class_names,
        modality=input_modality,
        test_mode=True,
        with_box2d=True,
        box_type_3d='LiDAR',
        ann_file='data/nuscenes/nuscenes_infos_val_4d_interval3_max60.pkl',
        load_interval=1,
        sequential=True,
        n_times=4,
        train_adj_ids=[1, 3, 5],
        speed_mode='abs_velo',
        max_interval=10,
        min_interval=0,
        fix_direction=True,
        test_adj='prev',
        test_adj_ids=[1, 3, 5],
        test_time_id=None,
    ),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        pipeline=test_pipeline,
        classes=class_names,
        modality=input_modality,
        test_mode=True,
        with_box2d=True,
        box_type_3d='LiDAR',
        ann_file='data/nuscenes/nuscenes_infos_val_4d_interval3_max60.pkl',
        load_interval=1,
        sequential=True,
        n_times=4,
        train_adj_ids=[1, 3, 5],
        speed_mode='abs_velo',
        max_interval=10,
        min_interval=0,
        fix_direction=True,
        test_adj='prev',
        test_adj_ids=[1, 3, 5],
        test_time_id=None,
    )
)
optimizer = dict(
    type='AdamW2',
    lr=0.0004,
    weight_decay=0.01,
    paramwise_cfg=dict(
        custom_keys={'backbone': dict(lr_mult=0.1, decay_mult=1.0)}))
optimizer_config = dict(grad_clip=dict(max_norm=35., norm_type=2))
# learning policy
lr_config = dict(
    policy='poly',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=1e-6,
    power=1.0,
    min_lr=0,
    by_epoch=False)
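For intuition, here is my reading of the schedule that lr_config describes: a linear warmup over the first 1000 iterations followed by polynomial decay. The `lr_at` helper and the `max_iters` value are my own hypothetical sketch of the mmcv-style behaviour, not code from the repo:

```python
def lr_at(it, base_lr=0.0004, warmup_iters=1000, warmup_ratio=1e-6,
          power=1.0, min_lr=0.0, max_iters=100_000):
    """Learning rate at iteration `it` under linear warmup + poly decay."""
    if it < warmup_iters:
        # Linear warmup: ramp from base_lr * warmup_ratio up to base_lr.
        k = (1 - it / warmup_iters) * (1 - warmup_ratio)
        return base_lr * (1 - k)
    # Poly decay (power=1.0 means linear) from base_lr down to min_lr.
    coeff = (1 - it / max_iters) ** power
    return (base_lr - min_lr) * coeff + min_lr

print(lr_at(0))        # tiny: base_lr * warmup_ratio
print(lr_at(50_000))   # halfway through the linear decay
print(lr_at(100_000))  # reaches min_lr
```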
total_epochs = 20
checkpoint_config = dict(interval=1)
log_config = dict(
    interval=10,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook'),
    ])
evaluation = dict(interval=2)
dist_params = dict(backend='nccl')
find_unused_parameters = True  # todo: fix number of FPN outputs
log_level = 'INFO'
load_from = "/Fast-BEV/checkpoints/cascade_mask_rcnn_r18_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5110_segm_mAP_0.4070.pth"
resume_from = None
workflow = [('train', 1), ('val', 1)]
# fp16 settings, the loss scale is specifically tuned to avoid Nan
fp16 = dict(loss_scale='dynamic')
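For what it's worth, loss_scale='dynamic' means the loss scale is adjusted at runtime rather than fixed: it grows while gradients stay finite and is cut back on overflow, keeping fp16 gradients representable. The class below is a toy, framework-free illustration of that mechanism (the class name and defaults are my own, not mmcv's):

```python
class DynamicLossScaler:
    """Toy dynamic loss scaler: grow the scale after a run of overflow-free
    steps, shrink it immediately when an overflow is detected."""

    def __init__(self, init_scale=2.0**16, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow: bool) -> None:
        if found_overflow:
            # Skip this step's update and shrink the scale.
            self.scale *= self.backoff_factor
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= self.growth_factor
                self._good_steps = 0
```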