fundamentalvision / BEVFormer

[ECCV 2022] This is the official implementation of BEVFormer, a camera-only framework for autonomous driving perception, e.g., 3D object detection and semantic map segmentation.
https://arxiv.org/abs/2203.17270
Apache License 2.0

bevformer-base AP=0: I changed the encoder layer num from 6 to 3 and got an AP=0 result #66

Open deeptoby opened 2 years ago

deeptoby commented 2 years ago

Hello zhiqi:

zhiqi-li commented 2 years ago

Can you provide more details? When you set num_layers=6, did you get normal results?

deeptoby commented 2 years ago

Thanks for your reply! Here are the details of my modifications.

First, we changed the img_backbone depth from 101 to 50:

```python
img_backbone=dict(
    type='ResNet',
    depth=50,
    num_stages=4,
    out_indices=(1, 2, 3),
    frozen_stages=1,
    norm_cfg=dict(type='BN2d', requires_grad=False),
    norm_eval=True,
    style='caffe',
    # original DCNv2 will print log when perform load_state_dict
    dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
    stage_with_dcn=(False, False, True, True)),
```

Then we changed the encoder layer num from 6 to 3:

```python
encoder=dict(
    type='BEVFormerEncoder',
    num_layers=3,
    pc_range=point_cloud_range,
    num_points_in_pillar=4,
    return_intermediate=False,
    transformerlayers=dict(
        type='BEVFormerLayer',
```


We also set `video_test_mode=False`.

Test result after 10 epochs: AP = 0.0.

zhiqi-li commented 2 years ago

Can you provide the training log?

deeptoby commented 2 years ago

```
2022-07-13 07:48:49,431 - mmdet - INFO - Epoch [2][50/3517]  lr: 1.991e-04, eta: 2 days, 6:35:14, time: 2.612, data_time: 0.523, memory: 18492, loss_cls: 0.0685, loss_bbox: 0.2451, d0.loss_cls: 0.0697, d0.loss_bbox: 0.0798, d1.loss_cls: 0.0696, d1.loss_bbox: 0.0839, d2.loss_cls: 0.0696, d2.loss_bbox: 0.0880, d3.loss_cls: 0.0696, d3.loss_bbox: 0.0977, d4.loss_cls: 0.0608, d4.loss_bbox: 0.2381, loss: 1.2405, grad_norm: 34.3968
2022-07-13 07:50:54,896 - mmdet - INFO - Epoch [2][100/3517] lr: 1.991e-04, eta: 2 days, 6:34:40, time: 2.509, data_time: 0.374, memory: 18492, loss_cls: 0.0701, loss_bbox: 0.2633, d0.loss_cls: 0.0710, d0.loss_bbox: 0.0780, d1.loss_cls: 0.0708, d1.loss_bbox: 0.0829, d2.loss_cls: 0.0710, d2.loss_bbox: 0.0876, d3.loss_cls: 0.0711, d3.loss_bbox: 0.0964, d4.loss_cls: 0.0619, d4.loss_bbox: 0.2380, loss: 1.2622, grad_norm: 35.7356
2022-07-13 07:52:58,740 - mmdet - INFO - Epoch [2][150/3517] lr: 1.991e-04, eta: 2 days, 6:33:27, time: 2.476, data_time: 0.343, memory: 18492, loss_cls: 0.0692, loss_bbox: 0.2370, d0.loss_cls: 0.0702, d0.loss_bbox: 0.0730, d1.loss_cls: 0.0703, d1.loss_bbox: 0.0775, d2.loss_cls: 0.0705, d2.loss_bbox: 0.0824, d3.loss_cls: 0.0703, d3.loss_bbox: 0.0907, d4.loss_cls: 0.0613, d4.loss_bbox: 0.2317, loss: 1.2040, grad_norm: 31.0999
2022-07-13 07:55:03,667 - mmdet - INFO - Epoch [2][200/3517] lr: 1.991e-04, eta: 2 days, 6:32:36, time: 2.498, data_time: 0.379, memory: 18492, loss_cls: 0.0693, loss_bbox: 0.2312, d0.loss_cls: 0.0703, d0.loss_bbox: 0.0765, d1.loss_cls: 0.0705, d1.loss_bbox: 0.0810, d2.loss_cls: 0.0705, d2.loss_bbox: 0.0846, d3.loss_cls: 0.0705, d3.loss_bbox: 0.0932, d4.loss_cls: 0.0603, d4.loss_bbox: 0.2219, loss: 1.1998, grad_norm: 32.8055
2022-07-13 07:57:07,341 - mmdet - INFO - Epoch [2][250/3517] lr: 1.991e-04, eta: 2 days, 6:31:19, time: 2.475, data_time: 0.348, memory: 18492, loss_cls: 0.0703, loss_bbox: 0.2185, d0.loss_cls: 0.0715, d0.loss_bbox: 0.0770, d1.loss_cls: 0.0716, d1.loss_bbox: 0.0801, d2.loss_cls: 0.0716, d2.loss_bbox: 0.0830, d3.loss_cls: 0.0716, d3.loss_bbox: 0.0900, d4.loss_cls: 0.0619, d4.loss_bbox: 0.2159, loss: 1.1830, grad_norm: 31.7012
2022-07-13 07:59:12,041 - mmdet - INFO - Epoch [2][300/3517] lr: 1.991e-04, eta: 2 days, 6:30:20, time: 2.494, data_time: 0.367, memory: 18492, loss_cls: 0.0701, loss_bbox: 0.2115, d0.loss_cls: 0.0713, d0.loss_bbox: 0.0765, d1.loss_cls: 0.0712, d1.loss_bbox: 0.0766, d2.loss_cls: 0.0714, d2.loss_bbox: 0.0799, d3.loss_cls: 0.0713, d3.loss_bbox: 0.0866, d4.loss_cls: 0.0616, d4.loss_bbox: 0.2083, loss: 1.1563, grad_norm: 33.7621
2022-07-13 08:01:17,137 - mmdet - INFO - Epoch [2][350/3517] lr: 1.991e-04, eta: 2 days, 6:29:29, time: 2.503, data_time: 0.361, memory: 18493, loss_cls: 0.0708, loss_bbox: 0.2231, d0.loss_cls: 0.0715, d0.loss_bbox: 0.0779, d1.loss_cls: 0.0713, d1.loss_bbox: 0.0795, d2.loss_cls: 0.0713, d2.loss_bbox: 0.0814, d3.loss_cls: 0.0712, d3.loss_bbox: 0.0877, d4.loss_cls: 0.0621, d4.loss_bbox: 0.2116, loss: 1.1794, grad_norm: 34.5665
2022-07-13 08:03:16,983 - mmdet - INFO - Epoch [2][400/3517] lr: 1.991e-04, eta: 2 days, 6:26:47, time: 2.397, data_time: 0.317, memory: 18493, loss_cls: 0.0734, loss_bbox: 0.2812, d0.loss_cls: 0.0740, d0.loss_bbox: 0.0776, d1.loss_cls: 0.0740, d1.loss_bbox: 0.0800, d2.loss_cls: 0.0738, d2.loss_bbox: 0.0831, d3.loss_cls: 0.0739, d3.loss_bbox: 0.0912, d4.loss_cls: 0.0650, d4.loss_bbox: 0.2439, loss: 1.2910, grad_norm: 35.4136
2022-07-13 08:05:21,410 - mmdet - INFO - Epoch [2][450/3517] lr: 1.991e-04, eta: 2 days, 6:25:39, time: 2.488, data_time: 0.325, memory: 18493, loss_cls: 0.0726, loss_bbox: 0.2536, d0.loss_cls: 0.0725, d0.loss_bbox: 0.0744, d1.loss_cls: 0.0723, d1.loss_bbox: 0.0770, d2.loss_cls: 0.0723, d2.loss_bbox: 0.0819, d3.loss_cls: 0.0723, d3.loss_bbox: 0.0900, d4.loss_cls: 0.0637, d4.loss_bbox: 0.2279, loss: 1.2304, grad_norm: 34.6175
2022-07-13 08:07:23,003 - mmdet - INFO - Epoch [2][500/3517] lr: 1.991e-04, eta: 2 days, 6:23:33, time: 2.432, data_time: 0.312, memory: 18493, loss_cls: 0.0721, loss_bbox: 0.2296, d0.loss_cls: 0.0723, d0.loss_bbox: 0.0749, d1.loss_cls: 0.0721, d1.loss_bbox: 0.0772, d2.loss_cls: 0.0723, d2.loss_bbox: 0.0799, d3.loss_cls: 0.0724, d3.loss_bbox: 0.0887, d4.loss_cls: 0.0634, d4.loss_bbox: 0.2128, loss: 1.1877, grad_norm: 31.8288
```

deeptoby commented 2 years ago

I didn't load the pretrained model, and I didn't remove the frozen_stages setting! --!
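For readers hitting the same problem: with frozen_stages=1 and no pretrained checkpoint, the stem and first stage keep their random initialization for the entire run, which matches the near-constant losses above. Below is a minimal sketch of one way to load ImageNet weights for the R50 backbone in an mmdet 2.x-style config; the `torchvision://resnet50` checkpoint and `style='pytorch'` are assumptions for illustration, not the repo's shipped settings (the official configs load a full pretrained checkpoint via `load_from` instead):

```python
# Minimal sketch (assumptions: torchvision ImageNet weights, pytorch style).
# The style must match the checkpoint: torchvision weights are 'pytorch',
# not 'caffe'. DCN offset weights are absent from the ImageNet checkpoint;
# they will be reported as missing keys and trained from scratch.
img_backbone=dict(
    type='ResNet',
    depth=50,
    num_stages=4,
    out_indices=(1, 2, 3),
    frozen_stages=1,  # freezing stage 1 is only sensible once weights load
    norm_cfg=dict(type='BN2d', requires_grad=False),
    norm_eval=True,
    style='pytorch',
    init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
```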

zhiqi-li commented 2 years ago

Please provide your config file; the printed losses look clearly abnormal.

wuhen777 commented 4 months ago

(quotes deeptoby's training log from above)

May I ask how you managed to use only a partial dataset during training? I would like to train on only a portion of the complete dataset.
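One common approach with mmdet3d-style datasets is the `load_interval` argument of `NuScenesDataset`, which keeps every N-th sample from the annotation file. A minimal sketch, assuming BEVFormer's usual config variables (`data_root`, `train_pipeline`, `class_names`, `input_modality`) are defined above and that `CustomNuScenesDataset` forwards the argument to its mmdet3d base class:

```python
# Minimal sketch: train on roughly 1/4 of nuScenes via load_interval.
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=4,
    train=dict(
        type='CustomNuScenesDataset',
        data_root=data_root,
        ann_file=data_root + 'nuscenes_infos_temporal_train.pkl',
        pipeline=train_pipeline,
        classes=class_names,
        modality=input_modality,
        test_mode=False,
        # Keep every 4th sample; load_interval is a standard argument of
        # mmdet3d's NuScenesDataset (assumed forwarded by the subclass).
        load_interval=4))
```

Alternatively, you can generate a smaller `nuscenes_infos_*.pkl` by slicing the info list before saving it; the effect is the same.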