V2AI / Det3D

World's first general purpose 3D object detection codebase.
https://arxiv.org/abs/1908.09492
Apache License 2.0

bbox_head compute loss error #10

Closed muzi2045 closed 4 years ago

muzi2045 commented 4 years ago

Trying to train CBGS on a single GPU; after modifying some params, this error occurs: image

It looks like when training with the nuScenes dataset, setting the range [-50.4, -50.4, 50.4, 50.4] and voxel_size [0.1, 0.1] generates a [1008, 1008] array -> 1008 * 1008 * 2 = 2032128 anchors per class, but the box_preds output is [1, 126, 126, 18] per class -> 126 * 126 * 2 = 31752.
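
For reference, both counts follow directly from the grid arithmetic: 2032128 corresponds to anchors on the full 1008 x 1008 voxel grid, while 31752 corresponds to that grid downsampled by 8. A minimal sketch in plain Python (the z range and z voxel size are assumed for illustration):

import numpy as np

pc_range = np.array([-50.4, -50.4, -5.0, 50.4, 50.4, 3.0])  # z limits assumed for illustration
voxel_size = np.array([0.1, 0.1, 0.2])                       # z voxel size assumed as well
anchors_per_loc = 2                                          # two yaw rotations per class
ds_factor = 8                                                # backbone downsampling ratio

grid = np.round((pc_range[3:5] - pc_range[0:2]) / voxel_size[:2]).astype(int)  # -> [1008 1008]
print(grid[0] * grid[1] * anchors_per_loc)                                     # 2032128 at full voxel resolution
print((grid[0] // ds_factor) * (grid[1] // ds_factor) * anchors_per_loc)       # 31752 on the 126x126 head output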

def add_sin_difference(boxes1, boxes2):
    # Replace the yaw (last) channel of both box tensors with the sin-difference
    # encoding: sin(a)cos(b) for the predictions, cos(a)sin(b) for the targets.
    rad_pred_encoding = torch.sin(boxes1[..., -1:]) * torch.cos(boxes2[..., -1:])
    rad_tg_encoding = torch.cos(boxes1[..., -1:]) * torch.sin(boxes2[..., -1:])
    boxes1 = torch.cat([boxes1[..., :-1], rad_pred_encoding], dim=-1)
    boxes2 = torch.cat([boxes2[..., :-1], rad_tg_encoding], dim=-1)
    return boxes1, boxes2
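
For context, this is the standard sin-difference encoding for yaw regression: since sin(a)cos(b) - cos(a)sin(b) = sin(a - b), a smooth-L1 loss on the encoded last channel penalises the sine of the angle error. A quick standalone check:

import torch

a, b = torch.tensor([0.3]), torch.tensor([-1.2])
encoded_diff = torch.sin(a) * torch.cos(b) - torch.cos(a) * torch.sin(b)
print(torch.allclose(encoded_diff, torch.sin(a - b)))  # True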

Hoping for any advice @a157801 @poodarchu

muzi2045 commented 4 years ago

It looks like this config affects the reg targets shape:

backbone=dict(
        type="SpMiddleResNetFHD", num_input_features=5, ds_factor=8, norm_cfg=norm_cfg,
    ),

With ds_factor = 1, the backbone output tensor values are normal, but the reg_targets and box_preds dimensions mismatch. With ds_factor = 8, the backbone output values are NaN, but the reg_targets and box_preds shapes match.

poodarchu commented 4 years ago

You need to set ds_factor (downsampling factor) to the exact downsample ratio of the backbone, or an error will be raised.
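
One quick way to verify this is to compare the backbone's dense output against grid_size // ds_factor. A hedged sketch with placeholder tensors (not the actual Det3D call signature):

import torch

grid_size = (1008, 1008)   # full BEV voxel grid (W, H)
ds_factor = 8              # must equal the backbone's true stride

# bev = backbone(voxel_features, coors, batch_size, input_shape)  # hypothetical call
bev = torch.zeros(1, 128, 126, 126)                               # stand-in for the dense BEV output [N, C, H, W]

expected = (grid_size[1] // ds_factor, grid_size[0] // ds_factor)
assert bev.shape[-2:] == torch.Size(expected), (
    f"ds_factor={ds_factor} does not match backbone output {tuple(bev.shape[-2:])}"
)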

muzi2045 commented 4 years ago

Thanks for the reply! The downsample factor I am using is 8, but there are some abnormal values in the backbone output tensor. I think the problem is located in this part: image The output values of ret.dense() are all zero, and that generates NaN values in the next part. Maybe it is a bug in spconv?
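
To narrow down where the values first go wrong, one option is to register forward hooks on every submodule and report the first NaN/Inf or all-zero dense output. A minimal sketch (install_nan_hooks is just an illustrative helper, not part of Det3D):

import torch

def install_nan_hooks(model):
    def make_hook(name):
        def hook(module, inputs, output):
            # Only dense tensors are checked here; for spconv layers inspect
            # the SparseConvTensor's .features separately.
            if torch.is_tensor(output):
                if torch.isnan(output).any() or torch.isinf(output).any():
                    print(f"first NaN/Inf after module: {name}")
                elif output.numel() > 0 and output.abs().max() == 0:
                    print(f"all-zero output after module: {name}")
        return hook
    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# install_nan_hooks(net.backbone)  # e.g. on the SpMiddleResNetFHD backbone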

poodarchu commented 4 years ago

please check your data generation process

muzi2045 commented 4 years ago

Checking the data input:

example points num: torch.Size([40302, 6])
voxels shape: torch.Size([11661, 10, 5])
voxels:  tensor([[[ 4.8300e-02,  1.8905e-01, -1.5104e-02,  3.0000e+00,  0.0000e+00],
         [ 4.5918e-02,  1.7916e-01, -1.4346e-02,  4.0000e+00,  0.0000e+00],
         [ 4.6489e-02,  1.8153e-01, -1.4528e-02,  5.0000e+00,  0.0000e+00],
         [ 3.0577e-02,  1.1835e-01, -9.6192e-03,  5.0000e+00,  0.0000e+00],
         [ 4.2775e-02,  1.6670e-01, -1.3385e-02,  9.0000e+00,  0.0000e+00],
         [ 4.6045e-02,  1.7988e-01, -1.4401e-02,  1.0000e+01,  0.0000e+00],
         [ 4.3247e-02,  1.6862e-01, -1.3533e-02,  3.0000e+00,  0.0000e+00],
         [ 3.5177e-02,  1.3646e-01, -1.1033e-02,  8.0000e+00,  0.0000e+00],
         [ 3.8003e-02,  1.4799e-01, -1.1931e-02,  4.0000e+00,  0.0000e+00],
         [ 2.9690e-02,  1.1506e-01, -9.3616e-03,  4.0000e+00,  0.0000e+00]],

        [[ 3.4465e+00,  1.0413e-01, -1.1676e+00,  1.9000e+01,  0.0000e+00],
         [ 3.4545e+00,  1.4275e-01, -1.1701e+00,  1.9000e+01,  0.0000e+00],
         [ 3.4526e+00,  1.6199e-01, -1.1694e+00,  2.0000e+01,  0.0000e+00],
         [ 3.4467e+00,  1.2346e-01, -1.1676e+00,  2.0000e+01,  0.0000e+00],
         [ 3.4661e+00,  1.8086e-01, -1.1740e+00,  1.9000e+01,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]]],
       device='cuda:0')
coordinates shape: torch.Size([11661, 4])
coordinates: tensor([[  0,  24, 505, 504],
        [  0,  19, 505, 538],
        [  0,  24, 504, 504],
        [  0,  24, 506, 504],
        [  0,  32, 677, 465]], device='cuda:0', dtype=torch.int32)
num_points_in_voxel: torch.Size([11661])
num_points_in_voxel: tensor([10,  5, 10, 10,  5, 10,  1,  1,  1,  1], device='cuda:0',
       dtype=torch.int32)
batch_size: 1
 input features shape: torch.Size([11661, 5])
 input features: tensor([[ 4.0622e-02,  1.5828e-01, -1.2724e-02,  5.5000e+00,  0.0000e+00],
        [ 3.4533e+00,  1.4264e-01, -1.1697e+00,  1.9400e+01,  0.0000e+00],
        [ 9.1351e-03,  3.5254e-02, -2.8756e-03,  8.9000e+00,  0.0000e+00],
        [ 6.1948e-02,  2.4106e-01, -1.9032e-02,  1.2100e+01,  0.0000e+00],
        [-3.8832e+00,  1.7355e+01,  1.5177e+00,  2.8000e+00,  0.0000e+00]],
       device='cuda:0')
 voxel_features shape: torch.Size([11661, 5])
 coors shape: torch.Size([11661, 4])
 input shape: [1008 1008   40]
 batch_size: 1
 sparse shape: [  41 1008 1008]
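
For what it's worth, a quick sanity check over the tensors printed above (variable names follow the log; a minimal sketch):

import torch

def check_voxel_input(voxels, coordinates, sparse_shape=(41, 1008, 1008)):
    # Any NaN/Inf in the raw voxel features would propagate straight through the backbone.
    assert not torch.isnan(voxels).any(), "NaN in voxel features"
    assert not torch.isinf(voxels).any(), "Inf in voxel features"
    # coordinates are (batch, z, y, x); every index must stay inside the sparse grid.
    for dim, size in enumerate(sparse_shape, start=1):
        assert int(coordinates[:, dim].min()) >= 0, f"negative index in coordinate dim {dim}"
        assert int(coordinates[:, dim].max()) < size, f"coordinate dim {dim} exceeds grid size {size}"

# check_voxel_input(voxels, coordinates)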

These backbone input data look fine, and what confuses me is that in the first epoch some losses still come out without NaN values:

box preds : tensor([[[[-9.6849e-03, -7.2358e-02, -2.2122e-02,  4.4190e-02, -5.9396e-05,
            2.6133e-02, -5.1007e-03,  1.1115e-01,  1.7999e-02, -5.6601e-02,
            9.5100e-02,  3.7395e-02, -3.8890e-02, -6.5123e-03, -7.3020e-02,
            3.3581e-02, -1.1426e-01,  1.6578e-02]]]], device='cuda:0',
       grad_fn=<SliceBackward>)

OrderedDict([('loss', [nan, nan, nan, nan, nan, nan]), 
('cls_pos_loss', [0.042508386075496674, 705.7900390625, 548.344970703125, 0.04974224418401718, 1752.1131591796875, 553.69189453125]),
('cls_neg_loss', [281.08453369140625, 314.4130859375, 295.0320739746094, 4259.857421875, 831.6119995117188, 300.91009521484375]), 
('dir_loss_reduced', [0.8444174528121948, 0.6966732740402222, 0.677081823348999, 0.7119743227958679, 0.9957306385040283, 0.6837730407714844]), 
('cls_loss_reduced', [562.2116088867188, 1334.6162109375, 1138.4091796875, 8519.765625, 3415.337158203125, 1155.51220703125]), 
('loc_loss_reduced', [nan, nan, nan, nan, nan, nan]), 
('loc_loss_elem', [[0.05761663615703583, 0.10798001289367676, 0.726987361907959, 0.13482341170310974, 0.14108069241046906, 0.1265934556722641, nan, nan, 0.13661809265613556], [0.029355794191360474, 0.1126936823129654, 0.5263068079948425, 0.327540785074234, 0.4411305785179138, 0.2663941979408264, nan, nan, 0.16507576406002045], [0.41729214787483215, 0.37830650806427, 0.5796593427658081, 0.623768150806427, 0.8224196434020996, 0.4774845838546753, nan, nan, 0.3317689299583435], [0.003682222682982683, 0.05905560776591301, 0.3116404414176941, 0.16559872031211853, 0.4050639867782593, 0.007569252047687769, nan, nan, 0.34379780292510986], [0.1909443438053131, 0.28265494108200073, 0.8142379522323608, 0.44696342945098877, 0.46764999628067017, 0.2768707573413849, nan, nan, 0.506321132183075], [0.1946674883365631, 0.21984361112117767, 0.7735015153884888, 0.15417969226837158, 0.13026654720306396, 0.13411535322666168, nan, nan, 0.605083703994751]]),
 ('num_pos', [15, 25, 28, 1, 10, 29]),
 ('num_neg', [31719, 63447, 63419, 31750, 63488, 63465])])

muzi2045 commented 4 years ago

Which version of spconv are you using in your repo?

poodarchu commented 4 years ago

You can use my fork: https://github.com/poodarchu/spconv/commits/master

muzi2045 commented 4 years ago

Guys, using your fork version, the NaN value problem still exists. Checking the whole repo, spconv is mainly used in scn.py. I am using PyTorch 1.0, while you recommend PyTorch 1.3. Are you sure the fork version can work?

poodarchu commented 4 years ago

I've tested it on PyTorch 1.0 - 1.3. I recommend training with 8 GPUs, or you may need to adjust the lr and weight decay.

muzi2045 commented 4 years ago

Trying to print the generated info messages, I found some strange values. Will the NaN values in gt_boxes and gt_boxes_velocity lead the training pipeline to generate NaN feature maps? @poodarchu

{'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'cam_front_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/CAM_FRONT/n015-2018-07-18-11-07-57+0800__CAM_FRONT__1531883530412470.jpg', 'cam_intrinsic': array([[1.26641720e+03, 0.00000000e+00, 8.16267020e+02],
       [0.00000000e+00, 1.26641720e+03, 4.91507066e+02],
       [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]), 'token': 'e93e98b63d3b40209056d129dc53ceee', 'sweeps': [{'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}, {'lidar_path': '/home/muzi2045/nvme0/Nuscene/trainval/samples/LIDAR_TOP/n015-2018-07-18-11-07-57+0800__LIDAR_TOP__1531883530449377.pcd.bin', 'sample_data_token': '3388933b59444c5db71fade0bbfef470', 'transform_matrix': None, 'time_lag': 0}], 'ref_from_car': array([[ 0.00203327, -0.99998053, -0.00589965,  0.00893789],
       [ 0.99970406,  0.00217566, -0.02422936, -0.89884612],
       [ 0.02424172, -0.00584864,  0.99968902, -1.86253495],
       [ 0.        ,  0.        ,  0.        ,  1.        ]]), 'car_from_global': array([[ 1.23886954e-01,  9.92036123e-01,  2.27234240e-02,
        -7.31089020e+02],
       [-9.92293997e-01,  1.23903923e-01,  6.65101391e-04,
         9.26666849e+02],
       [-2.15571678e-03, -2.26307146e-02,  9.99741568e-01,
         1.60006535e+01],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
         1.00000000e+00]]), 'timestamp': 1531883530.4493768, 'gt_boxes': array([[-1.61843454e+01, -1.17404151e+00, -1.24046699e+00,
         3.00000000e-01,  2.91000000e-01,  7.34000000e-01,
                    nan,             nan,  1.36455067e+00],
       [-1.54493912e+01, -4.28768163e+00, -1.30136452e+00,
         3.15000000e-01,  3.38000000e-01,  7.12000000e-01,
                    nan,             nan,  1.25993331e+00],
       [-1.02275670e+01,  1.94608211e+01,  3.74364245e-02,
         2.31200000e+00,  7.51600000e+00,  3.09300000e+00,
                    nan,             nan, -9.64144213e-01],
       [ 9.21442005e+00, -5.57960735e+00, -1.07856950e+00,
         1.63800000e+00,  4.25000000e+00,  1.44000000e+00,
                    nan,             nan, -1.92929042e+00],
       [-1.57271212e+01, -8.16090985e-01, -6.97936424e-01,
         7.39000000e-01,  5.63000000e-01,  1.71100000e+00,
                    nan,             nan,  1.36455067e+00],
       [ 3.84646471e-01, -1.32284491e+01, -1.21462740e+00,
         1.87100000e+00,  4.47800000e+00,  1.45600000e+00,
                    nan,             nan, -2.66401236e+00],
       [-4.75276596e+01,  3.51366615e+01,  6.94957388e-01,
         2.87700000e+00,  6.37200000e+00,  2.97800000e+00,
                    nan,             nan, -4.15992929e+00],
       [-1.61056541e+01, -7.16475402e-02, -6.86282715e-01,
         6.65000000e-01,  5.44000000e-01,  1.73900000e+00,
                    nan,             nan,  1.36455067e+00],
       [-1.59411481e+01, -2.44787704e+00, -1.28580015e+00,
         3.38000000e-01,  3.09000000e-01,  7.12000000e-01,
                    nan,             nan,  1.35798077e+00],
       [-1.93828613e+01,  2.55393813e+01,  3.19190807e-02,
         2.15600000e+00,  6.22700000e+00,  2.60100000e+00,
                    nan,             nan, -1.05998024e+00]]), 'gt_boxes_velocity': array([[nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan],
       [nan, nan, nan]]), 'gt_names': array(['traffic_cone', 'traffic_cone', 'truck', 'car', 'pedestrian',
       'car', 'truck', 'pedestrian', 'traffic_cone', 'truck'],
      dtype='<U12'), 'gt_boxes_token': array(['173a50411564442ab195e132472fde71',
       '5123ed5e450948ac8dc381772f2ae29a',
       'acce0b7220754600b700257a1de1573d',
       '8d7cb5e96cae48c39ef4f9f75182013a',
       'f64bfd3d4ddf46d7a366624605cb7e91',
       'f9dba7f32ed34ee8adc92096af767868',
       '086e3f37a44e459987cde7a3ca273b5b',
       '3964235c58a745df8589b6a626c29985',
       '31a96b9503204a8688da75abcd4b56b2',
       'b0284e14d17a444a8d0071bd1f03a0a2'], dtype='<U32')}
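
The NaNs sit exactly in the two velocity columns of gt_boxes (and in gt_boxes_velocity), which matches the NaN positions seen earlier in loc_loss_elem; nuScenes reports NaN velocity for boxes where it cannot be estimated. As a stopgap while debugging, one can zero those entries before building the regression targets (a hedged sketch with a hypothetical helper, purely for debugging):

import numpy as np

def sanitize_gt_velocity(info):
    # Replace NaN velocities with zeros so the 9-dim regression targets stay finite.
    gt_boxes = np.asarray(info["gt_boxes"], dtype=np.float64)
    gt_boxes[:, 6:8] = np.nan_to_num(gt_boxes[:, 6:8])
    info["gt_boxes"] = gt_boxes
    info["gt_boxes_velocity"] = np.nan_to_num(np.asarray(info["gt_boxes_velocity"]))
    return info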
a157801 commented 4 years ago

We updated the readme. You should reinstall the nuscenes-devkit.

a157801 commented 4 years ago

The INSTALL.md has been updated, please refer to the last paragraph. You should reinstall setuptools 39.1.0.

MeyLavie notifications@github.com wrote on Tuesday, January 7, 2020 at 10:22 PM:

Hi @a157801, I tried to reinstall the nuscenes-devkit as in INSTALL.md:

$ git clone https://github.com/poodarchu/nuscenes.git
$ cd nuscenes
$ python setup.py install

but I get an error after the last command: error in nuscenes-zbj setup command: "values of 'package_data' dict" must be a list of strings (got '*.json')

I think it may be related to an old version of the nuScenes devkit. Is it OK to install the nuScenes devkit through pip as recommended?
