rolan123 opened this issue 4 years ago
My PyTorch version is 1.1.0. Try replacing

opp_labels = (box_preds[..., -1] > 0) ^ dir_labels.bool()

with

opp_labels = (box_preds[..., -1] > 0) ^ dir_labels.byte()
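For reference, a version-tolerant variant of that line, written as a minimal sketch: it assumes box_preds and dir_labels are torch.Tensors as in get_guided_anchors, and the helper name opposite_direction_mask is made up for illustration, not part of SA-SSD.

```python
import torch

def opposite_direction_mask(box_preds, dir_labels):
    # Hypothetical helper (not SA-SSD code): reproduces the XOR used in
    # get_guided_anchors while working on both old and new PyTorch.
    pos = box_preds[..., -1] > 0
    if hasattr(torch.Tensor, "bool"):           # PyTorch >= 1.2 has Tensor.bool()
        return pos.bool() ^ dir_labels.bool()
    return pos.byte() ^ dir_labels.byte()       # PyTorch 1.1.x fallback
```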
@PengfeiMa Thank you, the error in multi-class training has been resolved. I ran training and evaluation and found a new problem: because there are so few cyclists, the model over-fits and the cyclist AP starts to decline, while the car AP keeps improving. Cars are about twenty times as numerous as cyclists.
How should I deal with this?
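One common way to soften this kind of imbalance is to weight the classification loss by inverse class frequency. A minimal generic sketch follows (this is not SA-SSD's actual loss code, and the class counts are only illustrative placeholders with roughly the 20:1 car/cyclist ratio described above):

```python
import torch
import torch.nn.functional as F

# Illustrative per-class object counts for Car, Pedestrian, Cyclist (placeholders).
class_counts = torch.tensor([30000.0, 4500.0, 1500.0])
# Standard "balanced" weighting: total / (num_classes * count_per_class).
class_weights = class_counts.sum() / (len(class_counts) * class_counts)

def weighted_cls_loss(logits, labels):
    # logits: (N, 3) classification scores, labels: (N,) class indices in {0, 1, 2}.
    return F.cross_entropy(logits, labels, weight=class_weights.to(logits.device))
```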
@rolan123 What are your final per-class results?
epoch=70
Car AP@0.70, 0.70, 0.70:
bbox AP: 97.99, 90.07, 89.72
bev  AP: 98.49, 89.36, 88.96
3d   AP: 90.10, 86.40, 86.47
aos  AP: 97.94, 89.88, 89.28
Car AP@0.70, 0.50, 0.50:
bbox AP: 97.99, 90.07, 89.72
bev  AP: 98.07, 90.06, 89.78
3d   AP: 98.04, 90.06, 89.77
aos  AP: 97.94, 89.88, 89.28
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP: 64.24, 56.45, 55.70
bev  AP: 63.57, 55.13, 54.11
3d   AP: 61.72, 53.77, 46.88
aos  AP: 60.37, 52.79, 51.84
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP: 64.24, 56.45, 55.70
bev  AP: 75.22, 66.63, 58.93
3d   AP: 74.84, 66.39, 58.78
aos  AP: 60.37, 52.79, 51.84
Cyclist AP@0.50, 0.50, 0.50:
bbox AP: 89.66, 84.00, 82.35
bev  AP: 87.88, 80.72, 78.53
3d   AP: 87.29, 79.94, 77.71
aos  AP: 89.59, 83.48, 81.79
Cyclist AP@0.50, 0.25, 0.25:
bbox AP: 89.66, 84.00, 82.35
bev  AP: 89.05, 85.74, 79.35
3d   AP: 89.04, 85.74, 79.35
aos  AP: 89.59, 83.48, 81.79
epoch=80
Car AP@0.70, 0.70, 0.70:
bbox AP: 98.24, 90.19, 89.81
bev  AP: 97.79, 89.50, 89.08
3d   AP: 90.12, 86.85, 86.61
aos  AP: 98.21, 90.02, 89.55
Car AP@0.70, 0.50, 0.50:
bbox AP: 98.24, 90.19, 89.81
bev  AP: 98.28, 90.18, 89.85
3d   AP: 98.25, 90.17, 89.83
aos  AP: 98.21, 90.02, 89.55
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP: 63.58, 62.11, 55.34
bev  AP: 63.02, 54.94, 53.83
3d   AP: 61.89, 53.77, 52.48
aos  AP: 61.06, 58.98, 52.45
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP: 63.58, 62.11, 55.34
bev  AP: 74.64, 66.71, 65.34
3d   AP: 74.73, 66.70, 65.33
aos  AP: 61.06, 58.98, 52.45
Cyclist AP@0.50, 0.50, 0.50:
bbox AP: 89.08, 88.90, 82.40
bev  AP: 86.78, 85.01, 78.72
3d   AP: 81.74, 80.38, 78.26
aos  AP: 88.99, 88.63, 82.14
Cyclist AP@0.50, 0.25, 0.25:
bbox AP: 89.08, 88.90, 82.40
bev  AP: 87.58, 85.69, 79.43
3d   AP: 87.58, 85.69, 79.43
aos  AP: 88.99, 88.63, 82.14
@rolan123 I tested my epoch-70 and epoch-80 checkpoints, and the results are almost the same as yours.
Error in multi-class training:
Traceback (most recent call last):
  File "train.py", line 127, in <module>
    main()
  File "train.py", line 117, in main
    log_interval = cfg.log_config.interval
  File "/root/aaaaaaaa/SA-SSD/tools/train_utils/__init__.py", line 99, in train_model
    log_interval = log_interval
  File "/root/aaaaaaaa/SA-SSD/tools/train_utils/__init__.py", line 57, in train_one_epoch
    outputs = batch_processor(model, data_batch)
  File "/root/aaaaaaaa/SA-SSD/tools/train_utils/__init__.py", line 29, in batch_processor
    losses = model(data)
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/root/miniconda3/envs/SASSD/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/aaaaaaaa/SA-SSD/mmdet/models/detectors/base.py", line 79, in forward
    return self.forward_train(img, img_meta, **kwargs)
  File "/root/aaaaaaaa/SA-SSD/mmdet/models/detectors/single_stage.py", line 97, in forward_train
    ret['anchors_mask'], ret['gt_bboxes'], ret['gt_labels'], thr=self.train_cfg.rpn.anchor_thr)
  File "/root/aaaaaaaa/SA-SSD/mmdet/models/single_stage_heads/ssd_rotate_head.py", line 361, in get_guided_anchors
    opp_labels = (box_preds[..., -1] > 0) ^ dir_labels.bool()
AttributeError: 'Tensor' object has no attribute 'bool'
How to deal with it?
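A quick way to confirm that this is the PyTorch-version issue addressed at the top of the thread (a minimal check, not part of SA-SSD):

```python
import torch

print(torch.__version__)                      # e.g. 1.1.0
t = torch.tensor([0, 1], dtype=torch.uint8)
print(hasattr(t, "bool"))                     # False on 1.1.x, True on >= 1.2
# On 1.1.x, calling t.bool() raises the AttributeError shown above;
# either upgrade PyTorch or switch the XOR operands to .byte() as suggested.
```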