Closed: xiaowanzizz closed this issue 3 years ago
Could you provide more details about the command you used and post the training log?
Command:

```
python ./tools/train_net.py --num-gpus 1 --config-file ./configs/yolof_R_50_C5_1x.yaml
```

Exception during training:

```
Traceback (most recent call last):
  File "/home/zzf/Desktop/CenterNet2/detectron2/engine/train_loop.py", line 138, in train
    self.run_step()
  File "/home/zzf/Desktop/CenterNet2/detectron2/engine/defaults.py", line 441, in run_step
    self._trainer.run_step()
  File "/home/zzf/Desktop/CenterNet2/detectron2/engine/train_loop.py", line 232, in run_step
    loss_dict = self.model(data)
  File "/home/zzf/miniconda3/envs/torch1.7.1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zzf/Desktop/detectron2-0.4/YOLOF-master/yolof/modeling/yolof.py", line 295, in forward
    pred_logits, pred_anchor_deltas)
  File "/home/zzf/Desktop/detectron2-0.4/YOLOF-master/yolof/modeling/yolof.py", line 407, in losses
    matched_predicted_boxes, target_boxes, reduction="sum")
  File "/home/zzf/miniconda3/envs/torch1.7.1/lib/python3.7/site-packages/fvcore-0.1.3.post20210317-py3.7.egg/fvcore/nn/giou_loss.py", line 32, in giou_loss
    assert (x2 >= x1).all(), "bad box: x1 larger than x2"
Traceback (most recent call last):
  File "./tools/train_net.py", line 249, in
```
The learning rate and steps in the config are set for 8 GPUs. Have you adjusted the learning rate and steps according to the linear learning rate scaling rule, as described in Detectron2?
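The scaling rule mentioned above can be sketched as follows. This is an illustrative helper (`scale_schedule` is not a Detectron2 API, and the 8-GPU base values are made-up placeholders, not YOLOF's actual config): scale the learning rate by the ratio of GPUs (i.e. of the total batch size), and stretch the iteration counts by the inverse ratio so the model still sees the same number of images.

```python
# Hypothetical helper illustrating Detectron2's linear LR scaling rule.
# Neither the function name nor the example numbers come from the YOLOF
# config; they are placeholders for a schedule tuned on 8 GPUs.

def scale_schedule(base_lr, max_iter, steps, reference_gpus, actual_gpus):
    """Scale LR down and lengthen the schedule when training on fewer
    GPUs, keeping the total number of images seen roughly constant."""
    factor = actual_gpus / reference_gpus
    return (
        base_lr * factor,                       # smaller LR for the smaller total batch
        int(max_iter / factor),                 # proportionally more iterations
        tuple(int(s / factor) for s in steps),  # LR-decay points shifted the same way
    )

# Example: a schedule written for 8 GPUs, run on 1 GPU.
lr, max_iter, steps = scale_schedule(0.12, 22500, (15000, 20000),
                                     reference_gpus=8, actual_gpus=1)
print(lr, max_iter, steps)  # 0.015 180000 (120000, 160000)
```

Keeping the original 8-GPU learning rate on a single GPU makes the effective per-step update roughly 8x too aggressive, which is exactly the kind of divergence that produces the degenerate boxes in the traceback above.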
Thanks. Changing the learning rate solved the problem.
I'm hitting the same problem now; it's really frustrating.
The learning rate and steps in the config are set for 8 GPUs. Have you adjusted the learning rate and steps according to the linear learning rate scaling rule, as described in Detectron2?
Thanks!
```
  File "/home/zzf/miniconda3/envs/torch1.7.1/lib/python3.7/site-packages/fvcore-0.1.3.post20210317-py3.7.egg/fvcore/nn/giou_loss.py", line 32, in giou_loss
AssertionError: bad box: x1 larger than x2
```
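The assertion above fires inside fvcore's `giou_loss` when a predicted box in `(x1, y1, x2, y2)` format has `x2 < x1`, which typically happens after the regression head diverges (e.g. from a too-high learning rate) and starts emitting huge or non-finite deltas. The check it performs can be mirrored on plain Python values as a minimal sketch (the helper name `box_is_valid` is ours, not fvcore's):

```python
import math

def box_is_valid(box):
    """Mirror the sanity check inside fvcore's giou_loss for one box in
    (x1, y1, x2, y2) format. Boxes degenerate (x2 < x1 or y2 < y1) when
    the regression head diverges, e.g. after a loss spike from a
    too-high learning rate."""
    x1, y1, x2, y2 = box
    if not all(math.isfinite(v) for v in box):
        return False  # NaN/Inf predictions also break the loss
    return x2 >= x1 and y2 >= y1

boxes = [
    (0.0, 0.0, 10.0, 10.0),         # well-formed box
    (5.0, 5.0, 2.0, 8.0),           # x2 < x1 -> would trip the assert
    (0.0, 0.0, float("nan"), 1.0),  # NaN from exploded gradients
]
print([box_is_valid(b) for b in boxes])  # [True, False, False]
```

Logging which predictions fail such a check right before the loss call can confirm whether the failure is a training-divergence issue (fixed by the LR scaling above) rather than bad ground-truth annotations.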