kaishijeng opened this issue 6 years ago
I used the following script to train, but got the error below:
export CACHE_NAME=cache_voc MODEL_NAME=model_voc MODEL=model.mobilenet.MobileNet
python3 train.py -b 32 -lr 1e-3 -e 160 -m cache/name=$CACHE_NAME model/name=$MODEL_NAME model/dnn=$MODEL train/optimizer='lambda params, lr: torch.optim.SGD(params, lr, momentum=0.9)' train/scheduler='lambda optimizer: torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 90], gamma=0.1)' -d
Error:
epoch=0/160:   0%|          | 0/518 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 366, in __call__
    kwargs = self.step(inference, optimizer, data)
  File "train.py", line 322, in step
    loss, debug = pybenchmark.profile('loss')(model.loss)(self.anchors, norm_data(data, height, width, rows, cols), pred, self.config.getfloat('model', 'threshold'))
  File "/usr/local/lib/python3.5/dist-packages/pybenchmark/profile.py", line 24, in profile
    return function(*args, **kw)
  File "/home/fc/2TB/src/yolo2-pytorch/model/__init__.py", line 154, in loss
    loss['center'] = F.mse_loss(pred['center_offset'][_positive], _center_offset[_positive], size_average=False)
RuntimeError: The shape of the mask [32, 144, 5, 1] at index 3 does not match the shape of the indexed tensor [32, 144, 5, 2] at index 3
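For reference, the failure reproduces outside the repo with just the shapes from the traceback. A minimal sketch with stand-in tensors (not the project's real data):

import torch
import torch.nn.functional as F

# Stand-in tensors with the shapes reported in the traceback.
pred_center = torch.randn(32, 144, 5, 2)    # indexed tensor: [32, 144, 5, 2]
target_center = torch.randn(32, 144, 5, 2)
positive = torch.rand(32, 144, 5, 1) > 0.5  # boolean mask: [32, 144, 5, 1]

# Boolean-mask indexing requires the mask to match the tensor's shape,
# so this raises the shape-mismatch error at index 3 (RuntimeError in
# older PyTorch, IndexError in newer releases).
loss = F.mse_loss(pred_center[positive], target_center[positive], size_average=False)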
Use the latest version. I've tested MobileNet with these command arguments, and no such error occurred.
I used the latest code and still got the same error. This error also happens with the yolo2.Tiny model. Can you share your command and config.ini file?
Thanks,
After further debugging: _center_offset has shape [32, 144, 5, 2], but _positive has shape [32, 144, 5, 1], and this causes the error in loss['center'] = F.mse_loss(pred['center_offset'][_positive], _center_offset[_positive], size_average=False)
Do you know why _positive shape is not [32, 144, 5, 2]?
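For what it's worth, one workaround that makes the shapes agree is to broadcast the mask across the size-2 last dimension before indexing. A sketch against stand-in tensors, not the repo's actual fix:

import torch
import torch.nn.functional as F

pred_center = torch.randn(32, 144, 5, 2)
target_center = torch.randn(32, 144, 5, 2)
positive = torch.rand(32, 144, 5, 1) > 0.5

# Expand the size-1 last dimension of the mask to match the indexed
# tensors, then apply the same mask to both sides.
positive = positive.expand_as(pred_center)  # -> [32, 144, 5, 2]
loss = F.mse_loss(pred_center[positive], target_center[positive], size_average=False)
print(loss)  # scalar: sum of squared errors over the selected elements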
Thanks,
Is the following script correct for training yolo2 with MobileNet?
python3 train.py -b 64 -lr 1e-3 -e 160 -m cache/name=cache_voc model/name=model_voc model/dnn=model.mobilenet.MobileNet train/optimizer='lambda params, lr: torch.optim.SGD(params, lr, momentum=0.9)' train/scheduler='lambda optimizer: torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 90], gamma=0.1)' -d
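For reference, the two overrides appear to be evaluated by train.py into ordinary PyTorch objects. Standalone, with a hypothetical stand-in model, they amount to roughly this sketch:

import torch

model = torch.nn.Linear(10, 10)  # stand-in for the real detector

# train/optimizer: SGD with momentum 0.9 at the -lr value (1e-3 here).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# train/scheduler: multiply the learning rate by 0.1 at epochs 60 and 90.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 90], gamma=0.1)

for epoch in range(160):  # -e 160
    # ... one training epoch would run here ...
    scheduler.step()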
Thanks,