microsoft / FairMOT

This project provides an official implementation of our recent work on real-time multi-object tracking in videos. Previous works conduct object detection and tracking with two separate models, which makes them very slow. In contrast, we propose a one-stage solution that performs detection and tracking with a single network by elegantly solving the alignment problem. The resulting approach achieves groundbreaking results in terms of both accuracy and speed: (1) it ranks first among all trackers on the MOT challenges; (2) it is significantly faster than the previous state of the art. In addition, it scales gracefully to handle a large number of objects.
MIT License

I got ValueError: cannot reshape array of size 20 into shape (6) when I started custom training. What do you think? #14

Open mgultekin opened 2 years ago

mgultekin commented 2 years ago

!sh /content/FairMOT/experiments/all_dla34.sh

!sh /content/FairMOT/experiments/ft_mot15_dla34.sh

/content/FairMOT/experiments/all_dla34.sh: 1: cd: can't cd to src
Using tensorboardX
Fix size testing.
training chunk_sizes: [8]
The output will be saved to /content/FairMOT/src/lib/../../exp/mot/all_dla34
Setting up data...
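Incidentally, line 1 of all_dla34.sh is a `cd src` that fails here. Since train.py clearly starts afterwards, the shell was probably already inside a directory containing train.py, but it is worth ruling out working-directory problems. A minimal Colab-side sketch, assuming the repository lives at /content/FairMOT as the log paths suggest:

```python
# Run the experiment scripts from the repository root so that their
# `cd src` (the failing first line above) can succeed.
# The path is an assumption taken from the log.
import os
os.chdir('/content/FairMOT')
# Then, in a Colab cell:
#   !sh experiments/all_dla34.sh
```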

dataset summary
OrderedDict([('safety', 1.9113280000000001)])
total # identities: 2
start index
OrderedDict([('safety', 0)])

heads {'hm': 1, 'wh': 2, 'id': 512, 'reg': 2}
Namespace(K=128, arch='dla_34', batch_size=8, cat_spec_wh=False, chunk_sizes=[8], conf_thres=0.6, data_cfg='/content/FairMOT/src/lib/cfg/data.json', data_dir='/data/yfzhang/MOT/JDE', dataset='jde', debug_dir='/content/FairMOT/src/lib/../../exp/mot/all_dla34/debug', dense_wh=False, det_thres=0.3, down_ratio=4, exp_dir='/content/FairMOT/src/lib/../../exp/mot', exp_id='all_dla34', fix_res=True, gpus=[0], gpus_str='0', head_conv=256, heads={'hm': 1, 'wh': 2, 'id': 512, 'reg': 2}, hide_data_time=False, hm_weight=1, id_loss='ce', id_weight=1, img_size=(1088, 608), input_h=1088, input_res=1088, input_video='../videos/MOT16-03.mp4', input_w=608, keep_res=False, load_model='/content/drive/MyDrive/fairmot_dla34.pth', lr=0.0001, lr_step=[20, 27], master_batch_size=8, mean=None, metric='loss', min_box_area=200, mse_loss=False, nID=2, nms_thres=0.4, norm_wh=False, not_cuda_benchmark=False, not_prefetch_test=False, not_reg_offset=False, num_classes=1, num_epochs=30, num_iters=-1, num_stacks=1, num_workers=8, off_weight=1, output_format='video', output_h=272, output_res=272, output_root='../results', output_w=152, pad=31, print_iter=0, reg_loss='l1', reg_offset=True, reid_dim=512, resume=False, root_dir='/content/FairMOT/src/lib/../..', save_all=False, save_dir='/content/FairMOT/src/lib/../../exp/mot/all_dla34', seed=317, std=None, task='mot', test=False, test_mot15=False, test_mot16=False, test_mot17=False, test_mot20=False, track_buffer=30, trainval=False, val_intervals=5, val_mot15=False, val_mot16=False, val_mot17=False, val_mot20=False, vis_thresh=0.5, wh_weight=0.1)
Creating model...
loaded /content/drive/MyDrive/fairmot_dla34.pth, epoch 30
Skip loading parameter wh.2.weight, required shape torch.Size([2, 256, 1, 1]), loaded shape torch.Size([4, 256, 1, 1]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Skip loading parameter wh.2.bias, required shape torch.Size([2]), loaded shape torch.Size([4]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Skip loading parameter id.2.weight, required shape torch.Size([512, 256, 1, 1]), loaded shape torch.Size([128, 256, 1, 1]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Skip loading parameter id.2.bias, required shape torch.Size([512]), loaded shape torch.Size([128]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Starting training...
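The four "Skip loading parameter" warnings mean the pre-trained checkpoint's wh and id heads (4 and 128 channels) do not match the freshly built model (wh=2, id=reid_dim=512 in the Namespace above), so those tensors are dropped and the heads start from random initialization. A minimal sketch of this kind of shape-filtered checkpoint loading, not FairMOT's exact load_model code:

```python
# Sketch of shape-filtered checkpoint loading that mirrors the warnings
# above; FairMOT's own loader is assumed to behave similarly in spirit.
import torch

def load_matching_weights(model, checkpoint_path):
    checkpoint = torch.load(checkpoint_path, map_location='cpu')
    # Checkpoints may wrap the weights in a 'state_dict' entry.
    state_dict = checkpoint.get('state_dict', checkpoint)
    model_state = model.state_dict()
    kept = {}
    for name, tensor in state_dict.items():
        if name in model_state and model_state[name].shape == tensor.shape:
            kept[name] = tensor
        else:
            print(f'Skip loading parameter {name}')
    # strict=False leaves the skipped heads (wh.2.*, id.2.* here) at their
    # fresh random initialization, to be learned during fine-tuning.
    model.load_state_dict(kept, strict=False)
    return model
```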
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00211.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00211.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00481.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00481.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00076.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00076.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00661.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00661.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00931.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00931.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00811.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00811.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00556.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00556.txt
okay on image /content/drive/MyDrive/safety-quipmentColab/images/val/z00766.jpg /content/drive/MyDrive/safety-quipmentColab/labels_with_ids/val/z00766.txt
Traceback (most recent call last):
  File "train.py", line 102, in <module>
    main(opt)
  File "train.py", line 73, in main
    log_dict_train, _ = trainer.train(epoch, train_loader)
  File "/content/FairMOT/src/lib/trains/base_trainer.py", line 124, in train
    return self.run_epoch('train', epoch, data_loader)
  File "/content/FairMOT/src/lib/trains/base_trainer.py", line 67, in run_epoch
    for iter_id, batch in enumerate(data_loader):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/FairMOT/src/lib/datasets/dataset/jde.py", line 427, in __getitem__
    imgs, labels, img_path, (input_h, input_w) = self.get_data(img_path, label_path)
  File "/content/FairMOT/src/lib/datasets/dataset/jde.py", line 194, in get_data
    labels0 = np.loadtxt(label_path, dtype=np.float32).reshape(-1, 6)
ValueError: cannot reshape array of size 20 into shape (6)
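As for the crash itself: jde.py line 194 reshapes each label file into rows of 6 values (class id, track id, x_center, y_center, width, height, all normalized). An array of 20 values cannot be split into rows of 6; 20 = 4 x 5 suggests a file with 4 boxes in plain 5-column YOLO format, i.e. missing the track-id column. Below is a minimal diagnostic sketch; the directory is taken from the log, and the optional 5-to-6 conversion that inserts a -1 identity is an assumption (it lets loading proceed but provides no re-ID supervision for those boxes):

```python
# Find label files whose rows do not have the 6 columns jde.py expects,
# and optionally patch 5-column YOLO rows (class x y w h) by inserting a
# track-id column. Directory taken from the log; adjust as needed.
import glob
import numpy as np

label_dir = '/content/drive/MyDrive/safety-quipmentColab/labels_with_ids'

for path in sorted(glob.glob(label_dir + '/**/*.txt', recursive=True)):
    rows = np.atleast_2d(np.loadtxt(path, dtype=np.float32))
    if rows.size == 0:
        continue  # empty file: no boxes
    if rows.shape[1] != 6:
        print(f'{path}: {rows.shape[1]} columns, expected 6')
        if rows.shape[1] == 5:
            # Hypothetical fix: insert an identity column after the class id.
            # -1 means "no identity" in the JDE label format.
            fixed = np.insert(rows, 1, -1.0, axis=1)
            np.savetxt(path, fixed, fmt='%g')
```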