vita-epfl / butterflydetector

RuntimeError: The size of tensor a (256) must match the size of tensor b (255) at non-singleton dimension 3 #5

Open HuXinzhi1004 opened 2 years ago

HuXinzhi1004 commented 2 years ago

Hello! Thanks for your great work. When I run

time CUDA_VISIBLE_DEVICES=0,4 python3 -m butterflydetector.train --lr=1e-3 --momentum=0.95 --epochs=150 --lr-decay 120 140 --batch-size=16 --basenet=hrnetw32det --head-quad=1 --headnets butterfly10 --square-edge=512 --lambdas 1 1 1 1 --dataset uavdt

I got the following error:

(cluster) hxz@ubuntu16:/home/data/hxz/butterflydetector$ time CUDA_VISIBLE_DEVICES=0,4 python3 -m butterflydetector.train --lr=1e-3 --momentum=0.95 --epochs=150 --lr-decay 120 140 --batch-size=16 --basenet=hrnetw32det --head-quad=1 --headnets butterfly10 --square-edge=512 --lambdas 1 1 1 1 --dataset uavdt
INFO:butterflydetector.logs:{'type': 'process', 'argv': ['/home/data/hxz/butterflydetector/butterflydetector/train.py', '--lr=1e-3', '--momentum=0.95', '--epochs=150', '--lr-decay', '120', '140', '--batch-size=16', '--basenet=hrnetw32det', '--head-quad=1', '--headnets', 'butterfly10', '--square-edge=512', '--lambdas', '1', '1', '1', '1', '--dataset', 'uavdt'], 'args': {'debug': False, 'checkpoint': None, 'basenet': 'hrnetw32det', 'headnets': ['butterfly10'], 'pretrained': True, 'cross_talk': 0.0, 'head_dropout': 0.0, 'head_quad': 1, 'lambdas': [1.0, 1.0, 1.0, 1.0], 'r_smooth': 0.0, 'regression_loss': 'laplace', 'background_weight': 1.0, 'margin_loss': False, 'auto_tune_mtl': False, 'butterfly_side_length': 1, 'momentum': 0.95, 'beta2': 0.999, 'adam_eps': 1e-06, 'nesterov': True, 'weight_decay': 0.0, 'adam': False, 'amsgrad': False, 'lr': 0.001, 'lr_decay': [120, 140], 'lr_burn_in_epochs': 2, 'lr_burn_in_factor': 0.001, 'lr_gamma': 0.1, 'dataset': 'uavdt', 'train_annotations': None, 'train_image_dir': None, 'val_annotations': None, 'val_image_dir': None, 'pre_n_images': 8000, 'n_images': None, 'duplicate_data': None, 'pre_duplicate_data': None, 'loader_workers': 2, 'batch_size': 16, 'output': 'outputs/hrnetw32det-butterfly10-edge512-211210-152433.pkl', 'stride_apply': 1, 'epochs': 150, 'freeze_base': 0, 'pre_lr': 0.0001, 'rescale_images': 1.0, 'orientation_invariant': False, 'update_batchnorm_runningstatistics': False, 'square_edge': 512, 'ema': 0.001, 'disable_cuda': False, 'augmentation': True, 'debug_fields_indices': [], 'profile': None, 'device': device(type='cuda'), 'pin_memory': True}, 'version': '0.0.1', 'hostname': 'ubuntu16'}
INFO:butterflydetector.network.hrnet:=> init weights from normal distribution
INFO:butterflydetector.network.hrnet:=> loading pretrained model pretrained/imagenet/hrnet_w32-36af842e.pth
INFO:butterflydetector.network.basenetworks:stride = 4
INFO:butterflydetector.network.basenetworks:output features = 512
INFO:butterflydetector.network.heads:selected head CompositeField for butterfly10
Using multiple GPUs: 2
INFO:butterflydetector.network.losses:multihead loss: ['butterfly10.c', 'butterfly10.vec1', 'butterfly10.scales1', 'butterfly10.scales2'], [1.0, 1.0, 1.0, 1.0]
/home/data/hxz/butterflydetector/butterflydetector/data_manager/uavdt.py:66: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  self.targets = np.asarray(self.targets)
/home/data/hxz/butterflydetector/butterflydetector/data_manager/uavdt.py:67: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  self.targets_ignore = np.asarray(self.targets_ignore)
Images: 40409
Images: 16580
Images: 8000
INFO:butterflydetector.optimize:SGD optimizer
INFO:butterflydetector.network.trainer:{'type': 'config', 'field_names': ['butterfly10.c', 'butterfly10.vec1', 'butterfly10.scales1', 'butterfly10.scales2']}
/home/hxz/anaconda3/envs/cluster/lib/python3.7/site-packages/torch/nn/functional.py:3635: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode)
Traceback (most recent call last):
  File "/home/hxz/anaconda3/envs/cluster/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/hxz/anaconda3/envs/cluster/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/data/hxz/butterflydetector/butterflydetector/train.py", line 200, in <module>
    main()
  File "/home/data/hxz/butterflydetector/butterflydetector/train.py", line 196, in main
    trainer.loop(train_loader, val_loader, args.epochs, start_epoch=start_epoch)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/trainer.py", line 99, in loop
    self.train(train_scenes, epoch)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/trainer.py", line 173, in train
    loss, head_losses = self.train_batch(data, target, meta, apply_gradients)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/trainer.py", line 116, in train_batch
    loss, head_losses = self.loss(outputs, targets)
  File "/home/hxz/anaconda3/envs/cluster/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/losses.py", line 176, in forward
    for l, f, t in zip(self.losses, head_fields, head_targets)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/losses.py", line 177, in <listcomp>
    for ll in l(f, t)]
  File "/home/hxz/anaconda3/envs/cluster/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/data/hxz/butterflydetector/butterflydetector/network/losses.py", line 457, in forward
    ) / 100.0 / batch_size
RuntimeError: The size of tensor a (256) must match the size of tensor b (255) at non-singleton dimension 3

How can I fix it?
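For context (an outside reading of the traceback, not a statement from the authors): with --square-edge=512 and the reported base stride of 4, the basenet produces 128x128 feature maps; --head-quad=1 upscales them once, and a [:, :, :-1, :-1] crop in heads.py drops one row and column, giving 255 while the encoded targets are 256. A minimal sketch of that arithmetic, assuming dequad_op is a PixelShuffle(2)-style upsampler as in openpifpaf-derived CompositeField heads (the channel count below is arbitrary):

import torch

square_edge = 512                      # --square-edge=512
base_stride = 4                        # the log reports "stride = 4"
feat = square_edge // base_stride      # 128: basenet feature resolution

x = torch.zeros(1, 4, feat, feat)      # dummy head output before upscaling
x = torch.nn.PixelShuffle(2)(x)        # --head-quad=1 doubles resolution -> 256
x = x[:, :, :-1, :-1]                  # the crop in heads.py -> 255

print(x.shape)                         # torch.Size([1, 1, 255, 255]),
                                       # while the encoded target edge is 256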

TonyzBi commented 2 years ago

@HuXinzhi1004 I got the same error with the VisDrone2019 dataset. Did you fix it?

HuXinzhi1004 commented 2 years ago

I just changed the image input size, but I don't know if it's the right thing to do.

TonyzBi commented 2 years ago

@HuXinzhi1004 Thanks. I have tried other input sizes, but they just report a different size error; the head output never seems to match the target size when the loss is computed.
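A possible reason why changing the input size alone does not help (hedged arithmetic, not repo code, assuming the targets are encoded at half the input resolution, which the 256-vs-255 error for a 512 crop suggests): with the [:-1, :-1] crop the head edge is always odd, while half of any edge divisible by 4 is even, so the one-pixel mismatch persists.

# assumed: stride-4 features, one dequad doubling, then the [:-1, :-1] crop
for edge in (384, 448, 512, 640):
    head_edge = 2 * (edge // 4) - 1      # head output edge after the crop
    target_edge = edge // 2              # assumed target field edge
    print(edge, head_edge, target_edge)  # head_edge is always target_edge - 1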

TonyzBi commented 2 years ago

@HuXinzhi1004 I changed the size of the output features in the Heads module and it worked. But I am not sure whether this modification changes what the author intended; I don't know if it's the right thing to do either.

+++ b/butterflydetector/network/heads.py
@@ -190,15 +190,26 @@ class CompositeField(torch.nn.Module):
         scales_x = [torch.nn.functional.relu(scale_x) for scale_x in scales_x]

         # upscale
+        # for _ in range(self._quad):
+        #     classes_x = [self.dequad_op(class_x)[:, :, :-1, :-1]
+        #                  for class_x in classes_x]
+        #     regs_x = [self.dequad_op(reg_x)[:, :, :-1, :-1]
+        #               for reg_x in regs_x]
+        #     regs_x_spread = [self.dequad_op(reg_x_spread)[:, :, :-1, :-1]
+        #                      for reg_x_spread in regs_x_spread]
+        #     scales_x = [self.dequad_op(scale_x)[:, :, :-1, :-1]
+        #                 for scale_x in scales_x]
+
         for _ in range(self._quad):
-            classes_x = [self.dequad_op(class_x)[:, :, :-1, :-1]
+            classes_x = [self.dequad_op(class_x)
                          for class_x in classes_x]
-            regs_x = [self.dequad_op(reg_x)[:, :, :-1, :-1]
+            regs_x = [self.dequad_op(reg_x)
                       for reg_x in regs_x]
-            regs_x_spread = [self.dequad_op(reg_x_spread)[:, :, :-1, :-1]
+            regs_x_spread = [self.dequad_op(reg_x_spread)
                              for reg_x_spread in regs_x_spread]
-            scales_x = [self.dequad_op(scale_x)[:, :, :-1, :-1]
+            scales_x = [self.dequad_op(scale_x)
                         for scale_x in scales_x]
+
         # reshape regressions
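One way to catch this class of mismatch earlier (a hypothetical helper, not part of the repository): compare head output and target shapes once before training starts, so an off-by-one surfaces with a readable message instead of deep inside the loss. The batch size and class count below are only illustrative.

import torch

def check_field_shapes(head_outputs, head_targets):
    """Hypothetical sanity check: assert that the spatial sizes of head
    outputs and their encoded targets agree before the loss is evaluated."""
    for i, (out, tgt) in enumerate(zip(head_outputs, head_targets)):
        # each entry may be a tensor or a list/tuple of tensors
        out_t = out[0] if isinstance(out, (list, tuple)) else out
        tgt_t = tgt[0] if isinstance(tgt, (list, tuple)) else tgt
        out_hw = tuple(out_t.shape[-2:])
        tgt_hw = tuple(tgt_t.shape[-2:])
        assert out_hw == tgt_hw, (
            f'head {i}: output {out_hw} vs target {tgt_hw} -- '
            'check --square-edge, --head-quad and the [:-1, :-1] crop in heads.py'
        )

# example with dummy tensors reproducing the reported mismatch
outs = [torch.zeros(16, 10, 255, 255)]
tgts = [torch.zeros(16, 10, 256, 256)]
check_field_shapes(outs, tgts)   # raises AssertionError: output (255, 255) vs target (256, 256)

Whether dropping the crop (as in the patch above) or keeping it and adjusting how the targets are encoded is the intended fix would need confirmation from the maintainers.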