open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

error when using rtmdet instance segmentation model with SemiBaseDetector #10189

Open zjhthu opened 1 year ago

zjhthu commented 1 year ago

Describe the bug
I get the following error when using the RTMDet instance segmentation model with SemiBaseDetector for semi-supervised learning.

    main()                                                   
  File "tools/train.py", line 129, in main                                                                                                                                                                                                             
    runner.train()                                           
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1706, in train
    model = self.train_loop.run()  # type: ignore
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run                                                                                                                                             
    self.run_epoch()                                         
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)                         
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(                                                                                
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 326, in _run_forward
    results = self(**data, mode=mode)                                                                                      
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/base.py", line 92, in forward
    return self.loss(inputs, data_samples)                   
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/semi_base.py", line 89, in loss
    losses.update(**self.loss_by_pseudo_instances(
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/semi_base.py", line 137, in loss_by_pseudo_instances
    losses = self.student.loss(batch_inputs, batch_data_samples)
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/single_stage.py", line 78, in loss
    losses = self.bbox_head.loss(x, batch_data_samples)
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 123, in loss
    losses = self.loss_by_feat(*loss_inputs)
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py", line 751, in loss_by_feat
    loss_mask = self.loss_mask_by_feat(mask_feat, flatten_kernels,
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py", line 630, in loss_mask_by_feat
    pos_gt_masks = torch.cat(pos_gt_masks, 0)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 256 and 249 (The offending index is 0)

It seems the pseudo-masks generated by the teacher network do not match what the student network expects. I checked the pos_gt_masks variable: all masks are empty, but they have different spatial shapes:

[tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 161, 161), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 199, 199), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 162, 162), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 198, 198), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 186, 186), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 163, 163), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 133, 133), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 185, 185), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 153, 153), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 129, 129), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 186, 186), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 187, 187), 
dtype=torch.bool), tensor([], device='cuda:0', size=(0, 219, 219), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 141, 141), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 128, 128), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 227, 227), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 224, 224), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 212, 212), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 178, 178), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 195, 195), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 210, 210), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 133, 133), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 141, 141), dtype=torch.bool)]
> /data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py(636)loss_mask_by_feat()
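For reference, the shape rule that triggers the RuntimeError can be reproduced in miniature with NumPy, whose concatenate follows the same rule as torch.cat: every dimension except the concatenation axis must match exactly, even when the tensors are empty along that axis (the sizes below are illustrative, mirroring entries from the dump above):

```python
import numpy as np

# Two "empty" mask stacks: zero instances, but different spatial sizes,
# mirroring pos_gt_masks entries such as (0, 256, 256) and (0, 249, 249).
a = np.empty((0, 256, 256), dtype=bool)
b = np.empty((0, 249, 249), dtype=bool)

try:
    np.concatenate([a, b], axis=0)  # analogous to torch.cat(pos_gt_masks, 0)
except ValueError as e:
    print("concatenation failed:", e)

# With matching spatial sizes the concatenation succeeds even for empty masks.
c = np.empty((0, 256, 256), dtype=bool)
merged = np.concatenate([a, c], axis=0)
print(merged.shape)  # (0, 256, 256)
```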

Reproduction

I will upload the config if needed. The experiments are based on a custom dataset.

Environment

sys.platform: linux
Python: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.2, V11.2.152
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.9.0+cu102
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications

TorchVision: 0.10.0+cu102
OpenCV: 4.7.0
MMEngine: 0.7.2
MMDetection: 3.0.0+ecac3a7

zjhthu commented 1 year ago

I ran another experiment that performs only the object detection task, and no error was encountered. I will check the difference between the two tasks.

zjhthu commented 1 year ago

The root cause is this line: RTMDet generates masks using img_shape. My data augmentation config comes from semi_coco_detection, which does not pad the image, so img_shape equals the resize shape and varies per image. After adding the pad operation dict(type='Pad', size=image_size, pad_val=dict(img=(pad_val, pad_val, pad_val))), the error disappears.
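For anyone hitting the same issue, here is a sketch of where the Pad operation fits in the pipeline. The surrounding transforms, image_size, and pad_val are placeholders from my setup, not fixed values; adapt them to your own config:

```python
# Hypothetical fragment of a train pipeline config (placeholder values).
image_size = (640, 640)
pad_val = 114

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='RandomResize', scale=image_size, keep_ratio=True),
    # The key addition: pad every image to a fixed size so that
    # img_shape (and hence the generated mask size) is constant.
    dict(type='Pad', size=image_size,
         pad_val=dict(img=(pad_val, pad_val, pad_val))),
    dict(type='PackDetInputs'),
]
```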

zjhthu commented 1 year ago

But I am wondering why there was no error when I did not pad the images in the detection-only experiment. Does MMDet pad the images itself?

Czm369 commented 1 year ago

RTMDet requires a fixed input image size, while the images fed to the semi-supervised pipeline have random sizes. At present, the semi-supervised learning code does not support instance segmentation.
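The mismatch described above can be sketched concretely: with variable input sizes each image yields masks at its own resolution, whereas padding every image to one fixed size makes the per-image mask stacks concatenable. A NumPy sketch (not mmdet code; the sizes are taken from the traceback above for illustration):

```python
import numpy as np

# Per-image mask stacks at the varying resolutions seen in the traceback.
variable_masks = [np.zeros((0, s, s), dtype=bool) for s in (256, 249, 161)]

# Pad each stack's spatial dims up to the fixed size RTMDet expects.
fixed = 256
padded = [
    np.pad(m, ((0, 0), (0, fixed - m.shape[1]), (0, fixed - m.shape[2])))
    for m in variable_masks
]
merged = np.concatenate(padded, axis=0)
print(merged.shape)  # (0, 256, 256)
```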

NIKEmissa commented 7 months ago

Any progress on semi-supervised learning for instance segmentation (e.g. RTMDet)? Thanks.