EricFH / SOR

Implementation of "Salient Object Ranking with Position-Preserved Attention"
Apache License 2.0

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 92 but got size 60 for tensor number 1 in the list #6

Open · PineREN opened this issue 2 years ago

PineREN commented 2 years ago

Hello, when I run the model you provided with ppa.yaml, it reports the error below. Am I using it incorrectly? Looking forward to your answer.

[03/30 15:50:53 detectron2]: Arguments: Namespace(config_file='configs/sor/ppa.yaml', input='goutu_test', output='output', confidence_threshold=0.4, opts=[])
WARNING [03/30 15:50:53 d2.config.compat]: Config 'configs/sor/ppa.yaml' has no VERSION. Assuming it to be compatible with latest v2.
The checkpoint state_dict contains keys that are not used by the model:
  pixel_mean
  pixel_std
  0%|          | 0/7 [00:00<?, ?it/s]C:\Users\Pine\.conda\envs\SOR\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ..\aten\src\ATen\native\TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
  0%|          | 0/7 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "C:\apine\projects\SOR\sor_ppa\vis\demo.py", line 78, in <module>
    predictions, visualized_output = demo.run_on_image(img)
  File "C:\apine\projects\SOR\sor_ppa\vis\predictor.py", line 29, in run_on_image
    predictions = self.predictor(image)
  File "C:\Users\Pine\.conda\envs\SOR\lib\site-packages\detectron2\engine\defaults.py", line 317, in __call__
    predictions = self.model([inputs])[0]
  File "C:\Users\Pine\.conda\envs\SOR\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Pine\.conda\envs\SOR\lib\site-packages\detectron2\modeling\meta_arch\rcnn.py", line 146, in forward
    return self.inference(batched_inputs)
  File "C:\Users\Pine\.conda\envs\SOR\lib\site-packages\detectron2\modeling\meta_arch\rcnn.py", line 209, in inference
    results, _ = self.roi_heads(images, features, proposals, None)
  File "C:\Users\Pine\.conda\envs\SOR\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\apine\projects\SOR\sor_ppa\centermask\modeling\centermask\center_heads.py", line 458, in forward
    pred_instances = self.forward_with_given_boxes(features, proposals)
  File "C:\apine\projects\SOR\sor_ppa\centermask\modeling\centermask\center_heads.py", line 487, in forward_with_given_boxes
    instances, mask_features, pos = self._forward_mask(features, instances)
  File "C:\apine\projects\SOR\sor_ppa\centermask\modeling\centermask\center_heads.py", line 534, in _forward_mask
    features_.append(torch.cat((feature, y, x), dim=1))
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 92 but got size 60 for tensor number 1 in the list.
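
For reference, this is a plain torch.cat shape mismatch: the positional grids concatenated onto the mask features in `_forward_mask` have a different spatial size than the feature map. Below is a minimal, hypothetical reproduction; the sizes 92 and 60 are copied from the traceback, and the interpolation "fix" at the end is only a guess at a workaround, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

# Feature map for one image, shaped (N, C, H, W).
feature = torch.randn(1, 256, 92, 92)

# Coordinate grids built for a different spatial size
# (hypothetical values matching the numbers in the traceback).
ys = torch.linspace(-1, 1, 60)
xs = torch.linspace(-1, 1, 60)
y, x = torch.meshgrid(ys, xs, indexing="ij")
y = y[None, None]  # (1, 1, 60, 60)
x = x[None, None]  # (1, 1, 60, 60)

# torch.cat along dim=1 requires every other dimension to match, hence:
# "Expected size 92 but got size 60 for tensor number 1 in the list".
try:
    torch.cat((feature, y, x), dim=1)
except RuntimeError as e:
    print(e)

# One possible workaround: resize the grids to the feature's spatial size
# before concatenating (a guess, not the repository's code).
y = F.interpolate(y, size=feature.shape[-2:], mode="bilinear", align_corners=True)
x = F.interpolate(x, size=feature.shape[-2:], mode="bilinear", align_corners=True)
combined = torch.cat((feature, y, x), dim=1)
print(combined.shape)  # torch.Size([1, 258, 92, 92])
```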
PineREN commented 2 years ago

Hello, is there a requirement on the aspect ratio of the input image? I found an aspect ratio that does not trigger the error, so I resized my images to that ratio, but the results were not ideal. I can work around this by padding the image with black borders, e.g. with the sketch below.
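
A rough sketch of that padding workaround, assuming OpenCV; the target ratio and file paths are placeholders, not values from the repository:

```python
import cv2

def pad_to_ratio(img, target_ratio=4 / 3, value=(0, 0, 0)):
    """Pad an image with black borders so that width / height == target_ratio.

    target_ratio is a placeholder; substitute whatever ratio avoided the error.
    """
    h, w = img.shape[:2]
    if w / h < target_ratio:
        # Too narrow: add black columns on the right.
        new_w = int(round(h * target_ratio))
        return cv2.copyMakeBorder(img, 0, 0, 0, new_w - w,
                                  cv2.BORDER_CONSTANT, value=value)
    else:
        # Too wide: add black rows at the bottom.
        new_h = int(round(w / target_ratio))
        return cv2.copyMakeBorder(img, 0, 0, new_h - h, 0,
                                  cv2.BORDER_CONSTANT, value=value)

img = cv2.imread("goutu_test/example.jpg")  # hypothetical input path
padded = pad_to_ratio(img)
cv2.imwrite("output/example_padded.jpg", padded)
```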

EricFH commented 2 years ago

The input size is not fixed: every proposal is pooled to the same size by RoI pooling, no matter what the input shape is, so neither the scale nor the aspect ratio is restricted.
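
To illustrate that point, here is a small sketch using torchvision's roi_align: proposals from feature maps of different spatial sizes are all pooled to the same fixed output size. The feature sizes and boxes are made-up examples, not the model's actual values.

```python
import torch
from torchvision.ops import roi_align

# Two feature maps with different spatial sizes (as from differently sized images).
feat_a = torch.randn(1, 256, 92, 92)
feat_b = torch.randn(1, 256, 60, 80)

# One box per image in (batch_index, x1, y1, x2, y2) format.
boxes_a = torch.tensor([[0.0, 10.0, 10.0, 50.0, 70.0]])
boxes_b = torch.tensor([[0.0,  5.0,  5.0, 40.0, 30.0]])

# Regardless of the feature-map size, every proposal is pooled to 14x14.
pooled_a = roi_align(feat_a, boxes_a, output_size=(14, 14))
pooled_b = roi_align(feat_b, boxes_b, output_size=(14, 14))
print(pooled_a.shape, pooled_b.shape)  # both torch.Size([1, 256, 14, 14])
```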