w1oves / Rein

[CVPR 2024] Official implementation of "Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation"
https://zxwei.site/rein
GNU General Public License v3.0

About the problems encountered when using the GTA5+SYNTHIA configuration file #35

Closed: xiaoxia0722 closed this issue 4 months ago

xiaoxia0722 commented 4 months ago

Hello, I got an error when using the GTA5+SYNTHIA configuration file from Releases:

Traceback (most recent call last):
  File "/workspace/Rein/tools/train.py", line 116, in <module>
    main()
  File "/workspace/Rein/tools/train.py", line 112, in main
    runner.train()
  File "/opt/conda/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/opt/conda/lib/python3.10/site-packages/mmengine/runner/loops.py", line 286, in run
    self.run_iter(data_batch)
  File "/opt/conda/lib/python3.10/site-packages/mmengine/runner/loops.py", line 309, in run_iter
    outputs = self.runner.model.train_step(
  File "/opt/conda/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/opt/conda/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
    results = self(**data, mode=mode)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/mmseg/models/segmentors/base.py", line 94, in forward
    return self.loss(inputs, data_samples)
  File "/opt/conda/lib/python3.10/site-packages/mmseg/models/segmentors/encoder_decoder.py", line 178, in loss
    loss_decode = self._decode_head_forward_train(x, data_samples)
  File "/opt/conda/lib/python3.10/site-packages/mmseg/models/segmentors/encoder_decoder.py", line 139, in _decode_head_forward_train
    loss_decode = self.decode_head.loss(inputs, data_samples,
  File "/opt/conda/lib/python3.10/site-packages/mmseg/models/decode_heads/mask2former_head.py", line 126, in loss
    losses = self.loss_by_feat(all_cls_scores, all_mask_preds,
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/dense_heads/maskformer_head.py", line 348, in loss_by_feat
    losses_cls, losses_mask, losses_dice = multi_apply(
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/dense_heads/mask2former_head.py", line 273, in _loss_by_feat_single
    avg_factor) = self.get_targets(cls_scores_list, mask_preds_list,
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/dense_heads/maskformer_head.py", line 237, in get_targets
    results = multi_apply(self._get_targets_single, cls_scores_list,
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/opt/conda/lib/python3.10/site-packages/mmdet/models/dense_heads/mask2former_head.py", line 213, in _get_targets_single
    gt_points_masks = point_sample(
  File "/opt/conda/lib/python3.10/site-packages/mmcv/ops/point_sample.py", line 270, in point_sample
    output = F.grid_sample(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 4244, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected grid to have size 3 in last dimension, but got grid with sizes [5, 12544, 1, 2]
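Editor's note: the grid shape in the error, [5, 12544, 1, 2], is the normal 2-D point grid that Mask2Former uses to sample loss points, so F.grid_sample most likely received a 5-D mask tensor; that can happen if the ground-truth labels keep an extra channel axis (e.g. multi-channel SYNTHIA label PNGs loaded as-is), which becomes (N, 1, C, H, W) after the head's unsqueeze. The following minimal PyTorch sketch only illustrates the shape rule; the tensor names and the 512x512 size are made up, and the cause attribution is a guess, not confirmed by the repo.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: 5 ground-truth masks, 12544 sampled points per mask.
masks_4d = torch.rand(5, 1, 512, 512)             # (N, C, H, W) -> 2-D sampling
grid_2d = torch.rand(5, 12544, 1, 2) * 2 - 1      # last dim 2 = (x, y), as in the traceback
out = F.grid_sample(masks_4d, grid_2d, align_corners=False)   # OK: (5, 1, 12544, 1)

# If the masks carry an extra channel axis (e.g. a 3-channel label image),
# unsqueezing them in the head yields a 5-D tensor, and grid_sample then
# expects a volumetric grid with last dimension 3 -- the RuntimeError above.
masks_5d = masks_4d.unsqueeze(1).expand(5, 1, 3, 512, 512)    # (N, 1, C, H, W)
# F.grid_sample(masks_5d, grid_2d)  # raises: expected grid to have size 3 in last dimension
```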
w1oves commented 4 months ago

I don't know why. I will try to solve it.

xiaoxia0722 commented 4 months ago

Problem solved: the SYNTHIA dataset needs to be preprocessed with DDB before it can be used normally.
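Editor's note: for anyone hitting the same error, the preprocessing essentially remaps the raw SYNTHIA GT/LABELS PNGs (16-bit, with the class id stored in the first channel) to single-channel Cityscapes-trainID maps. The sketch below shows that kind of conversion under those assumptions; the id table is deliberately partial and illustrative, and the function name is invented, so take the full mapping and the exact steps from the DDB preprocessing script rather than from here.

```python
import numpy as np
import imageio.v2 as imageio   # FreeImage plugin needed for 16-bit SYNTHIA PNGs
from PIL import Image

# Partial, illustrative mapping only -- use the table in DDB's conversion script.
SYNTHIA_ID_TO_TRAINID = {3: 0, 4: 1, 2: 2}   # road, sidewalk, building

def convert_synthia_label(src_path: str, dst_path: str) -> None:
    # The raw label file stores the class id in its first channel.
    raw = imageio.imread(src_path, format="PNG-FI")[:, :, 0]
    out = np.full(raw.shape, 255, dtype=np.uint8)   # 255 = ignore index
    for syn_id, train_id in SYNTHIA_ID_TO_TRAINID.items():
        out[raw == syn_id] = train_id
    Image.fromarray(out).save(dst_path)             # single-channel trainID PNG
```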

w1oves commented 4 months ago

Thank you.