open-mmlab / mmcv

OpenMMLab Computer Vision Foundation
https://mmcv.readthedocs.io/en/latest/
Apache License 2.0

[Bug] no "self_attn_cfg" attribute in BaseTransformerLayer class #3182

Closed guillaumeferreol closed 1 month ago

guillaumeferreol commented 1 month ago


Environment

- torch 2.0.1 (pip)
- CUDA 11.8
- mmcv 2.0.0
- mmdet 3.3.0

Reproduces the problem - code sample

This is a snippet of the config file used in the command below; you can use it to create instances of the relevant classes and reproduce the issue (see the build sketch after the config).

The imports for those classes are the following:

    from mmdet.models.relation_heads.mask2former_relation_head import Mask2FormerRelationHead
    from mmdet.models.layers.msdeformattn_pixel_decoder import MSDeformAttnPixelDecoder
    from mmdet.models.layers.transformer.detr_layers import DetrTransformerEncoder
    from mmcv.cnn.bricks.transformer import BaseTransformerLayer

panoptic_head=dict(
          type='Mask2FormerRelationHead',
          in_channels=[256, 512, 1024, 2048],  # pass to pixel_decoder inside
          strides=[4, 8, 16, 32],
          feat_channels=256,
          out_channels=256,
          num_things_classes=num_things_classes,
          num_stuff_classes=num_stuff_classes,
          num_queries=100,
          num_transformer_feat_level=3,

          pixel_decoder=dict(
              type='MSDeformAttnPixelDecoder',
              out_channels=3,
              norm_cfg=dict(type='GN', num_groups=32),
              act_cfg=dict(type='ReLU'),
              encoder=dict(
                  type='DetrTransformerEncoder',
                  num_layers=6,
                  layer_cfg=dict(
                      type='BaseTransformerLayer',
                      attn_cfgs=dict(
                          type='MultiScaleDeformableAttention',
                          embed_dims=256,
                          num_heads=8,
                          num_levels=3,
                          num_points=4,
                          im2col_step=64,
                          dropout=0.0,
                          batch_first=False,
                          norm_cfg=None,
                          init_cfg=None, 
                          value_proj_ratio=1.0),
                      ffn_cfgs=dict(
                          type='FFN',
                          embed_dims=256,
                          feedforward_channels=1024,
                          num_fcs=2,
                          ffn_drop=0.0,
                          dropout_layer=None,
                          add_identity=True,
                          init_cfg=None,
                          layer_scale_init_value=0.0,
                          act_cfg=dict(type='ReLU', inplace=True)),
                      operation_order=('self_attn', 'norm', 'ffn', 'norm'),
                      init_cfg=None),
                  num_cp=-1, 
                  init_cfg=None),
              positional_encoding=dict(
                  type='SinePositionalEncoding', num_feats=128, normalize=True),
              init_cfg=None),
          enforce_decoder_input_project=False,
          positional_encoding=dict(
              type='SinePositionalEncoding', num_feats=128, normalize=True),
          transformer_decoder=dict(
              type='DetrTransformerDecoder',
              return_intermediate=True,
              num_layers=9,
              layer_cfg=dict(
                  type='DetrTransformerDecoderLayer',
                  self_attn_cfg=dict(
                      type='MultiheadAttention',
                      embed_dims=256,
                      num_heads=8,
                      attn_drop=0.0,
                      proj_drop=0.0,
                      dropout_layer=None,
                      init_cfg=None,
                      batch_first=False),
                  cross_attn_cfg=dict(
                      type='MultiheadAttention',
                      embed_dims=256,
                      num_heads=8,
                      attn_drop=0.0,
                      proj_drop=0.0,
                      dropout_layer=None,
                      init_cfg=None,
                      batch_first=True),
                  ffn_cfg=dict(
                      type='FFN',
                      embed_dims=256,
                      feedforward_channels=2048,
                      num_fcs=2,
                      act_cfg=dict(type='ReLU', inplace=True),
                      ffn_drop=0.0,
                      dropout_layer=None,
                      add_identity=True, 
                      init_cfg=None,
                      layer_scale_init_value=0.0), 
                  norm_cfg=dict(
                      type='LN'),
                  init_cfg=None,
                  feedforward_channels=2048,
                  operation_order=('cross_attn', 'norm', 'self_attn', 'norm',
                                   'ffn', 'norm')),
              post_norm_cfg=dict(
                  type='LN'),
              init_cfg=None),

          loss_cls=dict(
              type='CrossEntropyLoss',
              use_sigmoid=False,
              loss_weight=2.0,
              reduction='mean',
              class_weight=[1.0] * num_object_classes + [0.1]),
          loss_mask=dict(
              type='CrossEntropyLoss',
              use_sigmoid=True,
              reduction='mean',
              loss_weight=5.0),
          loss_dice=dict(
              type='DiceLoss',
              use_sigmoid=True,
              activate=True,
              reduction='mean',
              naive_dice=True,
              eps=1.0,
              loss_weight=5.0),
          train_cfg=dict(
              use_pan_seg_losses=False)
      )
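
A quicker way to reproduce than the full training command below is to build just the pixel decoder from the config. A minimal sketch, assuming the snippet above sits under model.panoptic_head in configs/psg/baseline_v2_r50.py (adjust the path to your setup):

    from mmengine.config import Config
    from mmdet.registry import MODELS
    from mmdet.utils import register_all_modules

    register_all_modules()  # populate mmdet's registries

    cfg = Config.fromfile('configs/psg/baseline_v2_r50.py')
    # Building the pixel decoder alone already hits the failing attribute access
    # (msdeformattn_pixel_decoder.py reads encoder.layer_cfg.self_attn_cfg.num_levels):
    MODELS.build(cfg.model.panoptic_head.pixel_decoder)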

Reproduces the problem - command or script

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH python3 -m torch.distributed.launch --nproc_per_node=1 --master_port=8540 tools/train.py configs/psg/baseline_v2_r50.py --resume --no-validate --launcher pytorch

Reproduces the problem - error message

Traceback (most recent call last):
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/config/config.py", line 109, in __getattr__
    value = super().__getattr__(name)
  File "/home/gferreol/.local/lib/python3.10/site-packages/addict/addict.py", line 67, in __getattr__
    return self.__getitem__(item)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/config/config.py", line 138, in __getitem__
    return self.build_lazy(super().__getitem__(key))
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/config/config.py", line 105, in __missing__
    raise KeyError(name)
KeyError: 'self_attn_cfg'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/gferreol/project/tools/train.py", line 340, in <module>
    main()
  File "/home/gferreol/project/tools/train.py", line 329, in main
    runner = Runner.from_cfg(cfg)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 462, in from_cfg
    runner = cls(
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 429, in __init__
    self.model = self.build_model(model)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/runner/runner.py", line 836, in build_model
    model = MODELS.build(model)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/gferreol/project/kings_sgg/models/detectors/mask2former_relation_v2.py", line 66, in __init__
    super(Mask2FormerRelationV2, self).__init__(
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmdet/models/detectors/mask2former.py", line 22, in __init__
    super().__init__(
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmdet/models/detectors/maskformer.py", line 36, in __init__
    self.panoptic_head = MODELS.build(panoptic_head)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/gferreol/project/kings_sgg/models/relation_heads/mask2former_relation_head.py", line 39, in __init__
    super(Mask2FormerRelationHead, self).__init__(
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmdet/models/dense_heads/mask2former_head.py", line 113, in __init__
    self.pixel_decoder = MODELS.build(pixel_decoder)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmdet/models/layers/msdeformattn_pixel_decoder.py", line 62, in __init__
    encoder.layer_cfg.self_attn_cfg.num_levels
  File "/home/gferreol/.local/lib/python3.10/site-packages/mmengine/config/config.py", line 113, in __getattr__
    raise AttributeError(f"'{self.__class__.__name__}' object has no "
AttributeError: 'ConfigDict' object has no attribute 'self_attn_cfg'

Additional information

Hello,

I created this issue because I am trying to port the following project to newer versions of mmcv and mmdet: https://github.com/franciszzj/VLPrompt/tree/main

In this project, they define a class that inherits from Mask2Former (mmdet). This class takes a panoptic head (Mask2FormerRelationHead) as input, which in turn takes a pixel decoder.

The pixel decoder (MSDeformAttnPixelDecoder from mmdet) takes an encoder (DetrTransformerEncoder from mmdet), whose layer configuration (BaseTransformerLayer from mmcv) has an "attn_cfgs" attribute holding the attention configuration, used for both self- and cross-attention.
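
To make the mismatch concrete, here is a sketch contrasting the two layer APIs (signatures paraphrased from mmcv 2.0.0 and mmdet 3.3.0, with defaults trimmed; check the installed versions for the exact arguments):

    from mmcv.cnn.bricks.transformer import BaseTransformerLayer
    from mmdet.models.layers.transformer.detr_layers import DetrTransformerEncoderLayer

    # mmcv's BaseTransformerLayer groups every attention module under `attn_cfgs`;
    # `operation_order` decides which entries act as self- vs cross-attention:
    mmcv_layer = BaseTransformerLayer(
        attn_cfgs=dict(type='MultiheadAttention', embed_dims=256, num_heads=8),
        ffn_cfgs=dict(type='FFN', embed_dims=256, feedforward_channels=1024),
        operation_order=('self_attn', 'norm', 'ffn', 'norm'))

    # mmdet 3.x's encoder layer instead exposes a dedicated `self_attn_cfg`, which is
    # exactly the key MSDeformAttnPixelDecoder dereferences (layer_cfg.self_attn_cfg.num_levels):
    mmdet_layer = DetrTransformerEncoderLayer(
        self_attn_cfg=dict(embed_dims=256, num_heads=8),
        ffn_cfg=dict(embed_dims=256, feedforward_channels=1024))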

My issue arises when creating an instance of MSDeformAttnPixelDecoder (1) or Mask2FormerRelationHead (2) (which inherits from Mask2FormerHead), as they contain the following lines, respectively. I have only tested these two classes since they are the ones used in the project, but other classes may have the same issue.

This line is inside the pixel decoder (1):

    self.num_encoder_levels = encoder.layer_cfg.self_attn_cfg.num_levels

This line is inside the panoptic head (2):

    assert pixel_decoder.encoder.layer_cfg.self_attn_cfg.num_levels == num_transformer_feat_level

However, as stated above, when BaseTransformerLayer is used as the layer_cfg, this attribute does not exist, since attn_cfgs is used for both self- and cross-attention.
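
For reference, the encoder block in mmdet 3.x's bundled Mask2Former config spells the layer out with self_attn_cfg/ffn_cfg keys and no type at all, which is the shape MSDeformAttnPixelDecoder reads. A sketch of that shape (adapted from memory of mmdet 3.3.0's mask2former config; exact keys and defaults should be double-checked against the installed version):

    encoder=dict(
        num_layers=6,
        layer_cfg=dict(  # keys read directly by the encoder layer and the pixel decoder
            self_attn_cfg=dict(  # MultiScaleDeformableAttention
                embed_dims=256,
                num_heads=8,
                num_levels=3,
                num_points=4,
                dropout=0.0,
                batch_first=True),
            ffn_cfg=dict(
                embed_dims=256,
                feedforward_channels=1024,
                num_fcs=2,
                ffn_drop=0.0,
                act_cfg=dict(type='ReLU', inplace=True))),
        init_cfg=None),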

I don't know whether this issue belongs here or on the mmdet project, and I hope I have provided enough details. Any help would be greatly appreciated!