chenller / mmseg-extension

mmsegmentation extension library containing the latest paper code.
Apache License 2.0

AssertionError: mmengine does not support to load mmsegextension config #4

Open antopost opened 1 day ago

antopost commented 1 day ago

I am trying to load just the model for training to incorporate it into my custom training loop. I tried this:

from mmengine.config import Config
from mmseg.registry import MODELS
from mmseg.utils import register_all_modules
import mmsegext

register_all_modules()

config_file = 'mmseg-extension/configs/vit_adapter/mask2former_beitv2_adapter_large_896_80k_ade20k_ss.py'
cfg = Config.fromfile(config_file)

But I got the following error: AssertionError: mmengine does not support to load mmsegextension config.

Is there a recommended way to load just the model?

Edit: I also get the same error when running: python tools/train.py mmseg-extension/configs/vit_adapter/mask2former_beit_adapter_large_640_160k_ade20k_ss.py
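
For reference, what I am ultimately trying to do is roughly the following (a sketch, assuming the config parses and that importing mmsegext registers the extension modules):

from mmengine.config import Config
from mmseg.registry import MODELS
from mmseg.utils import register_all_modules
import mmsegext  # assumed to register the extension's modules with mmengine

register_all_modules()

config_file = 'mmseg-extension/configs/vit_adapter/mask2former_beitv2_adapter_large_896_80k_ade20k_ss.py'
cfg = Config.fromfile(config_file)
model = MODELS.build(cfg.model)  # build only the segmentor, without a Runner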

chenller commented 3 hours ago

Modify the second line of the config file to " 'mmsegext::_base_/datasets/ade20k_512_tta_without_ratio.py', ".

chenller commented 3 hours ago

'mmsegext::_base_/datasets/ade20k_512_tta_without_ratio.py',
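
For context, a sketch of how the top of that config would then look (only the second line changes; the other inherited configs stay as they are):

_base_ = [
    'mmsegext::_base_/datasets/ade20k_512_tta_without_ratio.py',  # now using the mmsegext:: prefix
    # ... other inherited configs unchanged ...
]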

antopost commented 3 hours ago

I modified the line as described. Now I get a different error. I will post the full output here:

python3 mmseg-extension/tools/train.py /home/anba/catkin_ws/src/tas_dev/dev/anba/Mask2Former/mmseg-extension/configs/vit_adapter/mask2former_beit_adapter_large_896_80k_ade20k_ss.py

/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_msda-1.0-py3.9-linux-x86_64.egg/mmsegextlib_msda/functions/ms_deform_attn_func.py:22: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @custom_fwd(cast_inputs=torch.float32)
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_msda-1.0-py3.9-linux-x86_64.egg/mmsegextlib_msda/functions/ms_deform_attn_func.py:39: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_dcnv3-1.0-py3.9-linux-x86_64.egg/mmsegextlib_dcnv3/functions/dcnv3_func.py:22: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_dcnv3-1.0-py3.9-linux-x86_64.egg/mmsegextlib_dcnv3/functions/dcnv3_func.py:51: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_dcnv3-1.0-py3.9-linux-x86_64.egg/mmsegextlib_dcnv3/modules/dcnv3.py:20: UserWarning: Now, we support DCNv4 in InternImage.
  warnings.warn('Now, we support DCNv4 in InternImage.')
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/torch/cuda/__init__.py:654: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_dcnv4/functions/dcnv4_func.py:65: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(
/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmsegextlib_dcnv4/functions/dcnv4_func.py:111: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
09/24 12:37:04 - mmengine - INFO - 
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0]
    CUDA available: True
    MUSA available: False
    numpy_random_seed: 1611889984
    GPU 0: NVIDIA RTX A4000
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 12.1, V12.1.105
    GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
    PyTorch: 2.4.1+cu121
    PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1  (built against CUDA 12.4)
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.4.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

    TorchVision: 0.19.1+cu121
    OpenCV: 4.8.1
    MMEngine: 0.10.5

Runtime environment:
    cudnn_benchmark: True
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: 1611889984
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------

09/24 12:37:05 - mmengine - INFO - Config:
crop_size = (
    896,
    896,
)
data_preprocessor = dict(
    bgr_to_rgb=True,
    mean=[
        123.675,
        116.28,
        103.53,
    ],
    pad_val=0,
    seg_pad_val=255,
    size=(
        896,
        896,
    ),
    std=[
        58.395,
        57.12,
        57.375,
    ],
    type='SegDataPreProcessor')
data_root = '/home/yansu/dataset/mmseg/ADEChallengeData2016/'
dataset_type = 'ADE20KDataset'
default_hooks = dict(
    checkpoint=dict(
        _scope_='mmseg', by_epoch=False, interval=8000, type='CheckpointHook'),
    logger=dict(
        _scope_='mmseg',
        interval=50,
        log_metric_by_epoch=False,
        type='LoggerHook'),
    param_scheduler=dict(_scope_='mmseg', type='ParamSchedulerHook'),
    sampler_seed=dict(_scope_='mmseg', type='DistSamplerSeedHook'),
    timer=dict(_scope_='mmseg', type='IterTimerHook'),
    visualization=dict(_scope_='mmseg', type='SegVisualizationHook'))
default_scope = 'mmseg'
env_cfg = dict(
    cudnn_benchmark=True,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
model = dict(
    backbone=dict(
        _scope_='mmsegextension',
        cffn_ratio=0.25,
        conv_inplane=64,
        deform_num_heads=16,
        deform_ratio=0.5,
        depth=24,
        drop_path_rate=0.3,
        embed_dim=1024,
        img_size=896,
        init_values=1e-06,
        interaction_indexes=[
            [
                0,
                5,
            ],
            [
                6,
                11,
            ],
            [
                12,
                17,
            ],
            [
                18,
                23,
            ],
        ],
        mlp_ratio=4,
        n_points=4,
        num_heads=16,
        patch_size=16,
        qkv_bias=True,
        type='BEiTAdapter',
        use_abs_pos_emb=False,
        use_rel_pos_bias=True,
        with_cp=True),
    decode_head=dict(
        _scope_='mmsegextension',
        enforce_decoder_input_project=False,
        feat_channels=1024,
        in_channels=[
            1024,
            1024,
            1024,
            1024,
        ],
        in_index=[
            0,
            1,
            2,
            3,
        ],
        loss_cls=dict(
            class_weight=[1.0] * 150 + [0.1],  # 151 weights: 150 entries of 1.0 followed by a final 0.1
            loss_weight=2.0,
            reduction='mean',
            type='CrossEntropyLoss',
            use_sigmoid=False),
        loss_dice=dict(
            activate=True,
            eps=1.0,
            loss_weight=5.0,
            naive_dice=True,
            reduction='mean',
            type='DiceLoss',
            use_sigmoid=True),
        loss_mask=dict(
            loss_weight=5.0,
            reduction='mean',
            type='CrossEntropyLoss',
            use_sigmoid=True),
        num_classes=150,
        num_queries=200,
        num_stuff_classes=50,
        num_things_classes=100,
        num_transformer_feat_level=3,
        out_channels=1024,
        pixel_decoder=dict(
            act_cfg=dict(type='ReLU'),
            encoder=dict(
                init_cfg=None,
                num_layers=6,
                transformerlayers=dict(
                    attn_cfgs=dict(
                        batch_first=False,
                        dropout=0.0,
                        embed_dims=1024,
                        im2col_step=64,
                        init_cfg=None,
                        norm_cfg=None,
                        num_heads=32,
                        num_levels=3,
                        num_points=4,
                        type='MultiScaleDeformableAttention'),
                    ffn_cfgs=dict(
                        act_cfg=dict(inplace=True, type='ReLU'),
                        embed_dims=1024,
                        feedforward_channels=4096,
                        ffn_drop=0.0,
                        num_fcs=2,
                        type='FFN'),
                    operation_order=(
                        'self_attn',
                        'norm',
                        'ffn',
                        'norm',
                    ),
                    type='BaseTransformerLayer'),
                type='DetrTransformerEncoder'),
            init_cfg=None,
            norm_cfg=dict(num_groups=32, type='GN'),
            num_outs=3,
            positional_encoding=dict(
                _scope_='mmdet',
                normalize=True,
                num_feats=512,
                type='SinePositionalEncoding'),
            type='MSDeformAttnPixelDecoder'),
        positional_encoding=dict(
            _scope_='mmdet',
            normalize=True,
            num_feats=512,
            type='SinePositionalEncoding'),
        test_cfg=dict(
            crop_size=(
                896,
                896,
            ),
            filter_low_score=True,
            instance_on=True,
            iou_thr=0.8,
            max_per_image=100,
            mode='slide',
            panoptic_on=True,
            semantic_on=False,
            stride=(
                512,
                512,
            )),
        train_cfg=dict(
            assigner=dict(
                _scope_='mmdet',
                cls_cost=dict(type='ClassificationCost', weight=2.0),
                dice_cost=dict(
                    eps=1.0, pred_act=True, type='DiceCost', weight=5.0),
                mask_cost=dict(
                    type='CrossEntropyLossCost', use_sigmoid=True, weight=5.0),
                type='MaskHungarianAssigner'),
            importance_sample_ratio=0.75,
            num_points=12544,
            oversample_ratio=3.0,
            sampler=dict(_scope_='mmdet', type='MaskPseudoSampler')),
        transformer_decoder=dict(
            init_cfg=None,
            num_layers=9,
            return_intermediate=True,
            transformerlayers=dict(
                attn_cfgs=dict(
                    attn_drop=0.0,
                    batch_first=False,
                    dropout_layer=None,
                    embed_dims=1024,
                    num_heads=32,
                    proj_drop=0.0,
                    type='MultiheadAttention'),
                feedforward_channels=4096,
                ffn_cfgs=dict(
                    act_cfg=dict(inplace=True, type='ReLU'),
                    add_identity=True,
                    dropout_layer=None,
                    embed_dims=1024,
                    feedforward_channels=4096,
                    ffn_drop=0.0,
                    num_fcs=2),
                operation_order=(
                    'cross_attn',
                    'norm',
                    'self_attn',
                    'norm',
                    'ffn',
                    'norm',
                ),
                type='DetrTransformerDecoderLayer'),
            type='DetrTransformerDecoder'),
        type='AdapterMask2FormerHead'),
    test_cfg=dict(crop_size=(
        896,
        896,
    ), mode='slide', stride=(
        512,
        512,
    )),
    train_cfg=dict(),
    type='EncoderDecoder')
norm_cfg = dict(requires_grad=True, type='SyncBN')
num_classes = 150
optim_wrapper = dict(
    _scope_='mmseg',
    clip_grad=None,
    optimizer=dict(lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005),
    type='OptimWrapper')
optimizer = dict(
    _scope_='mmseg', lr=0.01, momentum=0.9, type='SGD', weight_decay=0.0005)
param_scheduler = [
    dict(
        _scope_='mmseg',
        begin=0,
        by_epoch=False,
        end=80000,
        eta_min=0.0001,
        power=0.9,
        type='PolyLR'),
]
resume = False
test_cfg = dict(_scope_='mmseg', type='TestLoop')
test_dataloader = dict(
    batch_size=1,
    dataset=dict(
        _scope_='mmseg',
        data_prefix=dict(
            img_path='images/validation',
            seg_map_path='annotations/validation'),
        data_root='/home/yansu/dataset/mmseg/ADEChallengeData2016/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(keep_ratio=True, scale=(
                3584,
                896,
            ), type='Resize'),
            dict(reduce_zero_label=True, type='LoadAnnotations'),
            dict(type='PackSegInputs'),
        ],
        type='ADE20KDataset'),
    num_workers=4,
    persistent_workers=True,
    sampler=dict(_scope_='mmsegext', shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
    _scope_='mmseg', iou_metrics=[
        'mIoU',
    ], type='IoUMetric')
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(keep_ratio=True, scale=(
        3584,
        896,
    ), type='Resize'),
    dict(reduce_zero_label=True, type='LoadAnnotations'),
    dict(type='PackSegInputs'),
]
train_cfg = dict(
    _scope_='mmseg',
    max_iters=80000,
    type='IterBasedTrainLoop',
    val_interval=8000)
train_dataloader = dict(
    batch_size=2,
    dataset=dict(
        _scope_='mmseg',
        data_prefix=dict(
            img_path='images/training', seg_map_path='annotations/training'),
        data_root='/home/yansu/dataset/mmseg/ADEChallengeData2016/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(reduce_zero_label=True, type='LoadAnnotations'),
            dict(
                keep_ratio=True,
                ratio_range=(
                    0.5,
                    2.0,
                ),
                scale=(
                    3584,
                    896,
                ),
                type='RandomResize'),
            dict(
                cat_max_ratio=0.75, crop_size=(
                    896,
                    896,
                ), type='RandomCrop'),
            dict(prob=0.5, type='RandomFlip'),
            dict(type='PhotoMetricDistortion'),
            dict(type='PackSegInputs'),
        ],
        type='ADE20KDataset'),
    num_workers=4,
    persistent_workers=True,
    sampler=dict(_scope_='mmsegext', shuffle=True, type='InfiniteSampler'))
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(reduce_zero_label=True, type='LoadAnnotations'),
    dict(
        keep_ratio=True,
        ratio_range=(
            0.5,
            2.0,
        ),
        scale=(
            3584,
            896,
        ),
        type='RandomResize'),
    dict(cat_max_ratio=0.75, crop_size=(
        896,
        896,
    ), type='RandomCrop'),
    dict(prob=0.5, type='RandomFlip'),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs'),
]
tta_model = dict(_scope_='mmseg', type='SegTTAModel')
tta_pipeline = [
    dict(backend_args=None, type='LoadImageFromFile'),
    dict(
        _scope_='mmseg',
        transforms=[
            [
                dict(keep_ratio=True, scale=(
                    3584,
                    896,
                ), type='Resize'),
            ],
            [
                dict(direction='horizontal', prob=0.0, type='RandomFlip'),
                dict(direction='horizontal', prob=1.0, type='RandomFlip'),
            ],
            [
                dict(type='LoadAnnotations'),
            ],
            [
                dict(type='PackSegInputs'),
            ],
        ],
        type='TestTimeAug'),
]
val_cfg = dict(_scope_='mmseg', type='ValLoop')
val_dataloader = dict(
    batch_size=1,
    dataset=dict(
        _scope_='mmseg',
        data_prefix=dict(
            img_path='images/validation',
            seg_map_path='annotations/validation'),
        data_root='/home/yansu/dataset/mmseg/ADEChallengeData2016/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(keep_ratio=True, scale=(
                3584,
                896,
            ), type='Resize'),
            dict(reduce_zero_label=True, type='LoadAnnotations'),
            dict(type='PackSegInputs'),
        ],
        type='ADE20KDataset'),
    num_workers=4,
    persistent_workers=True,
    sampler=dict(_scope_='mmsegext', shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
    _scope_='mmseg', iou_metrics=[
        'mIoU',
    ], type='IoUMetric')
vis_backends = [
    dict(_scope_='mmseg', type='LocalVisBackend'),
]
visualizer = dict(
    _scope_='mmseg',
    name='visualizer',
    type='SegLocalVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
    ])
work_dir = './work_dirs/mask2former_beit_adapter_large_896_80k_ade20k_ss'

09/24 12:37:05 - mmengine - WARNING - Failed to import `mmsegextension.registry` make sure the registry.py exists in `mmsegextension` package.
09/24 12:37:05 - mmengine - WARNING - Failed to search registry with scope "mmsegextension" in the "model" registry tree. As a workaround, the current "model" registry in "mmseg" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmsegextension" is a correct scope, or whether the registry is initialized.
Traceback (most recent call last):
  File "/home/anba/catkin_ws/src/tas_dev/dev/anba/Mask2Former/mmseg-extension/tools/train.py", line 106, in <module>
    main()
  File "/home/anba/catkin_ws/src/tas_dev/dev/anba/Mask2Former/mmseg-extension/tools/train.py", line 95, in main
    runner = Runner.from_cfg(cfg)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/runner/runner.py", line 462, in from_cfg
    runner = cls(
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/runner/runner.py", line 429, in __init__
    self.model = self.build_model(model)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/runner/runner.py", line 836, in build_model
    model = MODELS.build(model)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmseg/models/segmentors/encoder_decoder.py", line 89, in __init__
    self.backbone = MODELS.build(backbone)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/anba/anaconda3/envs/mmsegext/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 100, in build_from_cfg
    raise KeyError(
KeyError: 'BEiTAdapter is not in the mmseg::model registry. Please check whether the value of `BEiTAdapter` is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module'
chenller commented 3 hours ago

Modify line 22 of the config file to " type='ext-BEiTAdapter', _scope_='mmsegextension', ".
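
In other words, the backbone entry in the config becomes something like this (sketch; only the changed keys are shown):

backbone = dict(
    type='ext-BEiTAdapter',      # prefixed name registered by mmseg-extension
    _scope_='mmsegextension',    # resolve the type in the extension's registry scope
    # ... the remaining backbone settings stay exactly as before ...
)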

chenller commented 3 hours ago

Also set the dataset path (data_root) on line 198 of the config file to your local dataset directory.
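
That is, point data_root at your local copy of ADE20K, for example:

data_root = '/path/to/ADEChallengeData2016/'  # replace with your local ADE20K directory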