w1oves / Rein

[CVPR 2024] Official implementation of <Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation>
https://zxwei.site/rein
GNU General Public License v3.0

About GTA5+SYNTHIA config file #23

Closed seabearlmx closed 5 months ago

seabearlmx commented 5 months ago

Hello, how do I configure GTA5+SYNTHIA training? In the Release/source code/Rein-GTAV-Synthia/config/_base_/dataset/ directory you provided, I could not find a dg_gta_syn_xx.py config file. I tried to set the config:

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=False,
    pin_memory=False,
    sampler=dict(type="InfiniteSampler", shuffle=True),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{_base_.train_gta}},
            {{_base_.train_syn}}
        ],
    ),
)

but it failed to work.

w1oves commented 5 months ago

The config file in the release is used for the DG (Domain Generalization) setting, specifically training on GTAV+Synthia and generalizing to Cityscapes+BDD100k+Mapillary. You can verify this by examining the train_dataloader and val_dataloader within the file. Additionally, feel free to make direct modifications as needed.

seabearlmx commented 5 months ago

However, when I modify the config following that file, this error occurs: TypeError: ConcatDataset.__init__() got an unexpected keyword argument 'pipeline'

w1oves commented 5 months ago

Please provide your modified config.

seabearlmx commented 5 months ago

_base_ = [
    "./gta_512x512.py",
    "./syn_512x512.py",
    "./bdd100k_512x512.py",
    "./cityscapes_512x512.py",
    "./mapillary_512x512.py",
]
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=False,
    pin_memory=False,
    sampler=dict(type="InfiniteSampler", shuffle=True),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{_base_.train_gta}},
            {{_base_.train_syn}}
        ],
    ),
)
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=False,
    sampler=dict(type="DefaultSampler", shuffle=False),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{_base_.val_cityscapes}},
            {{_base_.val_bdd}},
            {{_base_.val_mapillary}},
        ],
    ),
)
test_dataloader = val_dataloader
val_evaluator = dict(
    type="DGIoUMetric", iou_metrics=["mIoU"], dataset_keys=["citys", "map", "bdd"]
)
test_evaluator = val_evaluator

w1oves commented 5 months ago

This config seems right. Please provide the entire generated config; it can be found in the corresponding work_dirs.

seabearlmx commented 5 months ago

config.zip

w1oves commented 5 months ago

The error occurs because you should not set the pipeline argument on ConcatDataset itself. The pipeline should be created or updated individually for each of the constituent datasets. From the provided config, it's unclear where you added this erroneous argument; please double-check it.
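
For illustration, here is a minimal sketch of how that argument most likely ends up on ConcatDataset. The dataset type names and paths below are placeholders, not the actual values from this repo's configs:

# Base config: a multi-source ConcatDataset, roughly like the one posted above.
train_dataloader = dict(
    batch_size=2,
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            dict(type="GTAVDataset", data_root="data/gta", pipeline=[]),        # placeholder names
            dict(type="SynthiaDataset", data_root="data/synthia", pipeline=[]),  # placeholder names
        ],
    ),
)

# A downstream config such as
#     train_dataloader = dict(batch_size=4, dataset=dict(pipeline=train_pipeline))
# is merged recursively into the base config, turning the inner dict into
#     dict(type="ConcatDataset", datasets=[...], pipeline=train_pipeline),
# and ConcatDataset.__init__() has no `pipeline` parameter, hence the TypeError above.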

seabearlmx commented 5 months ago

I commented out the line train_dataloader = dict(batch_size=4, dataset=dict(pipeline=train_pipeline)) in Rein/configs/dinov2/rein_dinov2_mask2former_512x512_bs1x4.py, and the multi-source setting now works well. Do you need to comment out this line in your code as well?

w1oves commented 5 months ago

This config is only for the single-source setting, and the pipeline should be added in each single-dataset config. Thank you for pointing that out!
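
A minimal sketch of where the pipeline override would go in each setting (dataset type names and paths are placeholders, not the repo's actual values):

train_pipeline = []  # placeholder for the actual augmentation pipeline

# Single-source (e.g. rein_dinov2_mask2former_512x512_bs1x4.py): `dataset` is a
# single dataset dict, so overriding its pipeline from the downstream config works.
train_dataloader = dict(batch_size=4, dataset=dict(pipeline=train_pipeline))

# Multi-source: set the pipeline inside each per-dataset config (e.g. in
# gta_512x512.py and syn_512x512.py) and leave the ConcatDataset wrapper untouched.
train_gta = dict(type="GTAVDataset", data_root="data/gta", pipeline=train_pipeline)         # placeholder
train_syn = dict(type="SynthiaDataset", data_root="data/synthia", pipeline=train_pipeline)  # placeholder
train_dataloader = dict(
    batch_size=2,
    dataset=dict(type="ConcatDataset", datasets=[train_gta, train_syn]),  # no `pipeline` here
)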