NabeelaKausar opened this issue 1 year ago
Following the stacktrace, it's in `data_preprocessor`. Only `size` or `size_divisor` should be set in your `data_preprocessor` config. You likely set both. Check your config in `work_dirs` to find out the final config.
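For context, the check that fires here is an exclusive-or, so setting both, or setting neither, fails the same way. A minimal, runnable sketch; `check_padding_args` is a hypothetical wrapper, while the assert itself is quoted from the traceback further down this thread:

```python
def check_padding_args(size, size_divisor):
    # Mirrors the assertion in mmseg/utils/misc.py::stack_batch:
    # the XOR (^) requires exactly one of the two to be set.
    assert (size is not None) ^ (size_divisor is not None), \
        'only one of size and size_divisor should be valid'


check_padding_args(size=(512, 512), size_divisor=None)   # OK
# check_padding_args(size=(512, 512), size_divisor=32)   # AssertionError: both set
# check_padding_args(size=None, size_divisor=None)       # AssertionError: neither set
```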
What is your model config? I met the same error because I hadn't set the `data_preprocessor` in the model.
I saw the same error when training the Cityscapes dataset using one of the pre-existing deeplabv3plus models. When I switched to a different model, the error did not appear, and training commenced successfully.
The error occurred when training with this config file: `deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py`. The error did not occur when training with this config file: `deeplabv3plus_r101-d8_4xb2-40k_cityscapes-512x1024.py`.
Probably neither `size` nor `size_divisor` is set. Set `size=crop_size` in the `data_preprocessor` dict.
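A sketch of what that can look like in the model config; the `SegDataPreProcessor` fields below follow the usual pattern from mmseg configs, and the mean/std values are the common ImageNet defaults (adjust them for your own data):

```python
crop_size = (512, 512)
data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[123.675, 116.28, 103.53],  # ImageNet defaults used across mmseg configs
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,
    seg_pad_val=255,
    size=crop_size)  # set `size`; leave `size_divisor` unset
model = dict(data_preprocessor=data_preprocessor)
```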
Remove the `data_preprocessor` configuration option from the configuration file.
This error is due to a code snippet in `mmseg/models/data_preprocessor.py`. If you set `size_divisor = None` there, the error will be resolved.
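For reference, the preprocessor simply forwards both attributes into `stack_batch`, which is where the assertion lives. A sketch of the call site, with argument names inferred from the traceback and the public config fields (they may differ slightly between mmseg versions):

```python
# Inside SegDataPreProcessor.forward (mmseg/models/data_preprocessor.py),
# roughly; both config fields flow straight into the asserting helper.
inputs, data_samples = stack_batch(
    inputs=inputs,
    data_samples=data_samples,
    size=self.size,                  # from data_preprocessor['size']
    size_divisor=self.size_divisor,  # from data_preprocessor['size_divisor']
    pad_val=self.pad_val,
    seg_pad_val=self.seg_pad_val)
```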
> Probably neither `size` nor `size_divisor` is set. Set `size=crop_size` in the `data_preprocessor` dict.
Thanks very much! It worked after I followed your suggestion!
I am trying to use mmsegmentation for segmentation of pathology images on my own new dataset. I have 512x512 patches of images and annotations. I am registering the four classes of my new dataset as given below and saving this file in `/mmsegmentation/mmseg/datasets`:

```python
import os.path as osp

from mmseg.datasets.basesegdataset import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class DrosDataset(BaseSegDataset):
    METAINFO = dict(
        classes=('background', 'tumor', 'immunecells', 'epi'))
```
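One detail that is easy to miss with this approach: a dataset class placed under `mmseg/datasets` usually also has to be exported from the package's `__init__.py`, otherwise the `@DATASETS.register_module()` decorator never runs on import. A sketch, assuming the file above is saved as `dros_dataset.py` (the file name is an assumption):

```python
# mmseg/datasets/__init__.py -- add alongside the existing imports
from .dros_dataset import DrosDataset  # assumed file name

# and append 'DrosDataset' to the existing __all__ list
__all__ = ['DrosDataset']  # in the real file, extend the existing list instead
```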
Further, I have made a file named `medicle_dataset.py` in `/home/mmsegmentation/configs/_base_/datasets` for the dataset settings:

```python
dataset_type = 'DrosDataset'
data_root = '/home/mmsegmentation/data'
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='RandomResize',
         scale=(2048, 1024),
         ratio_range=(0.5, 2.0),
         keep_ratio=True),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
    # The paste appears truncated here; a test pipeline normally also needs
    # LoadAnnotations and PackSegInputs to run.
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='/home/neda/mmsegmentation/data/img_dir/Train',
            seg_map_path='/home/neda/mmsegmentation/data/ann_dir/train'),
        # `pipeline=` and the closing parens were missing from the paste;
        # restored so the dict is well formed.
        pipeline=train_pipeline))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='/home/neda/mmsegmentation/data/img_dir/val_train',
            seg_map_path='/home/neda/mmsegmentation/data/ann_dir/train/val'),
        pipeline=test_pipeline))
test_dataloader = val_dataloader
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
```
### I am getting this error, please help

```
File "/home/neda/mmsegmentation/mmseg/models/data_preprocessor.py", line 123, in forward
    inputs, data_samples = stack_batch(
File "/home/neda/mmsegmentation/mmseg/utils/misc.py", line 65, in stack_batch
    assert (size is not None) ^ (size_divisor is not None), \
AssertionError: only one of size and size_divisor should be valid
```
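The config above never defines a `data_preprocessor`, so, as noted earlier in the thread, neither `size` nor `size_divisor` ends up set. A minimal sketch of a top-level config that fixes this by setting `size=crop_size`; the `_base_` file names are assumptions for illustration and must match files that actually exist in your `configs/_base_/` directory:

```python
# Hypothetical training config; adjust the _base_ paths to real files.
_base_ = [
    '../_base_/models/deeplabv3plus_r50-d8.py',
    '../_base_/datasets/medicle_dataset.py',
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_40k.py'
]
crop_size = (512, 512)
data_preprocessor = dict(size=crop_size)  # set `size` only, not `size_divisor`
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=4),      # DrosDataset has four classes
    auxiliary_head=dict(num_classes=4))
```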
Thanks for your error report and we appreciate it a lot.
**Checklist**

**Describe the bug**
A clear and concise description of what the bug is.

**Reproduction**

1. What command or script did you run?
2. Did you make any modifications on the code or config? Did you understand what you have modified?
3. What dataset did you use?

**Environment**

1. Please run `python mmseg/utils/collect_env.py` to collect the necessary environment information and paste it here.
2. You may add other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.).

**Error traceback**
If applicable, paste the error traceback here.

**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!