open-mmlab / mmsegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.
https://mmsegmentation.readthedocs.io/en/main/
Apache License 2.0

AssertionError: only one of size and size_divisor should be valid #3425

Open · NabeelaKausar opened this issue 1 year ago

NabeelaKausar commented 1 year ago

I am trying to use mmsegmentation to segment pathology images in my own new dataset. I have 512x512 patches of images and annotations. I represent the four classes of the new dataset as given below and save this file in /mmsegmentation/mmseg/datasets:

    import os.path as osp

    from mmseg.datasets.basesegdataset import BaseSegDataset
    from mmseg.registry import DATASETS


    @DATASETS.register_module()
    class DrosDataset(BaseSegDataset):
        METAINFO = dict(
            classes=('background', 'tumor', 'immunecells', 'epi'),
            palette=[[255, 255, 255], [0, 0, 200], [1, 255, 0], [250, 1, 0]])

        def __init__(self,
                     img_suffix='.png',
                     seg_map_suffix='.png',
                     reduce_zero_label=False,
                     **kwargs) -> None:
            super().__init__(
                img_suffix=img_suffix,
                seg_map_suffix=seg_map_suffix,
                reduce_zero_label=reduce_zero_label,
                **kwargs)
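For the registry to actually pick up the new class, it usually also has to be imported in mmseg/datasets/__init__.py. A minimal sketch, assuming the class above is saved as mmseg/datasets/dros_dataset.py (hypothetical filename):

    # mmseg/datasets/__init__.py (excerpt) -- import and expose the custom dataset
    from .dros_dataset import DrosDataset  # hypothetical module name

    __all__ = [
        # ... existing dataset names ...
        'DrosDataset',
    ]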

Further, I have made a file named medicle_dataset.py in /home/mmsegmentation/configs/_base_/datasets for the dataset settings:

    dataset_type = 'DrosDataset'
    data_root = '/home/mmsegmentation/data'

    crop_size = (512, 512)
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations'),
        dict(
            type='RandomResize',
            scale=(2048, 1024),
            ratio_range=(0.5, 2.0),
            keep_ratio=True),
        dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
        dict(type='RandomFlip', prob=0.5),
        dict(type='PhotoMetricDistortion'),
        dict(type='PackSegInputs')
    ]
    test_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
        dict(type='LoadAnnotations'),
        dict(type='PackSegInputs')
    ]
    train_dataloader = dict(
        batch_size=2,
        num_workers=2,
        persistent_workers=True,
        sampler=dict(type='InfiniteSampler', shuffle=True),
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            data_prefix=dict(
                img_path='/home/neda/mmsegmentation/data/img_dir/Train',
                seg_map_path='/home/neda/mmsegmentation/data/ann_dir/train'),
            pipeline=train_pipeline))
    val_dataloader = dict(
        batch_size=1,
        num_workers=4,
        persistent_workers=True,
        sampler=dict(type='DefaultSampler', shuffle=False),
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            data_prefix=dict(
                img_path='/home/neda/mmsegmentation/data/img_dir/val_train',
                seg_map_path='/home/neda/mmsegmentation/data/ann_dir/train/val'),
            pipeline=test_pipeline))
    test_dataloader = val_dataloader

    val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
    test_evaluator = val_evaluator

### I am getting this error, please help

    File "/home/neda/mmsegmentation/mmseg/models/data_preprocessor.py", line 123, in forward
        inputs, data_samples = stack_batch(
    File "/home/neda/mmsegmentation/mmseg/utils/misc.py", line 65, in stack_batch
        assert (size is not None) ^ (size_divisor is not None), \
    AssertionError: only one of size and size_divisor should be valid

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

Describe the bug

A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?

    A placeholder for the command.
  2. Did you make any modifications on the code or config? Did you understand what you have modified?

  3. What dataset did you use?

Environment

  1. Please run python mmseg/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback

If applicable, paste the error traceback here.

A placeholder for traceback.

Bug fix

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

1dmesh commented 1 year ago

Following the stack trace, the failure is in the data preprocessor. Only one of size or size_divisor should be set in your data_preprocessor config; you likely set both. Check the dumped config in your work_dirs to see the final, merged config.
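For reference, a minimal sketch of a data_preprocessor dict that satisfies this check, with only size set; the normalization values below are the usual ImageNet defaults and are assumptions, not taken from the reporter's config:

    crop_size = (512, 512)
    data_preprocessor = dict(
        type='SegDataPreProcessor',
        mean=[123.675, 116.28, 103.53],  # assumed ImageNet statistics
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_val=0,
        seg_pad_val=255,
        # Set exactly one of `size` and `size_divisor`; here batches are padded/cropped to `size`.
        size=crop_size)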

narrowsnap commented 10 months ago

What is your model config? I met the same error because I hadn't set data_preprocessor in the model.
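A minimal sketch of wiring the preprocessor into the model section of the user config, assuming a data_preprocessor dict like the one above is already defined:

    # Pass the preprocessor to the model so SegDataPreProcessor actually
    # receives `size` (or `size_divisor`) when batches are stacked.
    model = dict(data_preprocessor=data_preprocessor)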

TheMakiran commented 7 months ago

I saw the same error when training the Cityscapes dataset using one of the pre-existing deeplabv3plus models. When I switched to a different model, the error did not appear, and training commenced successfully.

The error occurred when training based on this config file: deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py
The error did not occur when training on this config file: deeplabv3plus_r101-d8_4xb2-40k_cityscapes-512x1024.py

ashutoshsingh0223 commented 7 months ago

Probably neither size nor size_divisor is set. Set size=crop_size in the data_preprocessor dict.

junhaojia commented 6 months ago

Remove the "data_preprocessor" configuration option from the configuration file.

0xD4rky commented 4 months ago

This error comes from the code path in mmseg/models/data_preprocessor.py.

If you set size_divisor=None there, the error will be resolved.
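For context, the check that fires is the XOR assertion quoted in the traceback (mmseg/utils/misc.py, stack_batch): exactly one of size and size_divisor may be non-None. A standalone sketch of the same logic, not the library code itself:

    # Illustration of the XOR check from the traceback (not mmseg source).
    def check_padding_args(size=None, size_divisor=None):
        # Passes only when exactly one of the two arguments is provided.
        assert (size is not None) ^ (size_divisor is not None), \
            'only one of size and size_divisor should be valid'

    check_padding_args(size=(512, 512))   # OK: only `size` is set
    check_padding_args(size_divisor=32)   # OK: only `size_divisor` is set
    # check_padding_args()                                   # fails: neither is set
    # check_padding_args(size=(512, 512), size_divisor=32)   # fails: both are set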

ffushiyang commented 1 month ago

> Probably neither size nor size_divisor is set. Set size=crop_size in the data_preprocessor dict.

Thanks very much! It worked after I followed your suggestion!