open-mmlab / mmsegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.
https://mmsegmentation.readthedocs.io/en/main/
Apache License 2.0

Incorrect or Missing Palette Values for Potsdam, Vaihingen, and iSaid checkpoints #3334


WilliamLockeIV commented 1 year ago

Possibly related to Issue #2867, I've found color palette errors when loading checkpointed models through MMSegInferencer for inference. Specifically, models trained on the Potsdam and Vaihingen datasets incorrectly assign "building" and "low_vegetation" the same color, while models trained on the iSaid dataset throw an error because the checkpoint metadata contains a saved list of classes but None for the palette.

Immediately below are the code and output showing the errors, followed by the installation commands and relevant imports. I ran all of the following on Google Colab with a T4 GPU.

Error Example with checkpointed model deeplabv3plus_r18-d8_4xb4-80k_vaihingen-512x512:

inferencer = MMSegInferencer(model='deeplabv3plus_r18-d8_4xb4-80k_vaihingen-512x512')
inferencer.model.dataset_meta
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth

Downloading: "https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth" to /root/.cache/torch/hub/checkpoints/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth
/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()`` 
  warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
  warnings.warn(

09/18 19:48:29 - mmengine - WARNING - Failed to search registry with scope "mmseg" in the "function" registry tree. As a workaround, the current "function" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmseg" is a correct scope, or whether the registry is initialized.

/usr/local/lib/python3.10/dist-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '

{'classes': ('impervious_surface',
  'building',
  'low_vegetation',
  'tree',
  'car',
  'clutter'),
 'palette': [[255, 255, 255],
  [0, 0, 255],
  [0, 0, 255],
  [0, 255, 0],
  [255, 255, 0],
  [255, 0, 0]]}

There are quite a few warnings before the actual classes and palette are printed, but notice that the second and third entries under 'palette' are identical ([0, 0, 255]), so "building" and "low_vegetation" are rendered in the same color during visualization. I can fix this by assigning dataset-specific classes and a palette to inferencer.model.dataset_meta after initialization:

inferencer = MMSegInferencer(model='deeplabv3plus_r18-d8_4xb4-80k_vaihingen-512x512')
inferencer.model.dataset_meta = {'classes':vaihingen_classes(), 'palette':vaihingen_palette()}
inferencer.model.dataset_meta
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth

/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()`` 
  warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
  warnings.warn(

{'classes': ['impervious_surface',
  'building',
  'low_vegetation',
  'tree',
  'car',
  'clutter'],
 'palette': [[255, 255, 255],
  [0, 0, 255],
  [0, 255, 255],
  [0, 255, 0],
  [255, 255, 0],
  [255, 0, 0]]}

Here the third entry under 'palette' is corrected to [0, 255, 255]. Doing the same thing with a model pretrained on the Potsdam dataset gives the same results. With a model pretrained on the iSaid dataset, however, I get the following:

inferencer = MMSegInferencer(model='deeplabv3plus_r18-d8_4xb4-80k_isaid-896x896')
inferencer.model.dataset_meta
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth

Downloading: "https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth" to /root/.cache/torch/hub/checkpoints/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth
/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()`` 
  warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '

{'classes': ('background',
  'ship',
  'store_tank',
  'baseball_diamond',
  'tennis_court',
  'basketball_court',
  'Ground_Track_Field',
  'Bridge',
  'Large_Vehicle',
  'Small_Vehicle',
  'Helicopter',
  'Swimming_pool',
  'Roundabout',
  'Soccer_ball_field',
  'plane',
  'Harbor'),
 'palette': None}

This doesn't throw an error immediately, but if I then try to use the inferencer to segment an image, I get the following error:

img_demo = mmcv.imread('/content/mmsegmentation/demo/demo.png')
results_demo = inferencer(img_demo, show=True, opacity=.4)
---------------------------------------------------------------------------

AssertionError                            Traceback (most recent call last)

<ipython-input-13-4383f13111d7> in <cell line: 2>()
      1 img_demo = mmcv.imread('/content/mmsegmentation/demo/demo.png')
----> 2 results_demo = inferencer(img_demo, show=True, opacity=.4)

3 frames

/content/mmsegmentation/mmseg/apis/mmseg_inferencer.py in __call__(self, inputs, return_datasamples, batch_size, show, wait_time, out_dir, img_out_dir, pred_out_dir, **kwargs)
    181             img_out_dir = ''
    182 
--> 183         return super().__call__(
    184             inputs=inputs,
    185             return_datasamples=return_datasamples,

/usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py in __call__(self, inputs, return_datasamples, batch_size, **kwargs)
    222                      if self.show_progress else inputs):
    223             preds.extend(self.forward(data, **forward_kwargs))
--> 224         visualization = self.visualize(
    225             ori_inputs, preds,
    226             **visualize_kwargs)  # type: ignore  # noqa: E501

/content/mmsegmentation/mmseg/apis/mmseg_inferencer.py in visualize(self, inputs, preds, show, wait_time, img_out_dir, opacity)
    220             raise ValueError('Visualization needs the "visualizer" term'
    221                              'defined in the config, but got None')
--> 222         self.visualizer.set_dataset_meta(**self.model.dataset_meta)
    223         self.visualizer.alpha = opacity
    224 

/content/mmsegmentation/mmseg/visualization/local_visualizer.py in set_dataset_meta(self, classes, palette, dataset_name)
    146         classes = classes if classes else get_classes(dataset_name)
    147         palette = palette if palette else get_palette(dataset_name)
--> 148         assert len(classes) == len(
    149             palette), 'The length of classes should be equal to palette'
    150         self.dataset_meta: dict = {'classes': classes, 'palette': palette}

AssertionError: The length of classes should be equal to palette
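
The assertion fires because dataset_meta has 16 class names but palette is None, so set_dataset_meta presumably falls back to get_palette(dataset_name) and the two lengths no longer match. The same dataset_meta workaround as above should therefore apply to the iSaid checkpoint as well. The following is only an untested sketch, using the isaid_classes()/isaid_palette() helpers imported in the Setup section below, and assuming they return lists of equal length:

# Workaround sketch for the iSaid checkpoint (untested; mirrors the Vaihingen fix above).
# Assumes isaid_classes() and isaid_palette() return one color per class,
# which is exactly what the failing assertion in set_dataset_meta requires.
from mmseg.apis import MMSegInferencer
from mmseg.utils.class_names import isaid_classes, isaid_palette

inferencer = MMSegInferencer(model='deeplabv3plus_r18-d8_4xb4-80k_isaid-896x896')

classes = isaid_classes()
palette = isaid_palette()
assert len(classes) == len(palette), 'class list and palette must be the same length'

inferencer.model.dataset_meta = {'classes': classes, 'palette': palette}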

Setup

Below are the installation and import statements, in case there is useful information in them.

# Install PyTorch
!conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
# Install mim
!pip install -U openmim
# Install mmengine
!mim install mmengine
# Install MMCV
!mim install 'mmcv >= 2.0.0rc1'
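# Install MMSegmentation from source (main branch)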
!rm -rf mmsegmentation
!git clone -b main https://github.com/open-mmlab/mmsegmentation.git
%cd mmsegmentation
!pip install -e .
%cd mmsegmentation
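# Imports used in this issue: dataset class/palette helpers, environment info, and the inferencer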
from mmseg.utils.class_names import potsdam_classes, potsdam_palette, vaihingen_classes, vaihingen_palette
from mmseg.utils.class_names import isaid_classes, isaid_palette
from mmseg.utils import collect_env
from mmseg.apis import MMSegInferencer
import mmcv
%cd /content
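
As a quick sanity check of the helpers themselves (this only prints what the functions imported above return; I'm assuming they reflect the canonical dataset palettes on the main branch):

# Compare the canonical palettes with what the checkpoints report.
print(vaihingen_classes())
print(vaihingen_palette())    # third entry should be [0, 255, 255] for 'low_vegetation'
print(len(isaid_classes()), len(isaid_palette()))    # both should be 16 for the assertion to pass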

Environment

collect_env()
OrderedDict([('sys.platform', 'linux'),
             ('Python', '3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]'),
             ('CUDA available', True),
             ('numpy_random_seed', 2147483648),
             ('GPU 0', 'Tesla T4'),
             ('CUDA_HOME', '/usr/local/cuda'),
             ('NVCC', 'Cuda compilation tools, release 11.8, V11.8.89'),
             ('GCC',
              'x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0'),
             ('PyTorch', '2.0.1+cu118'),
             ('PyTorch compiling details',
              'PyTorch built with:\n  - GCC 9.3\n  - C++ Version: 201703\n  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n  - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n  - OpenMP 201511 (a.k.a. OpenMP 4.5)\n  - LAPACK is enabled (usually provided by MKL)\n  - NNPACK is enabled\n  - CPU capability usage: AVX2\n  - CUDA Runtime 11.8\n  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n  - CuDNN 8.7\n  - Magma 2.6.1\n  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n'),
             ('TorchVision', '0.15.2+cu118'),
             ('OpenCV', '4.8.0'),
             ('MMEngine', '0.8.4'),
             ('MMSegmentation', '1.1.1+')])
kuaiqushangzixiba commented 7 months ago

Did you solve this problem? It has been troubling me as well. Even after modifying the configuration, I couldn't get a correct iSaid output image.