Possibly related to Issue #2867, I've found color palette errors when loading checkpointed models through MMSegInferencer for inference. Specifically, models trained on the Potsdam and Vaihingen datasets incorrectly assign "building" and "low_vegetation" the same color, while models trained on the iSAID dataset throw an error because the checkpoint stores a list of classes but None for the palette.
Immediately below are the code and output showing the errors, followed by the installation and relevant import statements. I ran all of the following on Google Colab with a T4 GPU.
Error
Example with checkpointed model deeplabv3plus_r18-d8_4xb4-80k_vaihingen-512x512:
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth
Downloading: "https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth" to /root/.cache/torch/hub/checkpoints/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth
/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()``
warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
warnings.warn(
09/18 19:48:29 - mmengine - WARNING - Failed to search registry with scope "mmseg" in the "function" registry tree. As a workaround, the current "function" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmseg" is a correct scope, or whether the registry is initialized.
/usr/local/lib/python3.10/dist-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
{'classes': ('impervious_surface',
'building',
'low_vegetation',
'tree',
'car',
'clutter'),
'palette': [[255, 255, 255],
[0, 0, 255],
[0, 0, 255],
[0, 255, 0],
[255, 255, 0],
[255, 0, 0]]}
There are quite a few warnings before the actual classes and palette are printed, but notice that the second and third entries under 'palette' are identical, [0, 0, 255], so "building" and "low_vegetation" are rendered in the same color during visualization. I work around this by assigning a dataset-specific palette and classes to inferencer.model.dataset_meta after initialization.
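A sketch of that workaround (assuming `MMSegInferencer` from `mmseg.apis`; the palette values are the standard Vaihingen colors as printed in the corrected output below, and `fix_dataset_meta` is a hypothetical helper of my own, not an MMSegmentation API):

```python
# Correct Vaihingen metadata; the checkpoint stored [0, 0, 255] for both
# 'building' and 'low_vegetation', so we overwrite the whole palette.
VAIHINGEN_META = {
    'classes': ('impervious_surface', 'building', 'low_vegetation',
                'tree', 'car', 'clutter'),
    'palette': [[255, 255, 255], [0, 0, 255], [0, 255, 255],
                [0, 255, 0], [255, 255, 0], [255, 0, 0]],
}

def fix_dataset_meta(meta, correct_meta):
    """Return a copy of ``meta`` with classes and palette replaced."""
    fixed = dict(meta)
    fixed['classes'] = correct_meta['classes']
    fixed['palette'] = [list(c) for c in correct_meta['palette']]
    return fixed

# Hypothetical usage with the checkpoint above (not run here):
# from mmseg.apis import MMSegInferencer
# inferencer = MMSegInferencer(
#     model='deeplabv3plus_r18-d8_4xb4-80k_vaihingen-512x512')
# inferencer.model.dataset_meta = fix_dataset_meta(
#     inferencer.model.dataset_meta, VAIHINGEN_META)

# The buggy metadata as printed in the log above:
buggy = {'classes': VAIHINGEN_META['classes'],
         'palette': [[255, 255, 255], [0, 0, 255], [0, 0, 255],
                     [0, 255, 0], [255, 255, 0], [255, 0, 0]]}
fixed = fix_dataset_meta(buggy, VAIHINGEN_META)
assert fixed['palette'][2] == [0, 255, 255]  # no longer duplicates entry 1
```

After this patch, re-printing `inferencer.model.dataset_meta` gives the corrected output shown next.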
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen/deeplabv3plus_r18-d8_4x4_512x512_80k_vaihingen_20211231_230805-7626a263.pth
/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()``
warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
warnings.warn(
{'classes': ['impervious_surface',
'building',
'low_vegetation',
'tree',
'car',
'clutter'],
'palette': [[255, 255, 255],
[0, 0, 255],
[0, 255, 255],
[0, 255, 0],
[255, 255, 0],
[255, 0, 0]]}
Here the third entry under 'palette' is corrected to [0, 255, 255]. A model pretrained on the Potsdam dataset behaves the same way before and after the fix. A model pretrained on the iSAID dataset, however, produces the following:
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth
Downloading: "https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3plus/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth" to /root/.cache/torch/hub/checkpoints/deeplabv3plus_r18-d8_4x4_896x896_80k_isaid_20220110_180526-7059991d.pth
/content/mmsegmentation/mmseg/models/builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()``
warnings.warn('``build_loss`` would be deprecated soon, please use '
/content/mmsegmentation/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
{'classes': ('background',
'ship',
'store_tank',
'baseball_diamond',
'tennis_court',
'basketball_court',
'Ground_Track_Field',
'Bridge',
'Large_Vehicle',
'Small_Vehicle',
'Helicopter',
'Swimming_pool',
'Roundabout',
'Soccer_ball_field',
'plane',
'Harbor'),
'palette': None}
This won't throw an error immediately, but if I then use the inferencer to segment an image, I get the following error:
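For the iSAID case, a minimal sketch of how I sidestep the None palette: generate one distinct color per class and assign it before running inference. The seeded-random generation and the `make_palette` helper are my own choice, not MMSegmentation's built-in behavior; the class count of 16 matches the classes tuple printed above.

```python
import random

def make_palette(num_classes, seed=42):
    """Deterministically generate one distinct RGB color per class,
    for checkpoints whose dataset_meta has 'palette': None."""
    rng = random.Random(seed)
    palette, seen = [], set()
    while len(palette) < num_classes:
        color = (rng.randint(0, 255), rng.randint(0, 255), rng.randint(0, 255))
        if color not in seen:  # avoid the duplicate-color bug seen above
            seen.add(color)
            palette.append(list(color))
    return palette

# Hypothetical usage with the iSAID checkpoint (not run here):
# inferencer.model.dataset_meta['palette'] = make_palette(
#     len(inferencer.model.dataset_meta['classes']))

palette = make_palette(16)  # iSAID stores 16 classes including background
assert len(palette) == 16
```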
Setup
Below are the installation and import statements, in case there is useful information in them.
Environment