czczup / ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
https://arxiv.org/abs/2205.08534
Apache License 2.0

Using Test.py #60

Closed. dhosk22 closed this issue 1 year ago.

dhosk22 commented 1 year ago

Hi, I have two questions about the test.py file.

  1. I'm trying to use test.py in the same way that I got image_demo.py working. I've noticed that test.py doesn't have an image directory argument similar to the 'img' argument in image_demo.py. How should I tell test.py what data to test?

```shell
!CUDA_VISIBLE_DEVICES=0 python test.py \
  /content/ViT-Adapter/segmentation/configs/cityscapes/mask2former_beit_adapter_large_896_80k_cityscapes_ms.py \
  /content/ViT-Adapter/segmentation/release/mask2former_beit_adapter_large_896_80k_cityscapes.pth.tar \
  /content/drive/MyDrive/ColabNotebooks/ViT-AdapterContent/dataA/dataA/CameraRGB
```

The third argument is the directory where my data is.

  2. Also, I noticed that image_demo.py has a --palette argument to choose the segmentation palette, but test.py doesn't. If I added that argument to test.py, would I easily be able to define custom palettes specific to the dataset I'm testing on?
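To be concrete, here's a rough sketch of what I mean by a custom palette: just a class-id-to-RGB lookup applied to the predicted label map. This is plain NumPy; the class names and colors are placeholders I made up, not values from any real config:

```python
import numpy as np

# A palette is one RGB triple per class id (values here are illustrative).
palette = np.array([
    [128, 64, 128],   # class 0, e.g. "road"
    [244, 35, 232],   # class 1, e.g. "sidewalk"
    [70, 70, 70],     # class 2, e.g. "building"
], dtype=np.uint8)

# Toy 2x2 prediction of class ids, standing in for a model's output.
seg = np.array([[0, 1],
                [2, 0]])

# Fancy indexing turns the label map into an (H, W, 3) color image.
color = palette[seg]
```

If that's all a palette amounts to, it seems like wiring a --palette choice into test.py would mostly be a matter of exposing the right list of triples.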
czczup commented 1 year ago

Hi, test.py is a copy of https://github.com/open-mmlab/mmsegmentation/blob/master/tools/test.py. You can refer to the documentation provided by mmsegmentation for help.
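For what it's worth, mmsegmentation's test.py doesn't take an image-directory argument at all: it builds the test dataset from the config file's `data.test` section, so you point it at your own images by editing (or overriding) that part of the config. Here is a rough sketch of what that fragment looks like as a plain Python dict; the dataset type and field values are illustrative, and the path is the one from the command above:

```python
# Sketch of the config fragment test.py consumes (mmcv-style configs are
# Python files that produce structures like this). Field values below are
# examples, not taken from the actual cityscapes config.
cfg = {
    "data": {
        "test": {
            "type": "CityscapesDataset",  # dataset class to instantiate
            "data_root": "/content/drive/MyDrive/ColabNotebooks/"
                         "ViT-AdapterContent/dataA/dataA",
            "img_dir": "CameraRGB",       # folder of input images, relative
                                          # to data_root
            "pipeline": [],               # preprocessing steps (elided here)
        }
    }
}
```

So rather than adding a positional image argument to test.py, the mmsegmentation way is to set `img_dir`/`data_root` in the config for the dataset you want to evaluate.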