w1oves / Rein

[CVPR 2024] Official implementation of "Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation"
https://zxwei.site/rein
GNU General Public License v3.0

Create model with input size of 128x128 #30

Closed xXCoffeeColaXc closed 4 months ago

xXCoffeeColaXc commented 4 months ago

Hello!

Can I convert the model so that it accepts images with 128 x 128 dimensions, and then run inference with the converted model? After converting with your script, I guess I would also have to change the config file. Is this possible?

Thanks

w1oves commented 4 months ago

Steps to Follow:

  1. Convert the Weight File: Run the following command to convert the weight file:

    python tools/convert_models/convert_dinov2.py checkpoints/dinov2_vitl14_pretrain.pth checkpoints/dinov2_converted_128x128.pth --height 128 --width 128
  2. Update Dataset Configuration: Modify the following keys in configs.datasets:

    • crop_size
    • scale
  3. Update Model Configuration: Adjust these key arguments in configs.models:

    • crop_size
    • img_size
    • test_cfg.crop_size
    • test_cfg.stride
  4. Adjust Final Configuration Scales: Change the scales parameter in the final configurations to meet your specific requirements.
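Taken together, the overrides in steps 2–4 might look like the following config fragment. This is a sketch only: the exact key paths and the `scale`/`stride` values are illustrative assumptions, so verify them against the repo's own mmsegmentation-style configs before use.

```python
# Illustrative overrides for 128x128 input (key names mirror common
# mmsegmentation-style configs; check this repo's actual config files).
crop_size = (128, 128)

# configs/datasets: crop and resize settings for the data pipeline.
dataset_overrides = dict(
    crop_size=crop_size,
    scale=(256, 128),  # assumed value; choose a scale that fits your data
)

# configs/models: backbone input size plus sliding-window test settings.
model_overrides = dict(
    backbone=dict(img_size=128),
    crop_size=crop_size,
    # Stride should be smaller than crop_size so test windows overlap.
    test_cfg=dict(crop_size=(128, 128), stride=(85, 85)),
)
```

The sliding-window `stride` being smaller than `crop_size` is what gives overlapping inference windows; keep that relationship when you pick your own values.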