Closed: hwanyu112 closed this issue 2 years ago
When I set base_lr=0.09, I get a higher mIoU. Could you please share your full set of hyperparameters and the number of training epochs for each dataset? Thanks a lot.
Hi @hwanyu112 ,
Thanks so much for your interest in LSeg!
We provide the training script for the ADE20K dataset, and you can easily adapt it for the zero-shot experiments. The FSS-1000 results should be straightforward to reproduce. For the COCO and PASCAL datasets, because they contain very few classes, you need early stopping (the best results usually come from the checkpoints of epochs 0-3) and a hyperparameter sweep to find the best learning rate; the optimal learning rate should be smaller than the one used for FSS-1000.
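For example, a minimal sketch of such a sweep, assuming a zero-shot entry point like `train_lseg_zs.py` (the flag names `--dataset`, `--base_lr`, and `--max_epochs` are assumptions for illustration, not the repo's confirmed CLI; adjust them to match the script's actual arguments):

```python
# Hypothetical learning-rate sweep with early stopping for the zero-shot setting.
# Flag names below are assumptions; check train_lseg_zs.py's argparse options.
import subprocess

candidate_lrs = [0.0005, 0.001, 0.002, 0.004]  # try values below the FSS-1000 lr

for lr in candidate_lrs:
    # Cap training at a few epochs, since the best COCO/PASCAL checkpoints
    # tend to come from epochs 0-3 (early stopping).
    subprocess.run(
        [
            "python", "train_lseg_zs.py",
            "--dataset", "coco",      # or "pascal"
            "--base_lr", str(lr),
            "--max_epochs", "4",
        ],
        check=True,
    )
```

Then evaluate each epoch's checkpoint on the validation fold and keep the best one, rather than the final epoch.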
Hope this helps!
Best, Boyi
Hi! Thanks for your interesting work! I have recently been trying to reproduce the zero-shot experiments from the paper, but, as in https://github.com/isl-org/lang-seg/issues/19#issue-1213501618, the mIoU I get is much lower than yours.
Here are my scripts:
train_lseg_zs.py:
command:
Default arguments: base_lr=0.004, weight_decay=1e-4, momentum=0.9
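For reference, a minimal sketch of the optimizer these defaults would correspond to, assuming the script feeds them into a standard PyTorch SGD optimizer (the actual construction in `train_lseg_zs.py` may differ):

```python
# Sketch only: placeholder module standing in for the LSeg network.
import torch

model = torch.nn.Linear(8, 8)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.004,           # base_lr
    momentum=0.9,       # momentum
    weight_decay=1e-4,  # weight_decay
)
```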
I wonder where the problem is. Could you please share your training script for the zero-shot experiments?