shiyutang opened 1 year ago
We reported mIoU=40.2 in the paper. We noticed that you have already reproduced this result. The model released in this repo is a better one, obtained after multiple training runs.
Thank you very much. I wonder: did you tune anything across those training runs, or did the roughly 1% gain in mIoU come purely from run-to-run fluctuation?
We didn't tune anything. We used the same settings as in this repo.
Thanks a lot.
Hello, I met the same problem. I reproduced seaformer-base on ADE20K but only got 39.96 mIoU. I noticed the lr in the paper is 0.0005, while in this repo the lr is 0.00025. I want to know which one matches your actual config?
@nizhenliang Hello, the lr is 0.0005 for batch size 32 and 0.00025 for batch size 16.
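In other words, the lr follows the usual linear-scaling rule with total batch size. A minimal sketch in the mmsegmentation config style this repo builds on (the field names are standard mmseg ones, not copied from any specific config file here):

```python
# Linear LR scaling: lr grows in proportion to the total batch size
# (num_gpus * samples_per_gpu).

# Total batch size 32 (e.g. 8 GPUs x 4 images per GPU) -> lr = 0.0005
data = dict(samples_per_gpu=4, workers_per_gpu=4)
optimizer = dict(type='AdamW', lr=0.0005, weight_decay=0.01)

# Total batch size 16 (e.g. 8 GPUs x 2 images per GPU) -> lr = 0.00025
# data = dict(samples_per_gpu=2, workers_per_gpu=2)
# optimizer = dict(type='AdamW', lr=0.00025, weight_decay=0.01)
```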
Hello, I wonder what seed value was used in this experiment. Thanks.
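(In case it helps anyone reproducing this: mmsegmentation lets you pin the seed yourself. A minimal sketch, assuming the standard mmseg 0.x API; the seed value 0 is just an example, not the authors' setting:)

```python
from mmseg.apis import set_random_seed

# Pins Python, NumPy, and PyTorch RNGs; deterministic=True additionally
# fixes cuDNN behavior at some speed cost. Equivalent to passing
# --seed 0 --deterministic to tools/train.py.
set_random_seed(0, deterministic=True)
```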
I tried to reproduce the base model, but I only got mIoU=40.0 rather than the 41.2 reported in the codebase.
Here are the changes I made to the original code.
I train the model on 4 GPUs with 4 images per GPU (a total batch size of 16).
I load the pre-trained classification model through the config:
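(Roughly, the entry follows the standard mmsegmentation pretrained-backbone pattern; the sketch below uses a placeholder checkpoint path, not my exact lines:)

```python
model = dict(
    backbone=dict(
        # Load ImageNet-pretrained SeaFormer weights before fine-tuning.
        # 'pretrained/seaformer_base.pth' is a placeholder path.
        init_cfg=dict(type='Pretrained',
                      checkpoint='pretrained/seaformer_base.pth')))
```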
And the script I used to train is:

```bash
export CUDA_VISIBLE_DEVICES=3,5,6,7
sh tools/dist_train.sh local_configs/seaformer/seaformer_base_512x512_160k_2x8_ade20k.py 4 --work-dir output
```
Could you point out what might be going wrong? Thank you very much :)