isl-org / lang-seg

Language-Driven Semantic Segmentation

Difference between the settings for demo and those in your paper #13

Closed whiteking64 closed 2 years ago

whiteking64 commented 2 years ago

Hi, I have a question about the difference in settings between the demo in your README and the experiments in your paper.

In the README, you published pre-trained weights for the demo. It says that during training the backbones for both image and text are ViT-L/16. Section 5.1 of your paper says:

We used LSeg with DPT and a smaller ViT-B/32 backbone together with the CLIP ViT-B/32 text encoder ...

When reproducing your results in 5.1, does that require training from scratch with the ViT-B/32 backbone for the images? Also, are there any other differences, such as batch size? More specifically, how do I change the arguments in train.sh?

Finally, is it possible to share with us (or me) the weights used for your results?

Thank you in advance.

Boyiliee commented 2 years ago

Hi @whiteking64 ,

Thanks for your interest in LSeg!

  1. Currently, we provide the demo model for users to play around with, using any label set of arbitrary length and order.
  2. For all the ablation studies, such as the results in 5.1, we train LSeg with DPT and a smaller ViT-B/32 backbone together with the CLIP ViT-B/32 text encoder on the ADE20K dataset. Therefore, you can follow the training and testing instructions in the README. The primary change is to set --backbone clip_vitb32_384; you can check the details via this link. (A sketch of the text-encoder side follows after this list.)
  3. For these experiments and for the demo model, we use batch size = 8, the same as in train.sh: we use 8 GPUs and load one batch per GPU.
  4. Yes, we will share these weights. Please allow some time for me to sort them out (as well as the code); they should be available in the next few months.
  5. For more details about the arguments, please check here.
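
To make point 2 concrete, here is a minimal sketch (not the actual LSeg code, just an illustration using the openai/CLIP package) of the text side of that setup: embedding an arbitrary label set with the CLIP ViT-B/32 text encoder. LSeg correlates these per-label embeddings with dense per-pixel image features of the same dimension; the sketch covers only the text encoder.

# Minimal illustration (not the LSeg repo's code): embed an arbitrary label set
# with the CLIP ViT-B/32 text encoder mentioned in point 2.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # same text encoder as in 5.1

# Any label set, of any length and in any order (point 1).
labels = ["sky", "building", "tree", "road", "other"]
tokens = clip.tokenize(labels).to(device)

with torch.no_grad():
    text_features = model.encode_text(tokens)                  # (num_labels, 512)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# LSeg assigns each pixel the label whose embedding correlates best with that
# pixel's image feature (cf. the 512-dimensional features used in 5.1).
print(text_features.shape)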

Hope this helps. Best wishes for your research!

whiteking64 commented 2 years ago

@Boyiliee

Thank you for your reply. So, the command to reproduce your results in section 5.1 without any blocks (block depth = 0) is:

python train_lseg.py \
        --dataset ade20k \
        --data_path ../datasets \
        --batch_size 8 \
        --exp_name lseg_ade20k_l32 \
        --base_lr 0.004 \
        --weight_decay 1e-4 \
        --no-scaleinv \
        --max_epochs 240 \
        --widehead \
        --accumulate_grad_batches 2 \
        --backbone clip_vitb32_384 \
        --num_features=512

I changed the backbone and num_features (which is 256 by default; in Table 5 you used 512 in the first row, and that matches the result in Table 4 with depth 0).

Am I correct?

Also, I experimentally ran with batch size = 1 on 1 GPU. It took about 17 hours for one epoch. With 8 GPUs, I assume this is reduced to about 2.1 hours, but 240 epochs would still take days to complete. Was it the same in your experiments?
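
For reference, the rough arithmetic behind that estimate (assuming near-linear scaling across GPUs, which is an idealization):

# Back-of-the-envelope estimate, assuming near-linear scaling across GPUs.
hours_per_epoch_1gpu = 17.0   # measured: batch size 1, 1 GPU
num_gpus = 8
epochs = 240

hours_per_epoch = hours_per_epoch_1gpu / num_gpus   # ~2.1 hours
total_days = hours_per_epoch * epochs / 24          # ~21 days
print(f"{hours_per_epoch:.1f} h/epoch, ~{total_days:.0f} days for {epochs} epochs")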

Boyiliee commented 2 years ago

Hi @whiteking64 ,

In this case, since you are using a smaller backbone, you could increase --batch_size to 4 or larger depending on your GPU memory; I set it to 4 in my experiments. As mentioned in the paper, we primarily use Quadro RTX 6000 or V100 GPUs for our experiments, so our training is likely faster than yours. In addition, we primarily follow the training principles of DPT; you could also refer to it for more details.
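
A small sketch of how the batch-size numbers above combine, under the usual PyTorch Lightning convention that --batch_size is per GPU and --accumulate_grad_batches multiplies the effective batch (this interpretation is an assumption, not something confirmed in this thread):

# Effective batch size under one common PyTorch Lightning convention:
# per-GPU batch size x number of GPUs x gradient accumulation steps.
# This interpretation is an assumption, not confirmed by the authors here.
def effective_batch(per_gpu_batch: int, num_gpus: int, accumulate: int) -> int:
    return per_gpu_batch * num_gpus * accumulate

print(effective_batch(8, 8, 2))   # --batch_size 8 on 8 GPUs with accumulation 2 -> 128
print(effective_batch(4, 8, 2))   # the reply's setting of 4 per GPU -> 64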

Hope this helps!

soskek commented 2 years ago

Is the demo model of this repo the same as the model in Section 5.2?

[Screenshot attached: 2022-04-12 10:49:23]

Boyiliee commented 2 years ago

Yes!

soskek commented 2 years ago

Thank you again!!