Closed · whiteking64 closed this 2 years ago
Hi @whiteking64 ,
Thanks for your interest in LSeg!
To use the ViT-B/32 backbone, set --backbone clip_vitb32_384 ; you can check the details in train.sh .
As for the batch size, we use 8 GPUs and load one batch per GPU. Hope this helps. Best wishes for your research!
@Boyiliee
Thank you for your reply. So, the command to reproduce your results in Section 5.1 without any blocks (block depth = 0) would be:
python train_lseg.py \
--dataset ade20k \
--data_path ../datasets \
--batch_size 8 \
--exp_name lseg_ade20k_l32 \
--base_lr 0.004 \
--weight_decay 1e-4 \
--no-scaleinv \
--max_epochs 240 \
--widehead \
--accumulate_grad_batches 2 \
--backbone clip_vitb32_384 \
--num_features=512
I changed the backbone and num_features (which is 256 by default; in Table 5 you used 512 in the first row, which matches the Table 4 result with depth 0). Am I correct?
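As a sanity check on what those flags imply, here is a small sketch of the effective batch size per optimizer step. This assumes --batch_size is per-GPU and that gradient accumulation multiplies it, as is typical in PyTorch Lightning; it is not taken from the LSeg code, so verify against the repo before relying on exact numbers.

```python
# Hypothetical helper: effective batch size under Lightning-style flags
# (assumes --batch_size is per-GPU; verify against the LSeg training code).

def effective_batch_size(per_gpu: int, num_gpus: int, accumulate: int) -> int:
    """Number of samples contributing to each optimizer step."""
    return per_gpu * num_gpus * accumulate

# Single-GPU run with the command above (--batch_size 8, --accumulate_grad_batches 2):
print(effective_batch_size(8, 1, 2))   # -> 16
# Authors' reported setup (batch 4 per GPU on 8 GPUs), if accumulation of 2 is kept:
print(effective_batch_size(4, 8, 2))   # -> 64
```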
Also, I experimentally ran with batch size = 1 on 1 GPU. It took about 17 hours for one epoch. With 8 GPUs, I assume that drops to about 2.1 hours, but 240 epochs would still take days to complete. Was that the case in your experiments?
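The back-of-envelope scaling behind that estimate can be sketched as follows, assuming near-linear speedup across GPUs (real multi-GPU scaling is usually somewhat worse due to communication overhead; the helper below is illustrative, not part of the repo):

```python
# Rough wall-clock estimate, assuming near-linear scaling across GPUs.

def total_hours(hours_per_epoch_1gpu: float, num_gpus: int, epochs: int) -> float:
    """Estimated total training time under ideal linear GPU scaling."""
    per_epoch = hours_per_epoch_1gpu / num_gpus
    return per_epoch * epochs

hours = total_hours(17.0, 8, 240)
print(f"{hours:.0f} hours (~{hours / 24:.0f} days)")  # 510 hours (~21 days)
```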
Hi @whiteking64 ,
In this case, since you are using a smaller backbone, you could increase --batch_size
to 4 or larger depending on your GPU memory; I set it to 4 in my experiments. As mentioned in the paper, we primarily used Quadro RTX 6000 or V100 GPUs for the experiments, which is likely why our runs were faster than yours. In addition, we largely follow the training principle of DPT; you could refer to it for more details.
Hope this helps!
Is the "Model for demo" of this repo the same as the model in Section 5.2?
Yes!
Thank you again!!
Hi, I have a question on the difference of settings between the demo in your README and the experiment in your paper.
In the README, you published the pre-trained weights for the demo. It says that during training the backbones for both image and text are ViT-L/16, whereas Section 5.1 of your paper describes a different setup. When reproducing your results in 5.1, does that require training from scratch with a ViT-B/32 image backbone? Also, are there any other differences, such as batch size? More specifically, how do I change the arguments in
train.sh
? Finally, is it possible to share with us (or me) the weights used for your results?
Thank you in advance.