czczup / ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
https://arxiv.org/abs/2205.08534
Apache License 2.0

About batch size #116

Closed RYHSmmc closed 1 year ago

RYHSmmc commented 1 year ago

Hello, when I run dist_train.sh I get a CUDA out-of-memory error on 8x A100 (80 GB). I want to use a smaller batch size, but I can't find where to change this setting. Can anybody help me? Thanks.
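
(For context: ViT-Adapter's training configs follow the MMDetection/MMSegmentation convention, so the per-GPU batch size is controlled by samples_per_gpu in the data dict of the config file; the effective batch size is samples_per_gpu times the number of GPUs. A minimal sketch of such an override, with a hypothetical base-config path, might look like this.)

```python
# Illustrative MMDet/MMSeg-style config override; the base config path is hypothetical.
_base_ = ['./mask_rcnn_vit_adapter_config.py']  # replace with the config you actually train

data = dict(
    samples_per_gpu=1,   # images per GPU; lower this to reduce memory usage
    workers_per_gpu=2,   # dataloader worker processes per GPU
)
```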

czczup commented 1 year ago

You can set with_cp=True in the config to save memory. Which config file are you using?
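
(For reference: with_cp enables activation checkpointing in the backbone, recomputing intermediate activations during the backward pass instead of storing them, which trades extra compute for lower GPU memory. A minimal sketch of such an override in an MMDet/MMSeg-style config, again with a hypothetical base-config path:)

```python
# Illustrative config override enabling activation checkpointing in the backbone;
# the base config path is hypothetical.
_base_ = ['./mask_rcnn_vit_adapter_config.py']

model = dict(
    backbone=dict(
        with_cp=True,  # recompute activations on the backward pass to save GPU memory
    ),
)
```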

RYHSmmc commented 1 year ago

Thanks, the issue has been solved!