czczup / ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
https://arxiv.org/abs/2205.08534
Apache License 2.0

Video memory occupancy while training #99

Open CA-TT-AC opened 1 year ago

CA-TT-AC commented 1 year ago

Hi! I am interested in your work. I am training your UperNet + ViT-Adapter-L on ADE20K, and I found that the video memory usage gradually increases during training. Specifically, I train on 4×A100 GPUs with a batch size of 4 per GPU. At first it costs about 16 GB per GPU, but after roughly 60k iterations it rises to 44 GB per GPU. I want to know whether this is normal. Looking forward to your reply!

czczup commented 1 year ago

Hi, you can check `nvidia-smi` to double-check. If its reading is consistent with (or close to) the memory cost shown in the training log, then it is normal.
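Beyond a one-off comparison, a small helper can flag whether the sampled readings show a sustained upward trend (i.e. a possible leak) rather than normal fluctuation. This is a hypothetical sketch, not code from this repo; the window size and tolerance are arbitrary:

```python
def is_growing(samples_mib: list[float], window: int = 5,
               tol_mib: float = 64.0) -> bool:
    """Return True if the last `window` memory samples rise monotonically
    by more than `tol_mib` MiB in total (a sustained upward trend)."""
    if len(samples_mib) < window:
        return False
    recent = samples_mib[-window:]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return all(d > 0 for d in deltas) and (recent[-1] - recent[0]) > tol_mib
```

If the trend is real and the log disagrees with `nvidia-smi`, common culprits are caching-allocator fragmentation or tensors accumulated across iterations (e.g. keeping losses with their computation graphs).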