[Open] zhangvia opened this issue 2 months ago
cc @SunMarc and @muellerzr
Hey! This seems like a usage issue (getting 10 steps in just means it OOMed the normal way). I'd recommend this: https://pytorch.org/blog/understanding-gpu-memory-1/ 🤗
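The gist of that post: record a CUDA allocation history around the suspect steps, then dump a snapshot you can open at https://pytorch.org/memory_viz. A minimal sketch (assumes PyTorch >= 2.1; the file name and step count are placeholders):

```python
import torch

# Record allocation stack traces (PyTorch >= 2.1).
torch.cuda.memory._record_memory_history(max_entries=100_000)

# ... run a dozen or so training steps here so the jump around step 10 is captured ...

# Dump the history for inspection at https://pytorch.org/memory_viz, then stop recording.
torch.cuda.memory._dump_snapshot("oom_snapshot.pickle")
torch.cuda.memory._record_memory_history(enabled=None)
```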
@ArthurZucker I would like to take this issue up.
.take
System Info
Who can help?
No response
Information

Tasks

- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
I'm running train_xl.sh from this repo, with the 8-bit Adam optimizer swapped for Adafactor via `transformers.optimization.Adafactor`. Setup: two 40GB A100s, DeepSpeed Stage 2, batch size = 1, VTON-HD dataset.
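Roughly, the swap looks like this (simplified; the parameter group is a placeholder, and the keyword arguments follow the documented `transformers` Adafactor usage rather than being an exact copy of the script):

```python
from transformers.optimization import Adafactor

# Replace bitsandbytes 8-bit Adam with Adafactor, using a fixed learning rate
# instead of Adafactor's relative-step schedule.
optimizer = Adafactor(
    unet.parameters(),      # placeholder for whatever parameters train_xl.sh optimizes
    lr=1e-4,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
    weight_decay=1e-2,
)
```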
Adafactor should use less GPU memory than 8-bit Adam, since it keeps smaller optimizer states, but it OOMs at this line.
The OOM happens after 10 steps, and I don't know what happens at the 10th step. I call `accelerator.backward()` and `optimizer.step()` every step. At the 10th step, memory usage jumps from 29GB to 39GB with the 8-bit Adam optimizer, and it OOMs with Adafactor.
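A simple way to pin down where the jump happens is to log memory every step, e.g. (sketch; `model`, `optimizer`, `train_dataloader`, and `accelerator` are assumed to come from the training script, and the loss computation is schematic):

```python
import torch

for step, batch in enumerate(train_dataloader):
    loss = model(**batch).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()

    # Per-GPU memory, in GiB, as seen by the caching allocator.
    alloc = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    accelerator.print(f"step {step}: allocated={alloc:.1f} GiB, reserved={reserved:.1f} GiB")
```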
Expected behavior
Could anybody explain this phenomenon?