huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Model loading OOM when using FSDP + QLoRA #31721

Closed: Neo9061 closed this issue 1 month ago

Neo9061 commented 3 months ago

System Info

Baseline: on a single p4de.24xlarge instance (640 GB GPU memory, 1000 GB CPU memory), I am able to use QLoRA (4-bit) to train a large model with a size close to 300B parameters. device_map is set to "auto", with the code below.

import torch
from transformers import AutoModelForCausalLM

# model_id and bnb_config (a BitsAndBytesConfig) are defined earlier.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)
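
For context, bnb_config is a 4-bit BitsAndBytesConfig. A minimal sketch of what it looks like (the exact values here are illustrative; the full config is in the attached script):

import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit QLoRA config; bnb_4bit_quant_storage sets the dtype that
# FSDP uses to shard the packed 4-bit weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)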

However, when I use FSDP + QLoRA across 2 p4de.24xlarge instances, model loading goes OOM on CPU.
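
For reference, a minimal sketch of the FSDP setup I mean, assuming Accelerate's FullyShardedDataParallelPlugin on a recent Accelerate release (my actual launcher config is in the attached zip):

from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Sketch only; run under `accelerate launch` (or torchrun) across both nodes.
# cpu_ram_efficient_loading is meant to let only local rank 0 read the
# checkpoint into CPU RAM; sync_module_states then broadcasts it to the rest.
fsdp_plugin = FullyShardedDataParallelPlugin(
    cpu_ram_efficient_loading=True,
    sync_module_states=True,
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)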

Can anyone please share some insights? I have been looking at the from_pretrained method's code here and here. Could I get clarification on the following questions? Many thanks.

  1. For FSDP + QLoRA, during model loading, please confirm whether my understanding is correct:

    • If the model is quantized, it is loaded on GPU and then cast to CPU, because of the is_quantized check in this line and this comment.
    • If the model is not quantized, it is loaded directly onto CPU.
  2. The OOM happens on CPU, since I didn't see any "not enough CUDA memory" error. So for a quantized model, when it is cast to CPU, does only rank 0 do that, or does every rank cast its own copy to CPU, making CPU memory explode? The same question applies to loading a non-quantized model. (See the meta-device sketch after this list.)

  3. For a quantized model, if it is first loaded onto GPU, do all GPUs load the model, or only rank 0?
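
To illustrate questions 2 and 3, here is a minimal, runnable sketch (gpt2 is purely a stand-in) of what I would expect the non-zero ranks to do: instantiate the model on the meta device, so no CPU or GPU memory is consumed until FSDP broadcasts the shards.

import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Build the architecture without materializing weights: every parameter lives
# on the meta device, so this rank allocates no CPU RAM for the checkpoint.
config = AutoConfig.from_pretrained("gpt2")
with torch.device("meta"):
    empty_model = AutoModelForCausalLM.from_config(config)

print(next(empty_model.parameters()).device)  # -> meta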

Who can help?

@SunMarc @ArthurZucker

Information

Tasks

Reproduction

Here is my code to reproduce the issue: Distributed-finetuning.zip

Expected behavior

Error-free model loading under FSDP + QLoRA.

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

matthewdouglas commented 2 months ago

This should be fixed with #32276. Related: #31577

ArthurZucker commented 1 month ago

Not stale, but the PR was reverted!