System Info
Reproduction
When I run the code, the parameters are first fully loaded onto each GPU and only then sharded. But when I try ZeRO-3 + LoRA instead of ZeRO-3 + QLoRA (i.e. just remove bnb_config = BitsAndBytesConfig(...)), it works as expected: the parameters are sharded first and then loaded onto each GPU. So I am unsure whether bitsandbytes does not support zero3_init, or whether there is an error in my code. I would really appreciate any help!
Here is my code, which follows https://huggingface.co/docs/peft/accelerate/deepspeed#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus and https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/deepspeed#constructing-massive-models:
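Since the snippet itself is attached separately, here is a minimal sketch of the loading pattern it uses (model name, dtypes, and LoRA hyperparameters are assumptions; the real script follows the PEFT docs linked above):

```python
# Minimal sketch of the loading pattern described above (assumptions:
# model name, dtypes, LoRA targets) -- not my exact script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Removing quantization_config=bnb_config below is the only change that
# makes zero3_init work for me (parameters shard first, then load).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```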
Here is my accelerate config:
Here is my DeepSpeed ZeRO-3 config:
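The config file itself is attached; for context, a typical ZeRO-3 config looks roughly like the sketch below, written as a Python dict that could be passed to TrainingArguments(deepspeed=...). All values are assumptions, not my exact file; zero3_init itself is toggled by zero3_init_flag: true in the accelerate config.

```python
# Rough sketch of a ZeRO-3 config (values are assumptions, not my exact file).
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
```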
Here is my launch command:
One thing I want to note: in the QLoRA case, the parameters are only loaded into CPU memory during from_pretrained(), while GPU memory rises by less than 1 GB (I would also like to know what this is; maybe the quantization constants?). Then, when trainer.train() runs, the parameters are fully loaded onto each GPU and only afterwards sharded.
I use the following code, with torch.cuda.memory_reserved() added into the source code, to measure memory usage. The reserved memory scales with the model size (about 0.58 GB for llama2_7B and about 0.80 GB for llama2_13B).
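For reference, this is roughly how the probe looks (a sketch; the helper name and call sites are mine, inserted around from_pretrained() in the source):

```python
import torch

def log_reserved(tag: str) -> None:
    # Hypothetical helper: print currently reserved GPU memory.
    gb = torch.cuda.memory_reserved() / 1024**3
    print(f"[{tag}] torch.cuda.memory_reserved() = {gb:.2f} GB")

log_reserved("before from_pretrained")
# model = AutoModelForCausalLM.from_pretrained(...)
log_reserved("after from_pretrained")
```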
I also use the code above to check whether zero_init ran successfully. With the entire code above, only the "extra memory" is loaded onto each GPU, as in the picture below:

When I remove bnb_config, I get the correct result (the 7B model sharded across 8 GPUs):

So the situation is: with or without LoRA, bnb leads to an incorrect result under zero_init, while without bnb it succeeds. It therefore looks like bnb prevents zero_init.

@younesbelkada @matthewdouglas @Titus-von-Koeller
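As an additional cross-check (a sketch, assuming DeepSpeed's usual zero.Init behavior): after a successful zero3_init, each parameter is replaced by an empty placeholder and DeepSpeed records the original metadata on it, so one can inspect a parameter directly instead of watching memory:

```python
# Sketch: check whether zero3_init actually partitioned the parameters.
# Under ZeRO-3 init, DeepSpeed empties param.data (numel() == 0) and adds
# attributes such as ds_id and ds_shape holding the original metadata.
def check_zero3_init(model) -> None:
    name, param = next(iter(model.named_parameters()))
    partitioned = hasattr(param, "ds_id") and param.numel() == 0
    print(f"{name}: partitioned={partitioned}, "
          f"local shape={tuple(param.shape)}, "
          f"full shape={getattr(param, 'ds_shape', None)}")
```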
Expected behavior
Successfully using zero3_init to construct llama2: parameters are sharded first and then loaded onto the GPUs.