huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Accelerate + DeepSpeed #2821

Closed ByungKwanLee closed 2 months ago

ByungKwanLee commented 3 months ago

System Info

All packages are at their latest versions.

Information

Tasks

Reproduction

I created a DeepSpeed config using `accelerate config`.
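For reference, a config produced by `accelerate config` for DeepSpeed ZeRO-2 typically looks like the sketch below. The values here are illustrative assumptions, not the reporter's actual config:

```yaml
# Example ~/.cache/huggingface/accelerate/default_config.yaml (illustrative)
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 2                    # 2 or 3, matching the stages tried below
  gradient_accumulation_steps: 1
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: false
mixed_precision: bf16
num_machines: 1
num_processes: 2
```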

I have tried to train a 4-bit quantized model (bitsandbytes) using DeepSpeed ZeRO stage 2 or 3 (I tried many configurations for each of stages 2 and 3).

However, this error always occurs: "ValueError: .to is not supported for 4-bit or 8-bit models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype"
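For context, this error comes from a guard on quantized models: the 4-bit loading path makes `.to()` raise, and DeepSpeed's engine setup moves the model to a device internally, tripping that guard. A minimal sketch of the pattern (hypothetical class, not the actual transformers implementation):

```python
# Sketch of the guard behind the error (hypothetical names): quantized
# models reject .to() because bitsandbytes has already placed and casted
# the weights; DeepSpeed's setup performs an equivalent device move.

class QuantizedModel:
    def __init__(self):
        self.is_quantized = True  # set by the 4-bit loading path

    def to(self, *args, **kwargs):
        if self.is_quantized:
            raise ValueError(
                "`.to` is not supported for 4-bit or 8-bit models. Please use "
                "the model as it is, since the model has already been set to "
                "the correct devices and casted to the correct dtype"
            )
        return self

model = QuantizedModel()
try:
    model.to("cuda")  # DeepSpeed does an equivalent move during engine init
except ValueError as e:
    print("raised:", e)
```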

Are Accelerate's DeepSpeed config and bitsandbytes incompatible?

How can this be solved?

Expected behavior

I want to train a 4-bit quantized model (bitsandbytes) using DeepSpeed ZeRO stage 2 or 3.

muellerzr commented 3 months ago

Can you give us a full reproducer? I believe that setup should work (I'll verify 100% shortly), and the problem may be in your code, but first it'd be good to have a full reproducer.

muellerzr commented 3 months ago

You can check with the docs here: https://huggingface.co/docs/peft/accelerate/deepspeed#compatibility-with-bitsandbytes-quantization--lora

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.