artidoro / qlora

QLoRA: Efficient Finetuning of Quantized LLMs
https://arxiv.org/abs/2305.14314
MIT License

Current git version of accelerate breaks QLoRA #190

Open BugReporterZ opened 1 year ago

BugReporterZ commented 1 year ago

Right now, requirements.txt pins accelerate @ git+https://github.com/huggingface/accelerate.git, but the current git HEAD of accelerate breaks QLoRA functionality.

Using the latest stable release of accelerate instead (released 3 weeks ago), by changing that line to accelerate==0.20.3 and then running (if the git version was already installed):

pip install --force-reinstall -r requirements.txt

makes it work again.
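The workaround above can be scripted. A minimal sketch, assuming GNU sed (on macOS/BSD, use sed -i '' instead) and that the accelerate line appears in requirements.txt exactly as in the repo; the printf step only creates a one-line stand-in file for demonstration:

```shell
# Stand-in for the repo's requirements.txt (the real file has more entries)
printf 'accelerate @ git+https://github.com/huggingface/accelerate.git\n' > requirements.txt

# Swap the git dependency for the pinned stable release
sed -i 's|accelerate @ git+.*|accelerate==0.20.3|' requirements.txt

cat requirements.txt
```

After editing the real requirements.txt this way, run pip install --force-reinstall -r requirements.txt so pip replaces the already-installed git build with the pinned release.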

Nazzaroth2 commented 1 year ago

Just in case anyone is searching for this problem, this solved my error: ValueError: .to is not supported for 4-bit or 8-bit models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype.

Reverting accelerate allowed me to load and train models again.

gptzerozero commented 1 year ago

@Nazzaroth2 Do you mean .to is not supported for quantized models like GPTQ and GGML ones? Or was it breaking during QLoRA finetuning?

Nazzaroth2 commented 1 year ago

@gptzerozero For me, the trainer would not start when using newer versions of accelerate for QLoRA. How it interacts with the other quantization methods, I don't know.