Alpha-VLLM / LLaMA2-Accessory


Finetuning with quant (Integer parameters are unsupported) #42

Closed · qkrtnskfk23 closed this issue 1 year ago

qkrtnskfk23 commented 1 year ago

When I run the finetuning code with --quant for efficient training, I get the error "Integer parameters are unsupported" from torch/distributed/fsdp/flat_param.py, line 435, in _init_flat_param.

Is there any solution for this issue?

Thanks.

kriskrisliu commented 1 year ago

--llama_type should be "llama_peft" when using quantization. Please see the documentation, which shows examples of how to run with quantization: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/docs/finetune.md#quantization-assisted-parameter-efficient-fine-tuning

See the quantization finetuning script: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/exps/finetune/sg/alpaca_llamaPeft_normBias_QF.sh
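For reference, here is a minimal sketch of such a launch command, modeled on the linked example script. The paths and the output directory are placeholders, and flag names other than --quant and --llama_type are assumptions based on the repo's finetuning scripts:

```bash
#!/bin/bash
# Sketch of a quantization-assisted PEFT finetuning launch for
# LLaMA2-Accessory. All paths are placeholders; flags other than
# --quant and --llama_type are assumed from the example script.
#
# Note: --llama_type must be "llama_peft" (not the default "llama").
# With --quant, the base weights are stored as integer tensors, which
# FSDP cannot flatten as parameters, hence the
# "Integer parameters are unsupported" error.

pretrained_path=/path/to/llama2/checkpoint        # placeholder
llama_config=/path/to/params.json                 # placeholder
tokenizer_path=/path/to/tokenizer.model           # placeholder
data_config=/path/to/alpaca.yaml                  # placeholder

torchrun --nproc_per_node=1 main_finetune.py \
  --llama_type llama_peft \
  --quant \
  --pretrained_path "$pretrained_path" \
  --llama_config "$llama_config" \
  --tokenizer_path "$tokenizer_path" \
  --data_config "$data_config" \
  --output_dir output/alpaca_llamaPeft_QF        # placeholder
```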

qkrtnskfk23 commented 1 year ago

Thank you! The problem is solved!!