yuxuan2015 closed this issue 1 year ago
you should upgrade your libraries:
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
@jianzhnie
I directly downloaded the latest source code and put it under the project directory. In theory, the effect should be the same.
Does it require a specific version of Python?
I used 3.10
I have updated a demo in the example folder; you can try it again.
Traceback (most recent call last):
  File "/data1/Semantic_team/my-chatbot/Chinese-Guanaco/qlora_int4_finetune.py", line 917, in <module>
    train()
  File "/data1/Semantic_team/my-chatbot/Chinese-Guanaco/qlora_int4_finetune.py", line 728, in train
    model = get_accelerate_model(args, checkpoint_dir)
  File "/data1/Semantic_team/my-chatbot/Chinese-Guanaco/qlora_int4_finetune.py", line 318, in get_accelerate_model
    model = AutoModelForCausalLM.from_pretrained(
  File "/data1/Semantic_team/my-chatbot/Chinese-Guanaco/transformers/models/auto/auto_factory.py", line 490, in from_pretrained
    return model_class.from_pretrained(
  File "/data1/Semantic_team/my-chatbot/Chinese-Guanaco/transformers/modeling_utils.py", line 2765, in from_pretrained
    raise ValueError(
ValueError: You are using `device_map='auto'` on a 4bit loaded version of the model. To automatically compute the appropriate device map, you should upgrade your `accelerate` library, `pip install --upgrade accelerate` or install it from source to support fp4 auto device map calculation. You may encounter unexpected behavior, or pass your own device map

I have used the latest version of accelerate (and the source code from GitHub), but the error remains the same.
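As the error message itself suggests, an alternative to upgrading accelerate is to pass an explicit device_map instead of "auto", which skips the automatic device-map computation entirely. Below is a minimal sketch of that workaround; the model id and the BitsAndBytesConfig settings are illustrative assumptions, not values from this repo:

```python
# Workaround sketch: pass an explicit device_map instead of "auto".
# The model id and quantization settings below are illustrative.

# Place the entire model on GPU 0; use per-module entries for multi-GPU setups.
device_map = {"": 0}


def load_4bit_model(model_id: str):
    """Load a 4-bit quantized model with an explicit device map (sketch)."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    return AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        ),
        device_map=device_map,  # explicit map avoids the "auto" computation path
    )
```

Note that if you upgrade accelerate inside a running notebook or long-lived Python process, the old version can stay loaded in memory; restart the kernel/process after `pip install --upgrade accelerate` before retrying.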