(.env) (base) [root@localhost finetune]# python3 test.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/opt/ai/WisdomShell/CodeShell-7B-Chat-int4/finetune/test.py", line 8, in <module>
    model = AutoModelForCausalLM.from_pretrained("output_models", trust_remote_code=True, local_files_only=True).to(device)
  File "/root/anaconda3/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 560, in from_pretrained
    return model_class.from_pretrained(
  File "/root/.cache/huggingface/modules/transformers_modules/output_models/modeling_codeshell.py", line 1045, in from_pretrained
    model = load_state_dict_for_qunantied_model(model, state_dict)
  File "/root/.cache/huggingface/modules/transformers_modules/output_models/quantizer.py", line 257, in load_state_dict_for_qunantied_model
    set_value(model, name, state_dict, is_4bit)
  File "/root/.cache/huggingface/modules/transformers_modules/output_models/quantizer.py", line 212, in set_value
    setattr(parent, keys[-1], state_dict[name])
  File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1715, in __setattr__
    raise TypeError(f"cannot assign '{torch.typename(value)}' as parameter '{name}' "
TypeError: cannot assign 'torch.BFloat16Tensor' as parameter 'weight' (torch.nn.Parameter or None expected)
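If I read the traceback correctly, torch.nn.Module.__setattr__ refuses to overwrite an attribute that is already registered as an nn.Parameter with a plain tensor, and that is exactly what set_value in quantizer.py does via setattr. The standalone snippet below (my own illustration, not code from CodeShell) reproduces the same TypeError:

import torch
import torch.nn as nn

layer = nn.Linear(4, 4)  # layer.weight is a registered nn.Parameter
# Assigning a plain bfloat16 tensor over the registered Parameter raises:
# TypeError: cannot assign 'torch.BFloat16Tensor' as parameter 'weight'
#            (torch.nn.Parameter or None expected)
layer.weight = torch.zeros(4, 4, dtype=torch.bfloat16)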
The fine-tuning data I prepared is as follows:
I ran
and obtained the fine-tuned model. I then prepared a test.py file that points the model at my freshly fine-tuned CodeShell-7B-Chat-int4 model (i.e., output_models):
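For reference, test.py boils down to the following (a minimal sketch reconstructed from the traceback above; only the from_pretrained call is verbatim from test.py line 8, while the imports, tokenizer, and device lines are my assumptions):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed boilerplate; only the model line below appears in the traceback.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("output_models", trust_remote_code=True, local_files_only=True)

# Verbatim from the traceback (test.py, line 8):
model = AutoModelForCausalLM.from_pretrained("output_models", trust_remote_code=True, local_files_only=True).to(device)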
I ran
and hit the error shown at the top of this post.
If I change the model specified in test.py to CodeShell-7B-Chat-int and rerun it, everything works fine.
Where did things go wrong? My CUDA version is 11.8, torch is 2.1.0, TensorFlow is 2.13.0, and Python is 3.9; the system is CentOS 7 x86_64 with kernel 3.10. Any help would be greatly appreciated!