jianzhnie / LLamaTuner

Easy and efficient finetuning of LLMs. (Supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon) Efficient quantized training and deployment of large models.
https://jianzhnie.github.io/llmtech/
Apache License 2.0

baichuan-7B: AttributeError: 'CastOutputToFloat' object has no attribute 'weight' #39

Closed: franciszhang92 closed this issue 1 year ago

franciszhang92 commented 1 year ago

```
chatllms - INFO - Adding special tokens.
Using pad_token, but it is not set yet.
Traceback (most recent call last):
  File "/content/drive/MyDrive/Efficient-Tuning-LLMs/train_qlora.py", line 156, in <module>
    main()
  File "/content/drive/MyDrive/Efficient-Tuning-LLMs/train_qlora.py", line 80, in main
    add_special_tokens_if_missing(tokenizer, model)
  File "/content/drive/MyDrive/Efficient-Tuning-LLMs/chatllms/utils/model_utils.py", line 47, in add_special_tokens_if_missing
    smart_tokenizer_and_embedding_resize(special_tokens_dict, tokenizer,
  File "/content/drive/MyDrive/Efficient-Tuning-LLMs/chatllms/utils/model_utils.py", line 77, in smart_tokenizer_and_embedding_resize
    model.resize_token_embeddings(len(tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 1395, in resize_token_embeddings
    model_embeds = self._resize_token_embeddings(new_num_tokens)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 1416, in _resize_token_embeddings
    new_lm_head = self._get_resized_lm_head(old_lm_head, new_num_tokens)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 1520, in _get_resized_lm_head
    old_lm_head.weight.size() if not transposed else old_lm_head.weight.t().size()
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CastOutputToFloat' object has no attribute 'weight'
```

Could someone explain what is causing this problem?
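
The trace points at the QLoRA model-preparation step: the early PEFT int8/4-bit training examples replace `model.lm_head` with a `CastOutputToFloat(nn.Sequential)` wrapper so that logits come out in fp32, and `resize_token_embeddings` then fails because the wrapper, unlike the underlying `nn.Linear`, exposes no `.weight` attribute. A minimal sketch of that failure mode (the 4096/32000 shapes here are illustrative stand-ins, not baichuan-7B's actual config):

```python
import torch
import torch.nn as nn

class CastOutputToFloat(nn.Sequential):
    """Wrapper used in PEFT int8/QLoRA examples: casts the LM head output to fp32."""
    def forward(self, x):
        return super().forward(x).to(torch.float32)

lm_head = nn.Linear(4096, 32000, bias=False)  # a stand-in LM head
wrapped = CastOutputToFloat(lm_head)          # i.e. model.lm_head = CastOutputToFloat(model.lm_head)

print(hasattr(lm_head, "weight"))   # True  -- the inner nn.Linear owns the weight
print(hasattr(wrapped, "weight"))   # False -- the Sequential wrapper does not, so
                                    # resize_token_embeddings raises AttributeError
```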

jianzhnie commented 1 year ago

This bug has been fixed.
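
The actual patch is not shown in this thread. A common fix for this kind of ordering problem, sketched here under the assumption that the training script wraps `lm_head` before adding special tokens, is to resize the embeddings while `lm_head` is still a plain `nn.Linear` and apply the cast only afterwards:

```python
# Hypothetical ordering fix (names taken from the traceback above):
# resize first, while model.lm_head is still a plain nn.Linear ...
add_special_tokens_if_missing(tokenizer, model)  # internally calls model.resize_token_embeddings(len(tokenizer))
# ... and only then wrap the head so its logits are cast to fp32
model.lm_head = CastOutputToFloat(model.lm_head)
```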