beeevita / EvoPrompt

Official implementation of the paper *Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers*

Tensor shape mismatch problem #7

Open · wertyhb-sns opened 1 month ago

wertyhb-sns commented 1 month ago

```
2/alpaca/all/ga/bd10_top10_para_topk_init/topk/davinci/seed15 --dev_file ./data/cls/sst2/seed15/dev.txt
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
/home/ubun/anaconda3/envs/fgvc_pim_master_wej/lib/python3.12/site-packages/transformers/generation/configuration_utils.py:494: UserWarning: `pad_token_id` should be positive but got -1. This will cause errors when batch generating, if there is padding. Please set `pas_token_id` explicitly by `model.generation_config.pad_token_id=PAD_TOKEN_ID` to avoid errors in generation, and ensure your `input_ids` input does not have negative values.
  warnings.warn(
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/media/ubun/Student/wej/jinhuasuanfa/run.py", line 26, in <module>
    run(args)
  File "/media/ubun/Student/wej/jinhuasuanfa/run.py", line 13, in run
    evaluator = task2evaluator[args.task](args)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/ubun/Student/wej/jinhuasuanfa/evaluator.py", line 300, in __init__
    super(CLSEvaluator, self).__init__(args)
  File "/media/ubun/Student/wej/jinhuasuanfa/evaluator.py", line 63, in __init__
    self.model = LlamaForCausalLM.from_pretrained(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubun/anaconda3/envs/fgvc_pim_master_wej/lib/python3.12/site-packages/transformers/modeling_utils.py", line 3754, in from_pretrained
    ) = cls._load_pretrained_model(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubun/anaconda3/envs/fgvc_pim_master_wej/lib/python3.12/site-packages/transformers/modeling_utils.py", line 4214, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubun/anaconda3/envs/fgvc_pim_master_wej/lib/python3.12/site-packages/transformers/modeling_utils.py", line 887, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/ubun/anaconda3/envs/fgvc_pim_master_wej/lib/python3.12/site-packages/accelerate/utils/modeling.py", line 358, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([32001, 4096]) in "weight" (which has shape torch.Size([32000, 5120])), this look incorrect.
```

Why is this happening? I downloaded the model from the official source and didn't change anything.

beeevita commented 1 month ago

Is this a LLaMA-family model? If so, the pretrained weight files may have been corrupted during download or saving, which would cause the shape mismatch when the model is loaded.
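
For reference, the two shapes in the error message are themselves a clue: 4096 and 5120 are the hidden sizes of the 7B and 13B LLaMA variants, and 32001 vs. 32000 suggests a vocabulary with one extra (e.g. pad) token. So it is worth checking that the weight shards, `config.json`, and tokenizer in your model directory all come from the same model. Below is a minimal diagnostic sketch, assuming the usual Hugging Face `.bin` shard layout; `MODEL_DIR` is a placeholder for your local checkpoint path:

```python
import json
import os

import torch

MODEL_DIR = "/path/to/your/llama-checkpoint"  # placeholder: your local model directory

# Read the sizes that the model's own config promises.
with open(os.path.join(MODEL_DIR, "config.json")) as f:
    config = json.load(f)
expected = (config["vocab_size"], config["hidden_size"])
print("config expects embed_tokens.weight of shape", expected)

# Walk the .bin shards and print the actual shape of the embedding weight.
# If it disagrees with the config, the shards and config come from different
# models (e.g. 7B weights with a 13B config) or a shard is corrupted.
for name in sorted(os.listdir(MODEL_DIR)):
    if not name.endswith(".bin"):
        continue
    state_dict = torch.load(os.path.join(MODEL_DIR, name), map_location="cpu")
    for key, tensor in state_dict.items():
        if key.endswith("embed_tokens.weight"):
            print(name, "->", key, tuple(tensor.shape))
```

If the printed shape disagrees with what the config expects, re-download the model directory so that every file comes from the same snapshot.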