facebookresearch / advprompter

Official implementation of AdvPrompter https://arxiv.org/abs/2404.16873

Runtime error when using Vicuna-13B as the prompter LLM #7

Open BXHQu opened 1 week ago

BXHQu commented 1 week ago

probability tensor contains either inf, nan or element < 0

```
Training epoch 0:   1%|█▋        | 1/78 [01:03<1:21:36, 63.60s/it]
Training (epochs):  0%|          | 0/10 [01:03<?, ?it/s]
Error executing job with overrides: ['target_llm=llama3_chat', 'target_llm.llm_params.model_name=Llama3-8B', 'target_llm.llm_params.checkpoint=llama3/Meta-Llama-3-8B-Instruct', 'train.q_params.num_chunks=4', 'train.q_params.num_beams=2', 'train.batch_size=4', 'train.prompter_optim_params.lr=1e-4']
Traceback (most recent call last):
  File "/root/autodl-tmp/advprompter/main.py", line 660, in main
    workspace.train()
  File "/root/autodl-tmp/advprompter/main.py", line 167, in train
    self.train_epoch()
  File "/root/autodl-tmp/advprompter/main.py", line 204, in train_epoch
    prompter_ar = self.prompter.generate_autoregressive(
  File "/root/autodl-tmp/advprompter/llm.py", line 193, in generate_autoregressive
    output = self.model.generate(
  File "/root/miniconda3/envs/py311advprompter/lib/python3.11/site-packages/peft/peft_model.py", line 568, in generate
    return self.get_base_model().generate(*args, **kwargs)
  File "/root/miniconda3/envs/py311advprompter/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/py311advprompter/lib/python3.11/site-packages/transformers/generation/utils.py", line 1575, in generate
    result = self._sample(
  File "/root/miniconda3/envs/py311advprompter/lib/python3.11/site-packages/transformers/generation/utils.py", line 2735, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
```

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
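For context, this RuntimeError is raised by `torch.multinomial` itself whenever the probability tensor it receives contains inf, nan, or negative entries, which usually means the model's logits became non-finite during training. A minimal repro, independent of AdvPrompter (assumes only PyTorch):

```python
# Minimal repro: torch.multinomial rejects probability tensors that
# contain inf, nan, or negative entries with exactly this RuntimeError.
import torch

probs = torch.tensor([[0.5, float("nan"), 0.5]])
try:
    torch.multinomial(probs, num_samples=1)
except RuntimeError as e:
    print(e)  # probability tensor contains either `inf`, `nan` or element < 0
```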


BXHQu commented 1 week ago

This is my training script:

```bash
python3 main.py --config-name=train \
    target_llm=llama3_chat \
    target_llm.llm_params.model_name=Llama3-8B \
    target_llm.llm_params.checkpoint=llama3/Meta-Llama-3-8B-Instruct \
    train.q_params.num_chunks=4 \
    train.q_params.num_beams=2 \
    train.batch_size=4 \
    train.prompter_optim_params.lr=1e-4
```

arman-z commented 1 week ago

Could you share the full log output somewhere? One quick thing to check is decreasing the learning rate, e.g. to 5e-5 or 1e-5.

BXHQu commented 1 week ago

> Could you share the full log output somewhere? One quick thing to check is decreasing the learning rate, e.g. to 5e-5 or 1e-5.

Thanks, I have solved this problem. But now I have another question: how do I save the tokenizer and config files? Or should I merge the adapter with the original model and save that?
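For reference, here is a minimal sketch of one way to do this with PEFT: merge the LoRA adapter into its base model via `merge_and_unload()` and save the result together with the tokenizer. The paths below are placeholders, not checkpoints shipped with this repo:

```python
# Sketch (not the official AdvPrompter export path): merge a LoRA adapter
# into its base model and save weights + tokenizer into one directory.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "path/to/base_model"       # placeholder: the prompter's base checkpoint
adapter_path = "path/to/lora_adapter"  # placeholder: the trained adapter checkpoint
save_dir = "path/to/merged_model"      # placeholder: output directory

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()  # folds the LoRA weights into the base weights

merged.save_pretrained(save_dir, safe_serialization=True, max_shard_size="10GB")
# Saving the tokenizer next to the weights covers the tokenizer/config question:
AutoTokenizer.from_pretrained(base_path).save_pretrained(save_dir)
```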

BXHQu commented 1 week ago

Merging the base model with the LoRA adapter fails with an error:

```
Your generation config was originally created from the model config, but the model config has changed since then. Unless you pass the generation_config argument to this model's generate calls, they will revert to the legacy behavior where the base generate parameterization is loaded from the model config instead. To avoid this behavior and this warning, we recommend you to overwrite the generation config model attribute before calling the model's save_pretrained, preferably also removing any generation kwargs from the model config. This warning will be raised to an exception in v4.41.
Traceback (most recent call last):
  File "/root/miniconda3/envs/internlm2024/lib/python3.11/site-packages/transformers/generation/configuration_utils.py", line 661, in save_pretrained
    raise ValueError(str([w.message for w in caught_warnings]))
ValueError: [UserWarning('do_sample is set to False. However, temperature is set to 0.9 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset temperature.'), UserWarning('do_sample is set to False. However, top_p is set to 0.6 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset top_p.')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/autodl-tmp/BoT_for_LLMJailbreaking/merage advprompter_model.py", line 31, in <module>
    merage_model(org_model_name_or_path=org_model_name_or_path, adapter_model_name_or_path=adapter_model_name_or_path, save_dir=save_dir)
  File "/root/autodl-tmp/BoT_for_LLMJailbreaking/merage advprompter_model.py", line 25, in merage_model
    new_base_model.save_pretrained(save_dir, safe_serialization=True, max_shard_size='10GB')
  File "/root/miniconda3/envs/internlm2024/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2447, in save_pretrained
    model_to_save.generation_config.save_pretrained(save_directory)
  File "/root/miniconda3/envs/internlm2024/lib/python3.11/site-packages/transformers/generation/configuration_utils.py", line 663, in save_pretrained
    raise ValueError(
ValueError: The generation config instance is invalid -- .validate() throws warnings and/or exceptions. Fix these issues to save the configuration.
```
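Based on the warnings in that traceback, one workaround (a sketch following the warning's own suggestion, not a fix from this repo) is to make the generation config self-consistent before saving:

```python
# The validator complains because do_sample=False while temperature/top_p are
# set. Either enable sampling, or reset those flags to their defaults, before
# calling save_pretrained(). `new_base_model` refers to the merged model from
# the user's (hypothetical) merge script above.
new_base_model.generation_config.do_sample = True
# Alternatively, keep do_sample=False and reset the sampling flags instead:
# new_base_model.generation_config.temperature = 1.0
# new_base_model.generation_config.top_p = 1.0
new_base_model.save_pretrained(save_dir, safe_serialization=True, max_shard_size="10GB")
```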