Closed CaiJichang212 closed 1 month ago
How did you reply so fast?
Buggy code location:
When I run run_knowedit_llama2.py with Llama-2 & ROME, there is no bug. Is Qwen1.5 the cause?
Hello, run_knowedit_llama2.py is prepared to run Llama-2 on the KnowEdit dataset. If you want to run Qwen, I recommend you use the functions inside edit.py and load the dataset using KnowEditDataset.
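A minimal sketch of that suggestion, assuming EasyEdit's `BaseEditor` / `ROMEHyperParams` interface; the hparams path `./hparams/ROME/qwen-7b.yaml` and the KnowEdit field names (`prompt`, `target_new`, `subject`) are assumptions, so adjust them to your local config and dataset copy:

```python
def knowedit_record_to_edit_kwargs(record):
    """Map one KnowEdit-style record to the keyword arguments that
    BaseEditor.edit takes (prompts / target_new / subject as lists).
    Field names follow the KnowEdit schema; adjust if yours differ."""
    return {
        "prompts": [record["prompt"]],
        "target_new": [record["target_new"]],
        "subject": [record["subject"]],
    }


def run_rome_edit_on_qwen(records, hparams_path="./hparams/ROME/qwen-7b.yaml"):
    """Sketch of driving the edit through BaseEditor instead of
    run_knowedit_llama2.py. hparams_path is an assumption -- point it
    at your own ROME config for Qwen1.5."""
    # Heavy imports kept inside the function so the helper above stays testable.
    from easyeditor import BaseEditor, ROMEHyperParams

    hparams = ROMEHyperParams.from_hparams(hparams_path)
    editor = BaseEditor.from_hparams(hparams)
    for record in records:
        metrics, edited_model, _ = editor.edit(
            **knowedit_record_to_edit_kwargs(record)
        )
        print(metrics)
```

This sidesteps the Llama-2-specific loading path in run_knowedit_llama2.py entirely, which is why the maintainer recommends it for Qwen.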
Thanks for your response; however, I don't think this bug has anything to do with the dataset.
```python
elif 'qwen' in self.model_name.lower():
    # self.model = AutoModelForCausalLM.from_pretrained(self.model_name, fp32=False, trust_remote_code=True, device_map=device_map)
    # fix cjc@0603: TypeError: __init__() got an unexpected keyword argument 'fp32' (Qwen1.5)
    self.model = AutoModel.from_pretrained(self.model_name, trust_remote_code=True, torch_dtype=torch_dtype, device_map=device_map)
    self.tok = AutoTokenizer.from_pretrained(self.model_name, eos_token='<|endoftext|>', pad_token='<|endoftext|>', unk_token='<|endoftext|>', trust_remote_code=True)
```
Whether I run run_knowedit_llama2.py or any other script, this model & tokenizer loading code is executed.
Looking forward to your reply.
When I run run_knowedit_llama2.py with Qwen1.5 & ROME: