johnsmith0031 / alpaca_lora_4bit

MIT License

see following error when running python finetune.py #87

Closed wesleysanjose closed 1 year ago

wesleysanjose commented 1 year ago

```
Loading Model ...
Traceback (most recent call last):
  File "finetune.py", line 60, in <module>
    model, tokenizer = load_llama_model_4bit_low_ram(ft_config.llama_q4_config_dir,
  File "/home/missa/dev/4bit_alpaca_lora/autograd_4bit.py", line 202, in load_llama_model_4bit_low_ram
    model = accelerate.load_checkpoint_and_dispatch(
  File "/home/missa/miniconda3/envs/jsllama4b/lib/python3.8/site-packages/accelerate/big_modeling.py", line 479, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "/home/missa/miniconda3/envs/jsllama4b/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 946, in load_checkpoint_in_model
    set_module_tensor_to_device(model, param_name, param_device, value=param, dtype=dtype)
  File "/home/missa/miniconda3/envs/jsllama4b/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 131, in set_module_tensor_to_device
    raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.")
ValueError: Autograd4bitQuantLinear() does not have a parameter or a buffer named zeros.
```

Not sure why this is happening.
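For context, the failure happens inside accelerate's `set_module_tensor_to_device`, which refuses to write a checkpoint tensor into a module that never registered a parameter or buffer under that name. Below is a simplified pure-Python sketch of that guard (not accelerate's actual source); `FakeQuantLinear` and its attribute names are hypothetical stand-ins for `Autograd4bitQuantLinear`:

```python
# Simplified sketch of the check that raises this ValueError.
# Assumption: the real accelerate code walks nn.Module internals;
# here we mimic only the name-lookup-then-raise behavior.

class FakeQuantLinear:
    """Hypothetical stand-in: registers qweight/scales/qzeros,
    but nothing named plain 'zeros'."""
    def __init__(self):
        self._parameters = {"qweight": None, "scales": None, "qzeros": None}
        self._buffers = {}

def set_module_tensor(module, tensor_name, value):
    # Mirrors the guard in accelerate.utils.modeling.set_module_tensor_to_device:
    # the target name must already exist as a parameter or buffer.
    if tensor_name not in module._parameters and tensor_name not in module._buffers:
        raise ValueError(
            f"{module.__class__.__name__}() does not have a parameter "
            f"or a buffer named {tensor_name}."
        )
    module._parameters[tensor_name] = value

layer = FakeQuantLinear()
set_module_tensor(layer, "qzeros", 0)      # fine: name is registered
try:
    set_module_tensor(layer, "zeros", 0)   # checkpoint key the module never declared
except ValueError as e:
    print(e)
```

So the error usually signals a mismatch between the tensor names stored in the 4-bit checkpoint and the names the loading code registers on the quantized linear layer; a common cause reported for this family of errors is a checkpoint produced by a different GPTQ conversion version than the one `autograd_4bit.py` expects (`zeros` vs `qzeros` style naming), though that is an assumption worth verifying against your checkpoint's keys.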