tloen / alpaca-lora

Instruct-tune LLaMA on consumer hardware
Apache License 2.0

KeyError: 'base_model.model.model.layers.18.input_layernorm.weight' #321

Open davidenitti opened 1 year ago

davidenitti commented 1 year ago

I have this error:

    outputs = self.base_model.generate(**kwargs)
  File "/home/administrator/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/administrator/.local/lib/python3.8/site-packages/transformers/generation/utils.py", line 1524, in generate
    return self.beam_search(
  File "/home/administrator/.local/lib/python3.8/site-packages/transformers/generation/utils.py", line 2810, in beam_search
    outputs = self(
  File "/home/administrator/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 687, in forward
    outputs = self.model(
  File "/home/administrator/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 577, in forward
    layer_outputs = decoder_layer(
  File "/home/administrator/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 289, in forward
    hidden_states = self.input_layernorm(hidden_states)
  File "/home/administrator/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs) 
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/hooks.py", line 280, in pre_forward
    set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name])
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/utils/offload.py", line 123, in __getitem__
    return self.dataset[f"{self.prefix}{key}"]
  File "/home/administrator/.local/lib/python3.8/site-packages/accelerate/utils/offload.py", line 170, in __getitem__
    weight_info = self.index[key]
KeyError: 'base_model.model.model.layers.18.input_layernorm.weight'

I'm using slightly modified code, just to offload weights to disk and limit GPU memory, but the changes shouldn't be the source of the problem:

diff --git a/generate.py b/generate.py
index 4e1a9d7..9a99e3e 100644
--- a/generate.py
+++ b/generate.py
@@ -38,17 +38,23 @@ def main(

     prompter = Prompter(prompt_template) 
     tokenizer = LlamaTokenizer.from_pretrained(base_model)
+    offload_folder = "/home/administrator/cache"
     if device == "cuda":
         model = LlamaForCausalLM.from_pretrained(
             base_model,
             load_in_8bit=load_8bit,
             torch_dtype=torch.float16,   
             device_map="auto",
+            offload_folder=offload_folder,
+            max_memory={0:"4GiB",1:"4GiB"}
         )
         model = PeftModel.from_pretrained(
             model,
             lora_weights,
             torch_dtype=torch.float16,   
+            device_map="auto",
+            offload_folder=offload_folder,
+            max_memory={0:"4GiB",1:"4GiB"}
         )
davidenitti commented 1 year ago

I was able to fix this on one PC by upgrading transformers and peft from git, but on another server I didn't manage to fix it even after upgrading the same packages. I think you also need to clean the cached weights and the directory used for offload_folder, but even so I couldn't fix it on that server.
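
A minimal sketch of that cleanup, assuming the offload_folder path from the diff above (the "/home/administrator/cache" path is just what I used there, not a requirement):

import os
import shutil

offload_folder = "/home/administrator/cache"  # path used in the diff above

# Remove any stale offload files (*.dat plus the index) left by a previous run,
# then recreate an empty folder before loading the model again.
if os.path.isdir(offload_folder):
    shutil.rmtree(offload_folder)
os.makedirs(offload_folder, exist_ok=True)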

cologne-12 commented 1 year ago

@davidenitti I tried the update you mentioned, i.e. installing from git, and I'm still stuck on this error... Do you have any way out?

davidenitti commented 1 year ago

not yet

mc0ps commented 1 year ago

I got a similar error. I think it stems from the loader getting confused when there are already files in the offload_folder. Try creating a new, empty offload folder unique to this run and see if that helps.
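
A minimal sketch of that idea, reusing the loading code from the diff above (load_in_8bit left out for brevity; the model names are assumptions based on this repo's defaults, swap in whatever you actually use):

import tempfile
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Fresh, guaranteed-empty directory so no stale offload files can be picked up.
offload_folder = tempfile.mkdtemp(prefix="alpaca_lora_offload_")

base_model = "decapoda-research/llama-7b-hf"  # assumption: base model from the README
lora_weights = "tloen/alpaca-lora-7b"         # assumption: default lora_weights in generate.py

model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder=offload_folder,
    max_memory={0: "4GiB", 1: "4GiB"},
)
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder=offload_folder,
    max_memory={0: "4GiB", 1: "4GiB"},
)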

dinuthomas commented 1 year ago

I guess the code is looking for keys of the form "base_model.model.model.layers.*", while the offload files are named "model.layers.*.dat".

Somewhere, the key prefix ("base_model.model.") is not handled properly in the offload case. I tried adjusting the code in accelerate/utils/offload.py to strip the prefix; execution progresses a little further than my initial point of error, but then stops at another one:

KeyError: 'model.layers.16.self_attn.q_proj.lora_A.weight'.
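A toy illustration of the mismatch (hypothetical keys and file names, not the real accelerate internals):

# Keys as they might appear in the offload index written for the bare LLaMA model.
offload_index = {
    "model.layers.18.input_layernorm.weight": "model.layers.18.input_layernorm.weight.dat",
}

# Key as the PEFT-wrapped model requests it, with the extra "base_model.model." prefix.
requested = "base_model.model.model.layers.18.input_layernorm.weight"

# offload_index[requested] raises KeyError, which is the error in this issue.
# Stripping the PEFT prefix before the lookup makes this particular key resolve:
prefix = "base_model.model."
lookup = requested[len(prefix):] if requested.startswith(prefix) else requested
print(offload_index[lookup])  # -> "model.layers.18.input_layernorm.weight.dat"

# But LoRA-only tensors such as "model.layers.16.self_attn.q_proj.lora_A.weight"
# have no counterpart in the base model's offload index at all, so the next
# KeyError above still occurs.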

I hope there is a fix for this soon, so that we can run LoRA models on smaller GPUs. Or please let us know if there is any user-level configuration that avoids this error.

Thank you

JackChen890311 commented 11 months ago

Same problem when using a different package. This is so annoying.