uukuguy / multi_loras

Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer based on user queries.
MIT License

AttributeError: 'Linear' object has no attribute 'lora_A' #3

Open flozi00 opened 9 months ago

flozi00 commented 9 months ago
python3 -m multi_loras \
    extract_lora \
    --base_model_name_or_path "mistralai/Mistral-7B-v0.1" \
    --tuned_model_name_or_path "HuggingFaceH4/zephyr-7b-beta" \
    --save_path "./mistral-zephyr-lora" \
    --fp16 \
    --bits 16 \
    --lora_r 128

It fails with:

Run SVD:   0%| | 1/673 [00:03<43:49,  3.91s/it, layer=model.layers.0.self_attn.q_proj.lora_A.default, shape=torch.Size([
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/multi_loras/__main__.py", line 48, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/multi_loras/__main__.py", line 43, in main
    cmd_func(args)
  File "/usr/local/lib/python3.8/dist-packages/multi_loras/extract_lora.py", line 137, in do_extract_lora
    assert lora_base.lora_A.default.weight.shape == Vh.shape
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Linear' object has no attribute 'lora_A'

This does not depend on quantization or precision settings.

uukuguy commented 9 months ago

Try --bits 4 or 8.

flozi00 commented 9 months ago

It does not depend on these params; still the same error, but with a 4-bit Linear instead of Linear.

uukuguy commented 9 months ago

Use --bf16 instead of --fp16. The torch_dtype of both Mistral-7B-v0.1 and zephyr-7b-beta is "bfloat16"; check the model's config.json file.
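
For a quick check, transformers' AutoConfig (which reads the model's config.json) can print the dtype directly; a minimal sketch:

from transformers import AutoConfig

for name in ("mistralai/Mistral-7B-v0.1", "HuggingFaceH4/zephyr-7b-beta"):
    cfg = AutoConfig.from_pretrained(name)
    # Both models declare torch_dtype "bfloat16" in config.json, hence --bf16.
    print(name, cfg.torch_dtype)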

flozi00 commented 9 months ago

Same error, the attribute is still not found. Which versions are you using?

flozi00 commented 9 months ago

I did some experiments and can see that exactly 1 in 3 linear layers is actually a LoRA layer. I added a filter and now it's running; I still need to check whether the result can be loaded successfully too.
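
An illustration of what such a filter might look like (the helper name and its use are mine, not the exact change):

import torch.nn as nn

def is_lora_module(module: nn.Module) -> bool:
    # PEFT only wraps the configured target modules (e.g. q_proj) with LoRA;
    # the remaining plain nn.Linear layers have no lora_A/lora_B and are the
    # ones raising the AttributeError above, so they can simply be skipped.
    return hasattr(module, "lora_A") and hasattr(module, "lora_B")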

Could you maybe add a Gradio demo on Hugging Face Spaces, or a Docker container where the current script runs successfully?

uukuguy commented 9 months ago

I have been using the GitHub dev branch all along and syncing it to the main branch promptly; the PyPI releases may lag behind. Let me check whether this is indeed a version issue. A Docker version will be added soon. Thank you for your suggestions and support.

uukuguy commented 9 months ago

I noticed that the correct number of model modules should be 224, not the 673 reported when the error occurs. I am investigating possible causes; it does not seem to be a bug in the program itself, but rather something related to how the model is loaded.
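
One way to verify this on the loaded model (a diagnostic sketch, not code from the repo):

import torch.nn as nn

def count_modules(model):
    # Total Linear layers vs. layers that actually carry LoRA adapters; the
    # latter is presumably what the extraction loop should iterate over.
    total_linear = sum(isinstance(m, nn.Linear) for _, m in model.named_modules())
    with_lora = sum(hasattr(m, "lora_A") for _, m in model.named_modules())
    return total_linear, with_lora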

flozi00 commented 9 months ago

Okay, then the filter I added is correct. Should I open a PR?

uukuguy commented 9 months ago

I'd be very glad if you did.

rbollampally commented 7 months ago

@flozi00 I'm getting the same error. Could you create a patch of your changes? @uukuguy Would this work with mixtral models as well?

uukuguy commented 7 months ago

@flozi00 I'm getting the same error. Could you create a patch of your changes? @uukuguy Would this work with mixtral models as well?

Would you please provide the script that encountered an error?

python3 -m multi_loras \
    extract_lora \
    --base_model_name_or_path "mistralai/Mistral-7B-v0.1" \
    --tuned_model_name_or_path "HuggingFaceH4/zephyr-7b-beta" \
    --save_path "./mistral-zephyr-lora" \
    --bf16 \
    --bits 4 \
    --lora_r 128

rbollampally commented 7 months ago

python -m multi_loras \
    extract_lora \
    --base_model_name_or_path /home/ubuntu/models/Mixtral/Mixtral-8x7B-Instruct-v0.1/ \
    --tuned_model_name_or_path /home/ubuntu/models/meetkai/functionary-medium-v2.2/ \
    --save_path /home/ubuntu/models/loras/ \
    --bf16 \
    --bits 8 \
    --lora_r 128

BTW, after posting this I managed to get the script running by changing the loop like this:

    for (name_base, lora_base), (name_tuned, lora_tune) in pbar:
        assert name_base == name_tuned, f"name_base={name_base} != name_tuned={name_tuned}"

        residual = lora_tune.weight.data - lora_base.weight.data
        pbar.set_postfix({"layer": name_base.replace("base_model.model.", ""), "shape": residual.shape})

        try:
            # Only PEFT-wrapped modules expose lora_A; plain nn.Linear layers
            # (the cause of the AttributeError above) raise here and are skipped.
            lora_base.lora_A
        except AttributeError:
            print(f"no lora_A here: {name_base}")
        else:
            # SVD on residual
            U, Vh = svd_distill(residual, rank=rank, clamp_quantile=clamp_quantile)
            assert lora_base.lora_A.default.weight.shape == Vh.shape, f"{lora_base=}"
            assert lora_base.lora_B.default.weight.shape == U.shape, f"{lora_base=}"
            lora_base.lora_A.default.weight.data = Vh.to(device=device, dtype=dtype)
            lora_base.lora_B.default.weight.data = U.to(device=device, dtype=dtype)
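
For context, svd_distill presumably performs a rank-r SVD distillation of the residual into the lora_B / lora_A shapes. A rough sketch of that idea (my own illustration, not the repo's implementation; the clamping convention in particular is an assumption):

import torch

def svd_distill_sketch(residual: torch.Tensor, rank: int, clamp_quantile: float = 0.99):
    # Factor the weight delta (out_features x in_features) into U @ Vh, with
    # U of shape (out_features, rank) matching lora_B.weight and
    # Vh of shape (rank, in_features) matching lora_A.weight.
    U, S, Vh = torch.linalg.svd(residual.float(), full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]
    U = U @ torch.diag(S)  # fold the singular values into the B factor
    # Clamp extreme values at the given quantile to limit outliers.
    hi = torch.quantile(torch.cat([U.flatten(), Vh.flatten()]).abs(), clamp_quantile)
    return U.clamp(-hi, hi), Vh.clamp(-hi, hi)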

The question about Mixtral still stands... I'm not able to test the LoRA just yet, as LoRAX is throwing an error for me. I will have to switch to another program and try.

There is also another minor issue when the package is installed with pip as stated in readme.md: multi_loras/__main__.py does from .merge_peft_adapters import do_merge_lora.

To make that import work, I had to modify multi_loras/merge_peft_adapters.py (untested):

.....
def get_args():
    import argparse
    parser = argparse.ArgumentParser()

    parser.add_argument("--base_model_name_or_path", type=str)
    # parser.add_argument("--peft_model_path", type=str)
    parser.add_argument("--lora_model_path", type=str)
    parser.add_argument("--merged_model_name_or_path", type=str, default=None)
    parser.add_argument("--push_to_hub", action="store_true", default=False)

    args = parser.parse_args()
    return args

def do_merge_lora(args):
    merge_peft_adapters(base_model_name_or_path=args.base_model_name_or_path, 
                        # peft_model_path=args.peft_model_path, 
                        peft_model_path=args.lora_model_path, 
                        merged_model_name_or_path=args.merged_model_name_or_path,
                        push_to_hub=args.push_to_hub
                        )

def main():
    args = get_args()
    do_merge_lora(args)

if __name__ == "__main__":
    main()
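
For reference, the merge step being wrapped here presumably follows the standard PEFT merge-and-save pattern; a hedged sketch (the function body is my own illustration under that assumption, not the repo's code, and the dtype choice is likewise assumed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

def merge_peft_adapters(base_model_name_or_path, peft_model_path,
                        merged_model_name_or_path=None, push_to_hub=False):
    # Load the base model, attach the LoRA adapter, and fold its deltas
    # into the base weights so the result is a plain HF checkpoint.
    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_name_or_path, torch_dtype=torch.bfloat16
    )
    model = PeftModel.from_pretrained(base_model, peft_model_path)
    model = model.merge_and_unload()

    tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path)
    save_path = merged_model_name_or_path or f"{peft_model_path}-merged"
    if push_to_hub:
        model.push_to_hub(save_path)
        tokenizer.push_to_hub(save_path)
    else:
        model.save_pretrained(save_path)
        tokenizer.save_pretrained(save_path)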