ymcui / Chinese-LLaMA-Alpaca-2

中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0
7.04k stars 581 forks

Manually merge issue: Two parameters should be deleted when using hfl/chinese-llama-2-7b LoRA file #408

Closed by hsyodyssey 9 months ago

hsyodyssey commented 9 months ago

Check before submitting issues

Type of Issue

Model conversion and merging

Base Model

Chinese-LLaMA-2 (7B/13B)

Operating System

Linux

Describe your issue in detail

According to the issue here, the parameters `"enable_lora": null` and `"merge_weights": false` in the adapter_config.json file have been removed from the latest huggingface/peft releases.

It would be better to delete them when doing a manual merge locally. Otherwise, you will hit the following errors:

- TypeError: LoraConfig.__init__() got an unexpected keyword argument 'enable_lora'
- TypeError: LoraConfig.__init__() got an unexpected keyword argument 'merge_weights'
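A minimal sketch of that cleanup, assuming the LoRA weights have been downloaded to a local directory (the directory name in the usage comment is a placeholder, not the actual model path):

```python
import json
from pathlib import Path

def clean_adapter_config(lora_dir: str) -> None:
    """Drop adapter_config.json fields that newer peft releases reject."""
    config_path = Path(lora_dir) / "adapter_config.json"
    config = json.loads(config_path.read_text())
    for stale_key in ("enable_lora", "merge_weights"):
        config.pop(stale_key, None)  # absent keys are ignored
    config_path.write_text(json.dumps(config, indent=2))

# Example: point this at your local copy of the LoRA files before merging.
# clean_adapter_config("chinese-llama-2-lora-7b")
```

Running this once before the merge script should avoid both `TypeError`s, since `LoraConfig.from_pretrained` will no longer see the stale keyword arguments.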

Dependencies (must be provided for code-related issues)


Execution logs or screenshots

One

================================================================================
Base model: Llama-2-7b-hf
LoRA model: ch
Loading ch
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/xx/workspace/ai/Chinese-LLaMA-Alpaca-2/scripts/merge_llama2_with_chinese_lora_low_mem.py", line 240, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xx/anaconda3/envs/lora-test/lib/python3.11/site-packages/peft/config.py", line 134, in from_pretrained
    config = config_cls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'enable_lora'

Two

================================================================================
Base model: Llama-2-7b-hf
LoRA model: ch
Loading ch
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/xx/workspace/ai/Chinese-LLaMA-Alpaca-2/scripts/merge_llama2_with_chinese_lora_low_mem.py", line 240, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xx/anaconda3/envs/lora-test/lib/python3.11/site-packages/peft/config.py", line 134, in from_pretrained
    config = config_cls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'merge_weights'
iMountTai commented 9 months ago

Please use the peft version pinned in our repo, as the scripts are tightly coupled to it.

hsyodyssey commented 9 months ago

Please use the peft version pinned in our repo, as the scripts are tightly coupled to it.

Yep, I just created a new env and ran `pip install -r requirements.txt` directly.

I think the cause is that requirements.txt only requires `peft>=0.3.0`, so the latest version gets downloaded:

Collecting peft>=0.3.0 (from -r requirements.txt (line 1))
  Downloading peft-0.6.2-py3-none-any.whl.metadata (23 kB)
Collecting torch==2.0.1 (from -r requirements.txt (line 2))
  Using cached torch-2.0.1-cp311-cp311-manylinux1_x86_64.whl (619.9 MB)
Collecting transformers==4.35.0 (from -r requirements.txt (line 3))
  Downloading transformers-4.35.0-py3-none-any.whl.metadata (123 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.1/123.1 kB 7.3 MB/s eta 0:00:00
Collecting sentencepiece==0.1.99 (from -r requirements.txt (line 4))
  Downloading sentencepiece-0.1.99-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 50.0 MB/s eta 0:00:00
Collecting bitsandbytes==0.41.1 (from -r requirements.txt (line 5))
iMountTai commented 9 months ago

You are right: some parameters from older peft versions are not compatible with newer releases. You can either delete the offending parameters or install an earlier version of peft.

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.