meta-llama / llama-recipes

Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A. Also supports a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Demo apps showcase Meta Llama 3 for WhatsApp & Messenger.

ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py) #508

Closed Tizzzzy closed 2 weeks ago

Tizzzzy commented 2 weeks ago

System Info

  1. python: 3.10.12
  2. nvcc:
    nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2024 NVIDIA Corporation
    Built on Thu_Mar_28_02:18:24_PDT_2024
    Cuda compilation tools, release 12.4, V12.4.131
    Build cuda_12.4.r12.4/compiler.34097967_0
  3. peft: 0.10.0

All other packages match requirements.txt.

Information

🐛 Describe the bug

I am new to llama-recipes, and I am trying to fine-tune Llama 3 on the Hugging Face dataset "openbookqa". I ran this command: python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing". However, I got this error:

(llama3) root@Dong:/mnt/c/Users/super/OneDrive/Desktop/research/llama-recipes# python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/llama_recipes/finetuning.py", line 11, in <module>
    from peft import get_peft_model, prepare_model_for_int8_training
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py)

I followed the README instructions: I git cloned the repo, ran pip install llama-recipes, and also ran pip install -r requirements.txt.

I did some research on this error, and some people said prepare_model_for_int8_training has been deprecated for quite some time and that, as of PEFT v0.10.0, prepare_model_for_kbit_training should be used instead.

However, if this is the case, I don't know which file I need to change.
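For reference, rather than editing the installed llama_recipes/finetuning.py by hand, one workaround is a small compatibility shim that resolves whichever helper the installed PEFT actually exports. This is only a sketch; import_prepare_fn is a hypothetical helper, not part of llama-recipes or PEFT:

```python
import importlib

def import_prepare_fn(module_name="peft"):
    """Return PEFT's model-preparation helper, preferring the current
    name and falling back to the pre-0.10 one for older releases."""
    mod = importlib.import_module(module_name)
    for name in ("prepare_model_for_kbit_training",
                 "prepare_model_for_int8_training"):
        if hasattr(mod, name):
            return getattr(mod, name)
    raise ImportError(f"no model-preparation helper found in {module_name!r}")
```

With this in place, `prepare = import_prepare_fn()` works against both old and new PEFT releases instead of hard-coding one import name.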

Error logs

(llama3) root@Dong:/mnt/c/Users/super/OneDrive/Desktop/research/llama-recipes# python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/llama_recipes/finetuning.py", line 11, in <module>
    from peft import get_peft_model, prepare_model_for_int8_training
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py)  

Expected behavior

I expect to be able to fine-tune Llama 3.

mreso commented 2 weeks ago

Hi, it seems like you're using an old llama-recipes version (PyPI releases are sadly lagging quite a bit behind), as we switched to prepare_model_for_kbit_training some time ago: https://github.com/meta-llama/llama-recipes/blob/fb7dd3a3270031e407338027e3f6fbea2b8e431e/src/llama_recipes/finetuning.py#L11

Please update llama-recipes from source by running:

git checkout main && git pull && pip install -U .

in the repo main directory.
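After reinstalling, a quick way to confirm the active environment now has a PEFT build that exports the new name is a check like the following. This is a sketch; has_kbit_helper is a hypothetical name, and the function deliberately returns False rather than raising when peft is absent:

```python
import importlib.util

def has_kbit_helper():
    """True if the active environment's PEFT exports
    prepare_model_for_kbit_training; False if peft is missing or too old."""
    if importlib.util.find_spec("peft") is None:
        return False  # peft is not installed at all
    import peft
    return hasattr(peft, "prepare_model_for_kbit_training")

if __name__ == "__main__":
    print("ok" if has_kbit_helper() else "peft missing or too old")
```

If this prints "peft missing or too old", the original ImportError will still occur and the install should be rechecked.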

mreso commented 2 weeks ago

Closing this issue; feel free to reopen if there are more questions. BTW, we just updated the PyPI package.