johnsmith0031 / alpaca_lora_4bit


ImportError: cannot import name '_get_submodules' from 'peft.utils' #114

Closed · saber258 closed this 1 year ago

saber258 commented 1 year ago

Hi, thanks for your great work!

I'm new to this area and have run into some problems; apologies if they turn out to be basic questions.

I used finetune.py to fine-tune on my data and now want to test with inference.py. I added the code to make sure my fine-tuned weights are loaded, just like in issue #48.

However, inference.py does not work:

```
CUDA SETUP: CUDA runtime path found: /root/conda/anaconda3/envs/yuki/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Traceback (most recent call last):
  File "/root/Model/alpaca_lora_4bit/inference.py", line 6, in <module>
    from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model
  File "/root/Model/alpaca_lora_4bit/monkeypatch/peft_tuners_lora_monkey_patch.py", line 8, in <module>
    from peft.utils import _get_submodules, PeftType
ImportError: cannot import name '_get_submodules' from 'peft.utils' (/root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/peft/utils/__init__.py)
```

I think it might be a peft version problem?

johnsmith0031 commented 1 year ago

Maybe it's because of a peft version mismatch. You can use this version:

```
git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2
```

And you can create a new virtual env so that it won't break other projects that depend on a newer peft.
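(For reference, a quick sanity check after installing peft from that commit in the fresh environment: the two names below are exactly what peft_tuners_lora_monkey_patch.py imports, per the traceback above.)

```python
# Run inside the new virtual env after:
#   pip install git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2
# If this import succeeds, the pinned peft still exposes the private helpers
# that the monkeypatch needs.
from peft.utils import _get_submodules, PeftType

print("peft internals OK:", PeftType.LORA)
```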

saber258 commented 1 year ago

> Maybe it's because of a peft version mismatch. You can use this version:
> git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2
> And you can create a new virtual env so that it won't break other projects that depend on a newer peft.

Thanks for your reply. That solved the problem, but there is another:

```
Target module Autograd4bitQuantLinear() is not supported. Currently, only torch.nn.Linear and Conv1D are supported.
```

I'm sure I have imported custom_monkey_patch in server.py. Is there any way around this?

johnsmith0031 commented 1 year ago

I think it would work as long as these lines are applied:

```python
from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model, Linear4bitLt
replace_peft_model_with_gptq_lora_model()
```
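(For reference, a sketch of the intended ordering: the patch must run before the model and LoRA weights are loaded, otherwise peft's stock LoraModel rejects Autograd4bitQuantLinear layers with the error above. The loader call is copied from the traceback later in this thread; the paths are placeholders, not real files.)

```python
# 1. Patch peft first, so its LoRA wrapper accepts Autograd4bitQuantLinear layers.
from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model
replace_peft_model_with_gptq_lora_model()

# 2. Only then load the 4-bit model and the LoRA weights.
from autograd_4bit import load_llama_model_4bit_low_ram

config_path = "./llama-13b-4bit/"   # placeholder paths (assumption)
model_path = "./llama-13b-4bit.pt"
lora_path = "./alpaca_lora/"
model, tokenizer = load_llama_model_4bit_low_ram(
    config_path, model_path, lora_path, groupsize=-1, is_v1_model=False
)
```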
saber258 commented 1 year ago

> I think it would work as long as these lines are applied:
>
> ```python
> from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model, Linear4bitLt
> replace_peft_model_with_gptq_lora_model()
> ```

Maybe I'm misunderstanding something, but I'm sure that custom_monkey_patch contains these two lines:

```python
from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model, Linear4bitLt
replace_peft_model_with_gptq_lora_model()
```

And I have imported custom_monkey_patch in server.py, but the problem still occurs.

johnsmith0031 commented 1 year ago

Can you provide more detailed information about the error? If the monkeypatch is applied, the error should come from peft_tuners_lora_monkey_patch.py line 162.
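(For reference, a minimal way to confirm the patch module really is being imported and applied: if the module path is misspelled anywhere, this import fails immediately. The names are the ones used elsewhere in this thread.)

```python
# Fails loudly if the "monkeypatch" package path is wrong, in which case
# replace_peft_model_with_gptq_lora_model() was never actually executed.
from monkeypatch.peft_tuners_lora_monkey_patch import (
    replace_peft_model_with_gptq_lora_model,
    Linear4bitLt,
)

replace_peft_model_with_gptq_lora_model()
print("GPTQ LoRA monkeypatch applied")
```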

saber258 commented 1 year ago

> Can you provide more detailed information about the error? If the monkeypatch is applied, the error should come from peft_tuners_lora_monkey_patch.py line 162.

I checked the code carefully and finally found a small spelling error in "monkeypatch" in the line `from monkeypatch.peft_tuners_lora_monkey_patch import replace_peft_model_with_gptq_lora_model, Linear4bitLt`. Sorry for the silly mistake.

But I've run into another problem:

```
ImportError: cannot import name 'Linear4bitLt' from 'peft.tuners.lora' (/root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/peft/tuners/lora.py)
```

I checked peft/tuners/lora.py and found only Linear8bitLt, not Linear4bitLt (I used the peft version you gave me: git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2). Maybe this version does not have Linear4bitLt?

johnsmith0031 commented 1 year ago

Please add more information from the stack trace so I can tell which code causes the problem.

saber258 commented 1 year ago

> Please add more information from the stack trace so I can tell which code causes the problem.

OK, is the following the stack information you mean?

```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin /root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
CUDA SETUP: CUDA runtime path found: /root/conda/anaconda3/envs/yuki/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Monkey Patch Completed.
Loading GPT4-x-Vicuna-13b-4bit/gpt4-x-vicuna-13b-GPTQ4bit-g128.pt ...
Loading Model ...
LOADING LORA
Traceback (most recent call last):
  File "/root/Model/text-generation-webui/server.py", line 1090, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/root/Model/text-generation-webui/custom_monkey_patch.py", line 21, in load_model_llama
    model, tokenizer = load_llama_model_4bit_low_ram(config_path, model_path, lora_path, groupsize=-1, is_v1_model=False)
  File "/root/Model/text-generation-webui/autograd_4bit.py", line 214, in load_llama_model_4bit_low_ram
    from peft.tuners.lora import Linear4bitLt
ImportError: cannot import name 'Linear4bitLt' from 'peft.tuners.lora' (/root/conda/anaconda3/envs/yuki/lib/python3.10/site-packages/peft/tuners/lora.py)
```

johnsmith0031 commented 1 year ago

Why do you have an autograd_4bit.py file under the text-generation-webui path? That version of autograd_4bit.py is out of date.
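(For reference: the pinned peft commit only ships Linear8bitLt; Linear4bitLt is provided by this repo's monkeypatch module, which is why a stale autograd_4bit.py that imports it from peft.tuners.lora fails. A sketch of the difference:)

```python
# Up-to-date code gets Linear4bitLt from this repo's monkeypatch module:
from monkeypatch.peft_tuners_lora_monkey_patch import Linear4bitLt  # OK

# The stale autograd_4bit.py tried to import it from stock peft, which only
# defines Linear8bitLt at the pinned commit:
# from peft.tuners.lora import Linear4bitLt  # raises ImportError
```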

saber258 commented 1 year ago

> Why do you have an autograd_4bit.py file under the text-generation-webui path? That version of autograd_4bit.py is out of date.

I use oobabooga's webui, so the autograd_4bit.py file sits under oobabooga's text-generation-webui. I replaced it with the new version of autograd_4bit.py and now everything works.

Thanks again for your help and patience! Thank you very much!