LAXnACE closed this issue 8 months ago.
llama.cpp can't make lora unless something has changed recently.
Having the same problem.
Not the exact same error message, but a very similar one at the same line; see the issue I just opened.
> llama.cpp can't make lora unless something has changed recently.

So what should I use?
> llama.cpp can't make lora unless something has changed recently.
>
> So what should I use?

Figure anything out?
> llama.cpp can't make lora unless something has changed recently.
>
> So what should I use?
Try switching to the llamacpp_HF model loader; it was also necessary to download oobabooga/llama-tokenizer (described here: #3499).
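In case it helps, the tokenizer can be fetched with the download script that ships with text-generation-webui (run from the webui folder; a sketch, your environment may differ):

```
python download-model.py oobabooga/llama-tokenizer
```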
Model used: TheBloke/Llama-2-7b-Chat-GGUF
From my console:
```
2023-09-21 11:05:45 WARNING:LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: LlamacppHF)
2023-09-21 11:05:50 INFO:Loading JSON datasets...
Map: 100%|██████████| 5/5 [00:00<00:00, 87.15 examples/s]
2023-09-21 11:05:51 INFO:Getting model ready...
2023-09-21 11:05:51 INFO:Preparing for training...
2023-09-21 11:05:51 INFO:Creating LoRA model...
```
But in the interface I got the following:
```
Traceback (most recent call last):
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/text-generation-webui/modules/training.py", line 505, in do_train
    lora_model = get_peft_model(shared.model, config)
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/mapping.py", line 106, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/peft_model.py", line 889, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/peft_model.py", line 111, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/tuners/lora.py", line 274, in __init__
    super().__init__(model, config, adapter_name)
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 88, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "/home/kallebysantos/projects/machine-learning/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 222, in inject_adapter
    raise ValueError(
ValueError: Target modules ['q_proj', 'v_proj'] not found in the base model. Please check the target modules and try again.
```
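If I understand the error correctly, PEFT resolves `target_modules` by name against `model.named_modules()`, and the llamacpp_HF wrapper exposes no `q_proj`/`v_proj` submodules, so adapter injection fails. A rough illustration of the same call on a genuine HF checkpoint, where the names do exist (the model ID is just an example, not what the webui does internally):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# The attention projections PEFT is looking for are present here:
print([n for n, _ in model.named_modules() if n.endswith(("q_proj", "v_proj"))][:4])

config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)  # succeeds: target modules are found
model.print_trainable_parameters()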
Any updates on this?
I'm trying to do the same, also with TheBloke/Llama-2-7b-Chat-GGUF.
With normal training I get TypeError: LlamaCppModel.encode() got an unexpected keyword argument 'truncation', regardless of the loader. I then read that someone had success using Training_PRO, but unfortunately that also throws an error: TypeError: LlamaCppModel.decode() got an unexpected keyword argument 'skip_special_tokens'. If I switch to the llamacpp_HF loader and download the llama-tokenizer (successfully), training fails with: AttributeError: 'NoneType' object has no attribute 'pad_token_id'.
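For reference, when an HF tokenizer loads but simply lacks a pad token, the common workaround is to reuse the EOS token; a minimal sketch (the NoneType error above suggests the tokenizer never loaded at all, in which case this won't help):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("oobabooga/llama-tokenizer")
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # reuse EOS as the padding token
```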
Anybody got any ideas?
Same errors here. I don't think it's possible to train a LoRA on a GGUF model.
Same here. I made it as far as the error in the UI... What other model formats should we try to train a LoRA on if this is so bugged?
You can train an HF-format model (bnb int8, int4, etc.) or a GPTQ one. Or train with llama.cpp itself.
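For the llama.cpp route, the repo has a finetune example; a sketch of its invocation, with flags taken from that example's README at the time (they may have changed, and the file names here are placeholders):

```
./finetune \
  --model-base llama-2-7b-chat.Q8_0.gguf \
  --train-data train.txt \
  --lora-out lora-adapter.bin \
  --threads 6 --adam-iter 30 --batch 4 --ctx 64
```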
At the moment, training LoRAs is only supported for certain model loaders. For the most recent information on this, see the official wiki of the text-generation-webui repo: https://github.com/oobabooga/text-generation-webui/wiki
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
Describe the bug
I'm trying to fine-tune Llama 2, but an error occurs when calling the encoder.
Is there an existing issue for this?
Reproduction
Try to fine-tune a Llama 2 model loaded via llama.cpp.
Screenshot
No response