Closed ioma8 closed 1 year ago
As far as I know, llama.cpp cannot be used for fine-tuning.
Hi, I think I may have found the answer to why this is not working over here:
I'm still not sure on how to go about getting around this error however.
Not sure what the answer is until llama.cpp updates to allow for training.
Please also take a look at my issue here at privateGPT
I got this error as well: `llama_tokenize_with_model: too many tokens`
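One hedged sketch of a workaround, assuming the error comes from feeding a text span larger than the model's context window into the tokenizer: pre-split the raw text into smaller chunks before tokenizing. The `rough_token_count` heuristic below (about 4 characters per token) and the 512-token limit are assumptions for illustration, not llama.cpp's actual tokenizer or limits.

```python
# Hypothetical sketch: greedily pack whole lines into chunks whose
# *estimated* token count stays under a limit, so no single call to the
# tokenizer sees an oversized span. Not llama.cpp's real tokenizer.

def rough_token_count(text: str) -> int:
    # Crude estimate: English text averages roughly 4 characters per token.
    return max(1, len(text) // 4)

def split_into_chunks(text: str, max_tokens: int = 512) -> list[str]:
    """Pack consecutive lines into chunks below max_tokens (estimated)."""
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for line in text.splitlines():
        line_tokens = rough_token_count(line)
        # Start a new chunk if adding this line would exceed the limit.
        if current and current_tokens + line_tokens > max_tokens:
            chunks.append("\n".join(current))
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += line_tokens
    if current:
        chunks.append("\n".join(current))
    return chunks
```

If the real tokenizer is available, replacing `rough_token_count` with an exact count per chunk would be safer, since the heuristic can undercount for code or non-English text.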
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
Describe the bug
I am trying to finetune Llama-2 with raw textfile data.
Is there an existing issue for this?
Reproduction
My llama file is this: llama-2-7b-chat.ggmlv3.q4_1.bin
Text generation works. I want to fine-tune it on custom raw textfile data. I go to the training tab, set all the parameters, load the dataset file, and press "Start LoRA Training". I get an error; the console shows the following log. Can Llama-2 be fine-tuned this way?
Screenshot
Logs
System Info