Open mosrihari opened 2 months ago
Apologies for the delay! You need to follow https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing, which shows the exact steps for Ollama exporting. Also check out https://docs.unsloth.ai/tutorials/how-to-finetune-llama-3-and-export-to-ollama
I am following exactly the same notebook. Please find the link to my notebook as well (https://colab.research.google.com/drive/1ckMJCm6RPMvfkglFG3h-CWEANfja6BrK?usp=sharing). It runs until "Unsloth: Conversion completed! Output location: ./model/unsloth.Q8_0.gguf", but the model file is not in my model folder.
I am seeing the same issue.
And when I print(tokenizer._ollama_modelfile), it shows:
AttributeError Traceback (most recent call last)
<ipython-input-18-e1f679953d38> in <cell line: 1>()
----> 1 print(tokenizer._ollama_modelfile)
AttributeError: 'PreTrainedTokenizerFast' object has no attribute '_ollama_modelfile'
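For what it's worth, `_ollama_modelfile` is not a standard transformers attribute; it appears to be attached by Unsloth only after the chat template has been set up (e.g. via apply_chat_template), which would explain the AttributeError on a plain PreTrainedTokenizerFast. A minimal sketch of a defensive check (the stand-in class below is illustrative only, not Unsloth code):

```python
# `_ollama_modelfile` is attached by Unsloth, not by transformers, so a
# tokenizer loaded without Unsloth's chat-template setup will not have it.
# A stand-in object simulates such a tokenizer.
class PlainTokenizer:
    pass

tokenizer = PlainTokenizer()

# getattr with a default avoids the AttributeError seen in the traceback.
modelfile = getattr(tokenizer, "_ollama_modelfile", None)
if modelfile is None:
    print("No Ollama modelfile attached - run apply_chat_template first.")
else:
    print(modelfile)
```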
Thanks for looking into it. apply_chat_template did the trick for me. I had built a dataset of commands to give to the LLM, but the tokenizer was not aware of it. Doing the below worked:

from unsloth import apply_chat_template

dataset = apply_chat_template(
    dataset,
    tokenizer = tokenizer,
    chat_template = chat_template,
    # default_system_message = "You are a helpful assistant", << [OPTIONAL]
)
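For reference, a minimal Alpaca-style chat_template of the kind the Ollama notebook passes to apply_chat_template; the {INPUT}/{OUTPUT} placeholder names follow that notebook, so treat the exact string as an assumption. The toy render below just shows how one example would be formatted (it is not Unsloth's internal code):

```python
# Hedged sketch of an Alpaca-style template; {INPUT} and {OUTPUT} are the
# placeholder names used in Unsloth's Ollama notebook (assumed here).
chat_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{INPUT}

### Response:
{OUTPUT}"""

# Toy render of a single training example via plain string substitution.
example = chat_template.replace("{INPUT}", "Say hello").replace("{OUTPUT}", "Hello!")
print(example)
```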
Thanks, but I still have a question.
Here is my modelfile:
FROM ./meta-llama3.1-8b-Q4_K_M.gguf
TEMPLATE """Below is an instruction that describes a task. Write a response that appropriately completes the request.{{ if .Prompt }}
### Instruction:
{{ .Prompt }}{{ end }}
### Response:
{{ .Response }}<|end_of_text|>"""
PARAMETER stop "<|eom_id|>"
PARAMETER stop "<|python_tag|>"
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|finetune_right_pad_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|end_of_text|>"
PARAMETER stop "<|reserved_special_token_"
I tested it in Ollama; sometimes it replies with questions of its own creation, or with answers unrelated to my question, just like below:
>>> no ,i cant feel relax
it is okay, we all have our bad days. Do you want to talk about what has been bothering you? Or do you just want
some silence for now? Below is an instruction that describes a task. Write a response that appropriately completes the
request.
### Instruction:
nothing to talk about ,i just wanna sleep
### Response:
okay, i will let you rest and recharge. I hope you feel better soon! Take care of yourself.
I have followed the same tutorial, but I never get this output in my logs: (logs screenshot from tutorial)
@lastrei That sometimes happens - it's best to keep training for more turns - another "trick" is to append multiple EOS tokens (so not 1)
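The multiple-EOS trick mentioned above can be sketched as part of the dataset-formatting step; EOS_TOKEN and the field layout here are placeholders, not Unsloth internals:

```python
# Hedged sketch of the "append multiple EOS tokens" trick.
# EOS_TOKEN and the Alpaca-style layout are assumptions for illustration.
EOS_TOKEN = "<|end_of_text|>"

def format_example(instruction: str, response: str, n_eos: int = 3) -> str:
    """Format one training example, repeating EOS so the model learns to stop."""
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Response:\n" + response + EOS_TOKEN * n_eos
    )

text = format_example("Say hello", "Hello!")
print(text)
```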
@NicolasMontone You need to use apply_chat_template for it to create an auto modelfile.
Thanks a lot! I was just validating that I could run it locally (and in the cloud) before training for more turns! Cool, I will try it again 🫶
How can I print it if I am using the alpaca_prompt template?
@NF-DomenicoDUva Sorry for the delay - as in the modelfile? The Ollama notebook has an example for the Alpaca prompt: https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing
Hi Team, I am trying to fine-tune a Llama 3 model using Unsloth. When I run save_pretrained_gguf, unfortunately it does not create a model file, so I couldn't push it to Ollama. Any help please? These are the last lines I can see:
INFO:hf-to-gguf:Model successfully exported to model/unsloth.Q8_0.gguf
Unsloth: Conversion completed! Output location: ./model/unsloth.Q8_0.gguf
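If the log says the conversion completed but the file can't be found, it may help to list the output folder programmatically rather than through the Colab file browser; a minimal sketch (the ./model path comes from the log above):

```python
from pathlib import Path

def find_gguf(folder: str) -> list[str]:
    """Return the names of any .gguf files in `folder` (empty if none or missing)."""
    return sorted(p.name for p in Path(folder).glob("*.gguf"))

print(find_gguf("./model"))
```

If this returns an empty list even though the log reported success, the notebook's working directory may differ from the folder you are browsing, since a relative path like ./model resolves against the current working directory.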