unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Fine tuned Llama3.1 does not support tools #1239

Open darkroasted opened 2 weeks ago

darkroasted commented 2 weeks ago

This might be a silly question, but when running the Llama3.1 base model in Ollama I can pass in tools without any issues:

    import ollama

    # `messages` and `tools` are defined elsewhere in my script
    response = ollama.chat(
        model='llama3.1',
        messages=messages,
        tools=tools
    )

However, after fine-tuning the unsloth/Meta-Llama-3.1-8B model, exporting it to Ollama, and plugging it into my code, I can no longer use tools. I get the following error: `fine-tuned-llama3.1 does not support tools`.

Thanks in advance for reading; please let me know if you have any idea how to solve this issue.

shimmyshimmer commented 2 weeks ago

It's related to the chat template. Technically we do support it, but you will need to manually edit the chat template to make it work.
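
For reference, here is a minimal, untested sketch of one way to do that edit. Ollama decides whether a model supports tools from its chat template, so you can copy the template from the base llama3.1 model (which includes the tool-handling sections) into a Modelfile that wraps your fine-tuned model, then re-create it under a new tag. The model names and the `Modelfile.tools` filename below are placeholders; adjust them to your setup.

    import subprocess
    import ollama

    BASE_MODEL = "llama3.1"                      # placeholder: base model with tool support
    FINETUNED_MODEL = "fine-tuned-llama3.1"      # placeholder: your exported fine-tune
    PATCHED_MODEL = "fine-tuned-llama3.1-tools"  # placeholder: new tag to create

    # The base model's template contains the tool-handling sections Ollama looks for.
    # Depending on your ollama client version this may be `.template` instead of ["template"].
    base_template = ollama.show(BASE_MODEL)["template"]

    # Layer the base model's template on top of the fine-tuned weights.
    modelfile = f'FROM {FINETUNED_MODEL}\nTEMPLATE """{base_template}"""\n'
    with open("Modelfile.tools", "w") as f:
        f.write(modelfile)

    # Re-create the model under the new tag.
    subprocess.run(["ollama", "create", PATCHED_MODEL, "-f", "Modelfile.tools"], check=True)

After that, `ollama.chat(model=PATCHED_MODEL, messages=messages, tools=tools)` should accept the `tools` argument, since the new tag carries a template with tool support.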