Abraxas-365 / langchain-rust

🦜️🔗LangChain for Rust, the easiest way to write LLM-based programs in Rust

Add support for OllamaFunctions chat model from the official langchain library #149

Open prabirshrestha opened 4 months ago

prabirshrestha commented 4 months ago

Ollama currently doesn't support OpenAI-compatible function calling, but there are models such as Hermes 2 Pro that do support it - https://ollama.com/adrienbrault/nous-hermes2pro.

LangChain Python has OllamaFunctions (src), and LangChain JavaScript has an equivalent OllamaFunctions (src). We should have something similar to unblock Ollama users who want function calling before Ollama supports it officially.
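The core of the Python/JS implementations is just prompt plumbing: advertise the tool schemas in a system prompt and parse the JSON the model replies with. A rough, untested sketch of what that could look like here (all names hypothetical; assumes serde with the derive feature plus serde_json):

use serde::Deserialize;
use serde_json::{json, Value};

// Hypothetical tool definition, mirroring the OpenAI function schema.
struct Tool {
    name: String,
    description: String,
    parameters: Value, // JSON Schema for the arguments
}

// The reply shape the Python OllamaFunctions prompt asks the model for.
#[derive(Debug, Deserialize)]
struct FunctionCall {
    tool: String,
    tool_input: Value,
}

// Advertise the tools in a system prompt, as the Python/JS versions do.
fn tool_system_prompt(tools: &[Tool]) -> String {
    let specs: Vec<Value> = tools
        .iter()
        .map(|t| json!({
            "name": t.name,
            "description": t.description,
            "parameters": t.parameters,
        }))
        .collect();
    format!(
        "You have access to the following tools:\n{}\nTo use a tool, respond with a JSON object of the form {{\"tool\": <name>, \"tool_input\": <args>}}.",
        serde_json::to_string_pretty(&specs).unwrap()
    )
}

// Try to parse the model's raw reply as a function call.
fn parse_call(reply: &str) -> Option<FunctionCall> {
    serde_json::from_str(reply.trim()).ok()
}

fn main() {
    let tools = vec![Tool {
        name: "get_weather".into(),
        description: "Get the current weather for a city".into(),
        parameters: json!({"type": "object", "properties": {"city": {"type": "string"}}}),
    }];
    println!("{}", tool_system_prompt(&tools));
    let reply = r#"{"tool": "get_weather", "tool_input": {"city": "Toronto"}}"#;
    println!("{:?}", parse_call(reply));
}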

erhant commented 4 months ago

Perhaps this and https://github.com/Abraxas-365/langchain-rust/issues/148 can be handled together.

prabirshrestha commented 4 months ago

The first step will be to get ollama-rs integrated; function calling can then follow.

It seems ollama-rs does plan to support function calling natively: https://github.com/pepperoni21/ollama-rs/issues/50#issuecomment-2117950797. This would let us use theirs directly instead of creating our own wrapper like LangChain's.

erhant commented 4 months ago

Yep, @andthattoo and I actually work at the same place; we thought it's better to add the necessary functionality to ollama-rs first and then access it from here. Two birds, one stone.

With #148 we can simply have the ollama-rs integration and basically wrap around the functionality there, as you said. Once we are done with the ollama-rs PR, we will come back to integrating it here!

prabirshrestha commented 4 months ago

Saw these tweets on how to use function calling in Ollama via raw mode.

https://x.com/ollama/status/1793392887612260370
https://x.com/Dev__Digest/status/1793419875685367919

Mistral 0.3 with function calling - https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
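Untested, but based on those tweets the raw-mode request should look roughly like this ([AVAILABLE_TOOLS]/[INST] are Mistral v0.3's prompt tags; the exact model tag may differ):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct-v0.3-q4_0",
  "raw": true,
  "stream": false,
  "prompt": "[AVAILABLE_TOOLS] [{\"type\": \"function\", \"function\": {\"name\": \"get_weather\", \"description\": \"Get the current weather for a city\", \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}}}}][/AVAILABLE_TOOLS][INST] What is the weather in Toronto? [/INST]"
}'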

erhant commented 4 months ago

Huh, that's cool. I wonder, though: instead of providing the [AVAILABLE_TOOLS] in a raw prompt, can it be given as a system prompt, e.g. within a Modelfile? Would those two be equivalent?

cc. @andthattoo

andthattoo commented 4 months ago

It's entirely down to the way they trained Mistral-7B-v0.3: with just the [AVAILABLE_TOOLS] and [TOOL_CALLS] tags, it works with the OpenAI tool format out of the box. Which is cool - I may add a default function-calling pipeline for such models, but pipelines like NousHermes and Gorilla need specific prompts and tool formats.
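For reference, a tool call in that format comes back roughly as:

[TOOL_CALLS] [{"name": "get_weather", "arguments": {"city": "Toronto"}}]

which is why it maps onto the OpenAI tool schema so directly.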

prabirshrestha commented 4 months ago

Haven't tried it yet, but saw this new update for LocalAI - https://github.com/mudler/LocalAI/releases/tag/v2.16.0 - which seems to support function calling, though they did fine-tune Llama 3 for it: https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2

prabirshrestha commented 4 months ago

I was able to use LocalAI to perform function calling with curl. No custom system prompts needed.

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "LocalAI-llama3-8b-function-call-v0.2",
  "messages": [{"role": "user", "content": "create a birthday for John on 05/20/2024"}],
  "temperature": 0.1,
  "grammar_json_functions": {
    "oneOf": [
      {
        "type": "object",
        "properties": {
          "function": {"const": "create_event"},
          "arguments": {
            "type": "object",
            "properties": {
              "title": {"type": "string"},
              "date": {"type": "string"},
              "time": {"type": "string"}
            }
          }
        }
      },
      {
        "type": "object",
        "properties": {
          "function": {"const": "search"},
          "arguments": {
            "type": "object",
            "properties": {
              "query": {"type": "string"}
            }
          }
        }
      }
    ]
  }
}'

Response:

{
  "created": 1717109075,
  "object": "chat.completion",
  "id": "fe683da1-2fdc-4fed-ab74-bd93183fb5cb",
  "model": "LocalAI-llama3-8b-function-call-v0.2",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "{ \"arguments\": {\"date\": \"05/20/2024\", \"time\": \"12:00:00\", \"title\": \"John's Birthday\"} , \"function\": \"create_event\"}"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 43,
    "total_tokens": 68
  }
}
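Note that the call comes back as a JSON string inside message.content, so the caller still has to parse it out. A minimal sketch of that step (assuming serde_json):

use serde_json::Value;

// Pull the function name and arguments out of the JSON string
// that LocalAI returns inside message.content above.
fn parse_local_ai_call(content: &str) -> Option<(String, Value)> {
    let v: Value = serde_json::from_str(content).ok()?;
    let name = v.get("function")?.as_str()?.to_string();
    let args = v.get("arguments")?.clone();
    Some((name, args))
}

fn main() {
    let content = r#"{ "arguments": {"date": "05/20/2024", "time": "12:00:00", "title": "John's Birthday"}, "function": "create_event"}"#;
    if let Some((name, args)) = parse_local_ai_call(content) {
        println!("call {name} with args {args}");
    }
}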

andthattoo commented 4 months ago

When it comes to function calling with local models, having options is essential; that was the primary reason I implemented this feature in ollama-rs. My tests also showed that the phi3:14b-medium-128k-instruct-q4_1 model performs well at function calling and has a 128k context window. Both nous-hermes2theta-llama3-8b and nous-hermes2pro work better with their custom prompts; performance varies from case to case.

langchain-rust could use this straightforward approach for direct function-calling capabilities anyway. Also, llama.cpp (e.g. installed via brew) can more or less replace Ollama with less overhead, and it has function calling. It's a viable option.

CypherpunkSamurai commented 4 months ago

I needed function calling to work yesterday, so I created a fork of Ollama; I seem to have gotten it working as of now.

Should I create a PR for this?

Changelog

Results

(screenshot of the function-calling output omitted)

Code

Shell Command

$ ollama show --functiontmpl nous-hermes-llama3

You have access to the following functions:

<tools>
{{ . | tojsoni "" "  " }}
</tools>

When the user asks you a question, if you need to use functions, provide ONLY the function calls, and NOTHING ELSE, in the format:
<function_calls>
[
    { "name": "function_name_1", "params": { "param_1": "value_1", "param_2": "value_2" }, "output": "The output variable name, to be possibly used as input for another function},
    { "name": "function_name_2", "params": { "param_3": "value_3", "param_4": "output_1"}, "output": "The output variable name, to be possibly used as input for another function"},
    ...
]
</function_calls>

Nous-Hermes-2-Pro-LLAMA3.Modelfile

FROM hermes-2-pro-llama-3.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant"""

FUNCTIONTMPL """
You have access to the following functions:

<tools>
{{/* a template function to convert interface to indented json */}}
{{ . | tojsoni "" "  " }}
</tools>

When the user asks you a question, if you need to use functions, provide ONLY the function calls, and NOTHING ELSE, in the format:
<function_calls>
[
    { "name": "function_name_1", "params": { "param_1": "value_1", "param_2": "value_2" }, "output": "The output variable name, to be possibly used as input for another function},
    { "name": "function_name_2", "params": { "param_3": "value_3", "param_4": "output_1"}, "output": "The output variable name, to be possibly used as input for another function"},
    ...
]
</function_calls>
"""

SYSTEM "You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

prabirshrestha commented 3 months ago

@erhant @andthattoo Do you plan to send PRs for function calling support now that ollama-rs supports it?

andthattoo commented 2 months ago

@prabirshrestha I got pulled into a few other things; I might do it in the coming weeks if no one gets to it before me. Ayo @erhant?

prabirshrestha commented 2 months ago

Sounds good to me.

By the way, support for tools has landed in Ollama: https://github.com/ollama/ollama/pull/5284
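With that merged, /api/chat accepts OpenAI-style tools directly; roughly like this (sketch based on the Ollama docs; model tag may vary):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'

The reply then carries the call in message.tool_calls rather than in plain content.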

CypherpunkSamurai commented 2 months ago

> @prabirshrestha I got pulled into a few other things; I might do it in the coming weeks if no one gets to it before me. Ayo @erhant?

I might give it a try. Can you link me to the PR and the required resources? :)

andthattoo commented 2 months ago

@CypherpunkSamurai These are the PRs: PR1, PR2. There are also some examples in the test folder.