tmc / langchaingo

LangChain for Go, the easiest way to write LLM-based programs in Go
https://tmc.github.io/langchaingo/
MIT License

LangChain.js formally supports Ollama functions. Can you support that too? #965

Open atljoseph opened 1 month ago

atljoseph commented 1 month ago

Can you support Ollama function calling in the same way LangChain.js does? Thanks.

Link to relevant docs: https://js.langchain.com/v0.2/docs/integrations/chat/ollama#tools

tmc commented 1 month ago

Yes, this is very much a goal. I've been a little low on OSS maintenance time, but I'd also love to see this land soon.

jonathanhecl commented 1 month ago

This isn't working :(

I'm testing with...

var tool_weather = llms.Tool{
    Type: "function",
    Function: &llms.FunctionDefinition{
        Name:        "get_current_weather",
        Description: "Get the current weather for a city",
        // JSON schema passed as a raw message.
        Parameters: json.RawMessage(`{
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The name of the city"}
            },
            "required": ["city"]
        }`),
    },
}

r, err := llms.GenerateFromSinglePrompt(ctx, model, prompt, llms.WithTools([]llms.Tool{tool_weather}))

and the same thing with Parameters built as a map:

var tool_weather = llms.Tool{
    Type: "function",
    Function: &llms.FunctionDefinition{
        Name:        "get_current_weather",
        Description: "Get the current weather for a city",
        // Same schema expressed as a map.
        Parameters: map[string]any{
            "type": "object",
            "properties": map[string]any{
                "city": map[string]any{
                    "type":        "string",
                    "description": "The name of the city",
                },
            },
            "required": []string{"city"},
        },
    },
}

r, err := llms.GenerateFromSinglePrompt(ctx, model, prompt, llms.WithTools([]llms.Tool{tool_weather}))
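
A side note on the two variants above: as far as I can tell, FunctionDefinition.Parameters is declared as any in langchaingo, so the raw JSON schema and the equivalent map should marshal to the same object in the request body. A quick self-contained check of that claim (plain encoding/json, no langchaingo involved):

package main

import (
    "encoding/json"
    "fmt"
    "reflect"
)

func main() {
    raw := json.RawMessage(`{
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "The name of the city"}
        },
        "required": ["city"]
    }`)
    m := map[string]any{
        "type": "object",
        "properties": map[string]any{
            "city": map[string]any{"type": "string", "description": "The name of the city"},
        },
        "required": []string{"city"},
    }

    // Round-trip both forms through encoding/json and compare the decoded values.
    var a, b map[string]any
    _ = json.Unmarshal(raw, &a)
    mb, _ := json.Marshal(m)
    _ = json.Unmarshal(mb, &b)
    fmt.Println(reflect.DeepEqual(a, b)) // true: the two forms are equivalent
}

So whichever form fails should fail the same way; the difference between the two snippets shouldn't matter.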
atljoseph commented 1 month ago

@jonathanhecl I'm not sure this is the right thread for your comment. This issue is about supporting function calling with Ollama models in this library, but your code doesn't reference a model, so there's no way to tell whether you're using ChatGPT or something else (this thread is explicitly about Ollama). It also just says "this isn't working" with no error output and minimal context. I'd suggest opening a separate bug report with the errors and your observations; as posted, it's about as informative as saying "the sky is blue". Unrelated comments also make the thread harder to follow for others.

jonathanhecl commented 1 month ago

I'm using Ollama (0.3.0), but the tools are not working.

ctx := context.Background()

model, err := ollama.New(ollama.WithModel("llama3.1")) // ollama.New takes variadic options
if err != nil {
    log.Fatal(err)
}

prompt := "what is the weather in london?"

var tool_weather = llms.Tool{
    Type: "function",
    Function: &llms.FunctionDefinition{
        Name:        "get_current_weather",
        Description: "Get the current weather for a city",
        Parameters: map[string]any{
            "type": "object",
            "properties": map[string]any{
                "city": map[string]any{
                    "type":        "string",
                    "description": "The name of the city",
                },
            },
            "required": []string{"city"},
        },
    },
}

r, err := llms.GenerateFromSinglePrompt(ctx, model, prompt, llms.WithTools([]llms.Tool{tool_weather}))

Response:

However, I'm a large language model, I don't have real-time access to current weather conditions. But I can suggest ways for you to find out the weather in London.

You can try:

1. **Check online weather websites**: Websites like AccuWeather, Weather.com, or BBC Weather provide up-to-date and accurate information about the weather in different locations.
2. **Use a mobile app**: There are many weather apps available on both iOS and Android that you can download to get current weather conditions.
3. **Tune into local news**: Watch a UK-based news channel or listen to a London-focused radio station for the latest weather forecast.

Please try one of these options, and I hope you find out what the weather is like in London!
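
One way to narrow this down is to take langchaingo out of the loop and POST the tool schema straight to Ollama's /api/chat endpoint (server-side tool support landed in Ollama 0.3.0). If the raw response contains a tool_calls field, the model and server are fine and the gap is in how the Go client builds the request. A minimal sketch, assuming a local Ollama on the default port:

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    body := []byte(`{
        "model": "llama3.1",
        "stream": false,
        "messages": [{"role": "user", "content": "what is the weather in london?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string", "description": "The name of the city"}},
                    "required": ["city"]
                }
            }
        }]
    }`)

    resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out)) // look for "tool_calls" inside "message"
}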
jonathanhecl commented 1 month ago

I'm trying too, using GenerateContent, like this:

ctx := context.Background()

model, err := ollama.New(ollama.WithModel("llama3.1"))
if err != nil {
    log.Fatal(err)
}

prompt := "what is the weather in london?"

var tool_weather = llms.Tool{
    Type: "function",
    Function: &llms.FunctionDefinition{
        Name:        "get_current_weather",
        Description: "Get the current weather for a city",
        Parameters: map[string]any{
            "type": "object",
            "properties": map[string]any{
                "city": map[string]any{
                    "type":        "string",
                    "description": "The name of the city",
                },
            },
            "required": []string{"city"},
        },
    },
}

r, err := model.GenerateContent(ctx, []llms.MessageContent{llms.TextParts(llms.ChatMessageTypeHuman, prompt)}, llms.WithTools([]llms.Tool{tool_weather}))
if err != nil {
    log.Fatal(err)
}

for i := range r.Choices {
    fmt.Println(r.Choices[i].Content)
    fmt.Println("FuncCall: ", r.Choices[i].FuncCall)
    fmt.Println("GenerationInfo: ", r.Choices[i].GenerationInfo)
    fmt.Println("StopReason: ", r.Choices[i].StopReason)
    fmt.Println("ToolCalls: ", r.Choices[i].ToolCalls)
}

Return:

I'm a large language model, I don't have real-time access..... bla bla
FuncCall:  <nil>
GenerationInfo:  map[CompletionTokens:154 PromptTokens:22 TotalTokens:176]
StopReason:
ToolCalls:  []
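
For reference, once ToolCalls does come back non-empty, the follow-up step would look roughly like this. This is a sketch against langchaingo's llms types as I read them; lookupWeather is a hypothetical helper, and a fuller version would also echo the assistant's tool-call message into the history before the tool result:

// Continues the snippet above; also needs "encoding/json".
for _, tc := range r.Choices[0].ToolCalls {
    if tc.FunctionCall == nil || tc.FunctionCall.Name != "get_current_weather" {
        continue
    }
    var args struct {
        City string `json:"city"`
    }
    if err := json.Unmarshal([]byte(tc.FunctionCall.Arguments), &args); err != nil {
        log.Fatal(err)
    }
    result := lookupWeather(args.City) // hypothetical helper, e.g. returns `{"temp_c": 21}`

    // Feed the tool result back so the model can write the final answer.
    history := []llms.MessageContent{
        llms.TextParts(llms.ChatMessageTypeHuman, prompt),
        {
            Role: llms.ChatMessageTypeTool,
            Parts: []llms.ContentPart{llms.ToolCallResponse{
                ToolCallID: tc.ID,
                Name:       tc.FunctionCall.Name,
                Content:    result,
            }},
        },
    }
    final, err := model.GenerateContent(ctx, history)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(final.Choices[0].Content)
}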
atljoseph commented 1 month ago

Is it running locally, or are you using a hosted service? Are you using JSON mode? Even so, Ollama as a server doesn't support tool calling out of the box the way OpenAI does. Try the same thing against one of Groq's "tool use" Llama models; you might get better results. It comes down to (1) how the model's output gets parsed (largely up to the serving layer; Ollama's JSON mode can help here, but it's no magic bullet), (2) how well the model itself was trained to handle functions, and (3) whether the system supports multiple concurrent tool calls or only one tool at a time.

So it's a little more nuanced than "works" and "doesn't work".
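
To illustrate point (1): when the serving layer has no native tool support, the usual fallback is to describe the tool in the prompt and parse structured output yourself. A rough sketch of that pattern follows; this isn't a langchaingo feature, just plain prompting, and the JSON shape is made up for illustration:

// Prompt-engineered tool calling: no server-side tool support required.
sys := `You can call one tool: get_current_weather(city string).
If the user asks about the weather, reply with ONLY this JSON:
{"tool": "get_current_weather", "args": {"city": "..."}}
Otherwise, answer normally.`

resp, err := model.GenerateContent(ctx, []llms.MessageContent{
    llms.TextParts(llms.ChatMessageTypeSystem, sys),
    llms.TextParts(llms.ChatMessageTypeHuman, "what is the weather in london?"),
})
if err != nil {
    log.Fatal(err)
}

var call struct {
    Tool string            `json:"tool"`
    Args map[string]string `json:"args"`
}
if err := json.Unmarshal([]byte(resp.Choices[0].Content), &call); err == nil && call.Tool != "" {
    fmt.Println("model wants:", call.Tool, call.Args) // dispatch to the matching Go function here
}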

atljoseph commented 1 month ago

The "hype cycle" says "Ollama supports tool calling", but right now that's mostly hype. Paid providers have tool use all over the place; Ollama doesn't yet. Ollama is free, so it will keep lagging for a while longer.
