tmc / langchaingo

LangChain for Go, the easiest way to write LLM-based programs in Go
https://tmc.github.io/langchaingo/

Max/min length Call options not working #862

Open guidoveritone opened 1 month ago

guidoveritone commented 1 month ago

Model: llama3 LangchaingoVersion: v0.1.10

I was trying to use llms.WithMaxLength and llms.WithMinLength to set some output limits, but it seems like the model doesn't respect these options.

callOptions = append(callOptions, llms.WithMaxLength(50), llms.WithMinLength(10))

Then I run the model as follows:

contentResponse, err = o.Llm.GenerateContent(ctx, content, callOptions...)
..
for _, choice := range contentResponse.Choices {
    output += choice.Content
    errors += choice.StopReason
}
..

But I get responses longer than the limit I set; I don't know if this is a bug or if I'm using the wrong option.
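For reference, a self-contained version of the repro might look like this (a minimal sketch; it assumes the ollama backend from github.com/tmc/langchaingo/llms/ollama, since the snippet above doesn't show how o.Llm is constructed):

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/ollama"
)

func main() {
    // Assumes a local ollama server with the llama3 model pulled.
    llm, err := ollama.New(ollama.WithModel("llama3"))
    if err != nil {
        log.Fatal(err)
    }

    content := []llms.MessageContent{
        llms.TextParts(llms.ChatMessageTypeHuman, "Man is naturally evil or is corrupted by society?"),
    }

    // The options that appear to be ignored.
    resp, err := llm.GenerateContent(context.Background(), content,
        llms.WithMaxLength(50), llms.WithMinLength(10))
    if err != nil {
        log.Fatal(err)
    }
    for _, choice := range resp.Choices {
        // Prints well over 50 despite WithMaxLength(50).
        fmt.Println(len(choice.Content), choice.Content)
    }
}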

I also noticed that WithMaxTokens(2) does take effect, but it feels like the model is just cutting off its response mid-sentence, since I asked:

prompt: "Man is naturally evil or is corrupted by society?"

And the model gave me:

output: "A Classic"

The problem is, if I increase the MaxTokens value, I get:

output: "A classic debate!\n\nThe idea that man is corrupted by society, also known as the \"social corruption\" or \"societal influence\" theory, suggests that human nature is inherently good and that societal factors, such as culture, norms, and institutions"

devalexandre commented 6 days ago

> I was trying to use llms.WithMaxLength and llms.WithMinLength to set some output limits, but it seems like the model doesn't respect these options.

Ollama doesn't support WithMaxLength, only WithMaxTokens.
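For anyone hitting this, a sketch of the supported alternative: limit output by tokens with WithMaxTokens (which, assuming the current implementation, ollama maps to its num_predict option) and enforce any hard character cap client-side (maxChars below is a hypothetical parameter, not a library option):

out, err := llms.GenerateFromSinglePrompt(ctx, llm,
    "Man is naturally evil or is corrupted by society?",
    llms.WithMaxTokens(100), // token budget, not characters
)
if err != nil {
    log.Fatal(err)
}
// Client-side character cap, since WithMaxLength is ignored by ollama.
const maxChars = 200
if r := []rune(out); len(r) > maxChars {
    out = string(r[:maxChars])
}
fmt.Println(out)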