tmc / langchaingo

LangChain for Go, the easiest way to write LLM-based programs in Go
https://tmc.github.io/langchaingo/
MIT License
3.72k stars 517 forks

[Bug] Response Format of json is not working in openai #931

Closed gowthamkishore3799 closed 2 days ago

gowthamkishore3799 commented 3 days ago

While using openai.WithResponseFormat with the response format set to JSON, I could see that req.ResponseFormat was nil because of improper handling in the code.

Upon inspecting the code, I noticed that there is a check against a key that is never set for the JSON format at the following line: https://github.com/tmc/langchaingo/blob/7eb662b22a5919b8e9daaa4c20100f26b9096b69/llms/openai/openaillm.go#L114
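The failure mode can be illustrated with a self-contained sketch of the functional-options pattern (the names below are hypothetical, not langchaingo's actual internals): a client-level default that the per-call request builder never consults is silently dropped, leaving the request field nil.

```go
package main

import "fmt"

// ResponseFormat mirrors an OpenAI-style response format setting.
type ResponseFormat struct{ Type string }

// Client holds defaults set at construction time.
type Client struct {
	responseFormat *ResponseFormat
}

// Option is a functional option for constructing a Client.
type Option func(*Client)

// WithResponseFormat sets a client-level default response format.
func WithResponseFormat(rf *ResponseFormat) Option {
	return func(c *Client) { c.responseFormat = rf }
}

// New applies each option to a fresh Client.
func New(opts ...Option) *Client {
	c := &Client{}
	for _, o := range opts {
		o(c)
	}
	return c
}

// request is what would actually go over the wire.
type request struct {
	ResponseFormat *ResponseFormat
}

// buggyBuild only consults the per-call flag, so the client-level
// default is never copied onto the request.
func (c *Client) buggyBuild(jsonMode bool) request {
	var req request
	if jsonMode {
		req.ResponseFormat = &ResponseFormat{Type: "json_object"}
	}
	return req
}

// fixedBuild falls back to the client-level default when the
// per-call flag didn't set anything.
func (c *Client) fixedBuild(jsonMode bool) request {
	req := c.buggyBuild(jsonMode)
	if req.ResponseFormat == nil {
		req.ResponseFormat = c.responseFormat
	}
	return req
}

func main() {
	c := New(WithResponseFormat(&ResponseFormat{Type: "json_object"}))
	fmt.Println(c.buggyBuild(false).ResponseFormat == nil) // true: default dropped
	fmt.Println(c.fixedBuild(false).ResponseFormat.Type)   // json_object
}
```

The sketch shows why the symptom is a nil req.ResponseFormat rather than an error: nothing fails, the constructor option simply never reaches the request.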

    ctx := context.Background()
    responseFormat := &openai.ResponseFormat{Type: "json_object"}
    client, err := openai.New(openai.WithModel("gpt-3.5-turbo"), openai.WithResponseFormat(responseFormat))
    if err != nil {
        log.Fatal(err)
    }

    content := []llms.MessageContent{
        llms.TextParts(llms.ChatMessageTypeSystem, "You are a company branding design wizard."),
        llms.TextParts(llms.ChatMessageTypeHuman, "What would be a good company name for a company that makes colorful socks?"),
    }

    resp, err := client.GenerateContent(ctx, content)
    if err != nil {
        return err
    }

    for _, choice := range resp.Choices {
        fmt.Println(choice.Content)
    }

    return nil

Whereas it's working as expected via the llms package:

    client, err := openai.New(openai.WithModel("gpt-3.5-turbo"), openai.WithResponseFormat(responseFormat))
    if err != nil {
        log.Fatal(err)
    }

    completion, err := llms.GenerateFromSinglePrompt(ctx,
        client,
        "Who was the first man to walk on the moon? Respond in json format, include `first_man` in response keys.",
        llms.WithTemperature(0.0),
        llms.WithJSONMode(),
    )
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(completion)
devalexandre commented 2 days ago

    ctx := context.Background()

    client, err := openai.New(openai.WithModel("gpt-3.5-turbo"))
    if err != nil {
        log.Fatal(err)
    }

    content := []llms.MessageContent{
        llms.TextParts(llms.ChatMessageTypeSystem, "You are a company branding design wizard."),
        llms.TextParts(llms.ChatMessageTypeHuman, "What would be a good company name for a company that makes colorful socks?"),
    }

    // add JSON mode on the request itself
    resp, err := client.GenerateContent(ctx, content, llms.WithJSONMode())
    if err != nil {
        return err
    }

    for _, choice := range resp.Choices {
        fmt.Println(choice.Content)
    }

    return nil
gowthamkishore3799 commented 2 days ago

@devalexandre Yes, I was able to achieve it with the following code:

response, err := client.GenerateContent(ctx, content, llms.WithJSONMode())

However, I noticed that some options are left unused for the OpenAI LLM, specifically openai.WithResponseFormat(responseFormat), which is never applied. I wanted to report that issue here.

devalexandre commented 2 days ago

> @devalexandre Yes, I was able to achieve it with the following code:
>
> response, err := client.GenerateContent(ctx, content, llms.WithJSONMode())
>
> However, I noticed that some options are left unused for the OpenAI LLM, specifically openai.WithResponseFormat(responseFormat), which is never applied. I wanted to report that issue here.

This is not a bug: we have a common options base, but each LLM uses only some of the options, never all of them.

Maybe we could return an error if some options aren't used.
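That suggestion can be sketched generically (the names below are hypothetical, not langchaingo's API): record which call options were provided, then have each provider reject the ones it doesn't support instead of silently ignoring them.

```go
package main

import (
	"errors"
	"fmt"
)

// CallOptions collects per-call settings; set records which keys were provided.
type CallOptions struct {
	JSONMode       bool
	ResponseFormat string
	set            map[string]bool
}

// CallOption is a functional option applied to CallOptions.
type CallOption func(*CallOptions)

// WithJSONMode enables JSON mode and records that the option was set.
func WithJSONMode() CallOption {
	return func(o *CallOptions) { o.JSONMode = true; o.set["json_mode"] = true }
}

// WithResponseFormat sets a response format type and records the option.
func WithResponseFormat(t string) CallOption {
	return func(o *CallOptions) { o.ResponseFormat = t; o.set["response_format"] = true }
}

// validate applies the options, then errors on any the provider doesn't support.
func validate(supported map[string]bool, opts ...CallOption) (*CallOptions, error) {
	o := &CallOptions{set: map[string]bool{}}
	for _, opt := range opts {
		opt(o)
	}
	for key := range o.set {
		if !supported[key] {
			return nil, errors.New("unsupported option: " + key)
		}
	}
	return o, nil
}

func main() {
	// A provider that only understands json_mode.
	supported := map[string]bool{"json_mode": true}

	if _, err := validate(supported, WithJSONMode()); err == nil {
		fmt.Println("json_mode: ok")
	}
	if _, err := validate(supported, WithResponseFormat("json_object")); err != nil {
		fmt.Println(err) // unsupported option: response_format
	}
}
```

With a scheme like this, the original report would have surfaced immediately as an error rather than as a silently nil request field.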