ksylvest / omniai-openai

An implementation of the OmniAI interface for OpenAI.
https://omniai-openai.ksylvest.com
MIT License

Client.chat w/ format: :json gets an error response #18

Closed MadBomber closed 3 months ago

MadBomber commented 3 months ago

I got the example message layout from the README file of the omniai gem.

Running Ruby v3.4.0-preview1 on macOS with omniai (1.3.1), omniai-anthropic (1.3.0), omniai-google (1.3.0), omniai-mistral (1.3.0), and omniai-openai (1.3.3).

#!/usr/bin/env ruby
# File: json_format_error.rb

require 'omniai'
require 'omniai/openai'

client = OmniAI::OpenAI::Client.new(timeout: {
  read: 2, # i.e. 2 seconds
  write: 3, # i.e. 3 seconds
  connect: 4, # i.e. 4 seconds
})

messages = [
  {
    role: OmniAI::Chat::Role::SYSTEM,
    content: 'You are a helpful assistant with an expertise in geography.',
  },
  'What is the capital of Canada?'
]

completion = client.chat(messages, model: 'gpt-4o-2024-05-13', temperature: 0.7, format: :json)

puts completion.inspect

__END__

20:34:01 3.4.0preview1 master nibiru:OmniAI $ ./json_format_error.rb
/Users/dewayne/.rbenv/versions/3.4.0-preview1/lib/ruby/gems/3.4.0+0/gems/omniai-1.3.1/lib/omniai/chat.rb:63:in 'OmniAI::Chat#process!': status=#<HTTP::Response::Status 400 Bad Request> headers=#<HTTP::Headers {"Date"=>"Sun, 30 Jun 2024 01:34:04 GMT", "Content-Type"=>"application/json", "Content-Length"=>"219", "Connection"=>"keep-alive", "openai-organization"=>"user-ifrawzdluvy3lyawe5koawx0", "openai-processing-ms"=>"14", "openai-version"=>"2020-10-01", "strict-transport-security"=>"max-age=31536000; includeSubDomains", "x-ratelimit-limit-requests"=>"5000", "x-ratelimit-limit-tokens"=>"600000", "x-ratelimit-remaining-requests"=>"4999", "x-ratelimit-remaining-tokens"=>"599958", "x-ratelimit-reset-requests"=>"12ms", "x-ratelimit-reset-tokens"=>"4ms", "x-request-id"=>"req_91f056cd34313bee75c0ea11b8d19cf2", "CF-Cache-Status"=>"DYNAMIC", "Set-Cookie"=>["__cf_bm=Ea_qGz9Y75xOQS82_nT5poW5ocAiJ0D5YMC3NnSF9g4-1719711244-1.0.1.1-JzGJM4afR1ch_lkAyC3IGngEFs1ePq7m6lCyRzmaF2xoP_mGBCl6zr7n1YOfpcIOQniNN8lkxmOiIZ0Xr4tyKA; path=/; expires=Sun, 30-Jun-24 02:04:04 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None", "_cfuvid=MZTJV2HTUg8E1Ni9hHPC.6KwGo70Mhgqzp6CHyxY2vI-1719711244072-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"], "Server"=>"cloudflare", "CF-RAY"=>"89ba6969da836bf2-DFW", "alt-svc"=>"h3=\":443\"; ma=86400"}> body={ (OmniAI::HTTPError)
  "error": {
    "message": "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": null
  }
}
  from /Users/dewayne/.rbenv/versions/3.4.0-preview1/lib/ruby/gems/3.4.0+0/gems/omniai-1.3.1/lib/omniai/chat.rb:41:in 'OmniAI::Chat.process!'
  from /Users/dewayne/.rbenv/versions/3.4.0-preview1/lib/ruby/gems/3.4.0+0/gems/omniai-openai-1.3.3/lib/omniai/openai/client.rb:73:in 'OmniAI::OpenAI::Client#chat'
  from ./json_format_error.rb:20:in '<main>'
MadBomber commented 3 months ago

I was able to get the request to complete the round trip by adding the word "JSON" in front of the word "assistant" in the content component of the messages object, just as the error response suggested.

I think your intent with the format: parameter was to look up the value and get a text string to insert somewhere within the prompt text. It does not look like that is happening.
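As a sketch of the workaround described above: OpenAI rejects `response_format: json_object` unless the word "JSON" appears somewhere in the messages, so including it in the system prompt satisfies the check. The role strings below are plain-Ruby stand-ins for the OmniAI::Chat::Role constants, and the substring check is an illustration of OpenAI's validation, not the gem's actual code.

```ruby
# System prompt amended to mention JSON, per the API error message.
SYSTEM_PROMPT = 'You are a helpful assistant with an expertise in geography. ' \
                'Respond in JSON.'

messages = [
  { role: 'system', content: SYSTEM_PROMPT },
  { role: 'user', content: 'What is the capital of Canada?' },
]

# OpenAI's validation requires the word "json" (in some form) to appear in
# the messages before it will honor response_format of type json_object.
mentions_json = messages.any? do |m|
  m.is_a?(Hash) && m[:content].to_s.downcase.include?('json')
end

puts mentions_json
```

With the original prompt (no mention of JSON) the same check is false, which matches the 400 invalid_request_error shown in the backtrace.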

ksylvest commented 3 months ago

@MadBomber thanks for the submission. The documentation for the format argument includes a note on JSON usage:

https://github.com/ksylvest/omniai-openai?tab=readme-ov-file#format

Specifically, it contains:

completion = client.chat([
  { role: OmniAI::Chat::Role::SYSTEM, content: OmniAI::Chat::JSON_PROMPT },
  { role: OmniAI::Chat::Role::USER, content: 'What is the name of the drummer for the Beatles?' }
], format: :json)
JSON.parse(completion.choice.message.content) # { "name": "Ringo" }

When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message.

Whenever generating JSON, I'd suggest either using the built-in constant prompt or using a prompt that explicitly calls out JSON (it sounds like you wound up using the latter). The constant simply includes the following text as a system message:

"Respond with valid JSON. Do not include any non-JSON in the response."

To be clear, the intent of format isn't to do a lookup. It simply passes response_format through as json_object. If you feel the documentation for any of this could be cleaned up, I'm happy to accept a PR, but I'm going to close this for now since the behaviour is as expected: the example just didn't include the JSON_PROMPT.
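To make the pass-through concrete, here is a hedged sketch of what `format: :json` amounts to in the request body, based on the maintainer's explanation. The method name and payload shape are illustrative, not the gem's actual internals.

```ruby
# Illustrative only: `format: :json` is not a prompt lookup; it sets the
# OpenAI `response_format` field on the request payload.
def request_payload(messages, model:, format: nil)
  payload = { model: model, messages: messages }
  payload[:response_format] = { type: 'json_object' } if format == :json
  payload
end

payload = request_payload(
  [{ role: 'user', content: 'Reply in JSON with the capital of Canada.' }],
  model: 'gpt-4o-2024-05-13',
  format: :json
)

puts payload[:response_format][:type]
```

Because the field only shapes the response, the instruction to actually emit JSON still has to come from the messages themselves, which is why the JSON_PROMPT (or an equivalent mention of JSON) is required.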