mgallo / openai.ex

A community-maintained OpenAI API wrapper written in Elixir.
MIT License
321 stars · 70 forks

Intermittent Jason.DecodeError while streaming output #60

Closed bfolkens closed 3 months ago

bfolkens commented 7 months ago

During periods of high volume, and in particular when using some of the gpt-3.5 series models, OpenAI will occasionally split events across multiple chunks. The current approach of splitting each chunk on "\n" assumes every chunk contains only complete events. However, this is not always the case, and decoding a partial line raises:

** (Jason.DecodeError) unexpected end of input at position 18
    (jason 1.4.0) lib/jason.ex:92: Jason.decode!/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (openai 0.6.1) lib/openai/stream.ex:57: anonymous fn/1 in OpenAI.Stream.new/1
    (elixir 1.15.6) lib/stream.ex:1626: Stream.do_resource/5
    (elixir 1.15.6) lib/stream.ex:690: Stream.run/1
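The usual remedy for this class of bug is to buffer the raw bytes and only decode lines that are newline-terminated, carrying any trailing partial line over to the next chunk. Below is a minimal, hypothetical sketch of that idea (the module name `SSEBuffer` and function `split_chunk/2` are illustrative, not part of openai.ex or PR #61):

```elixir
defmodule SSEBuffer do
  @moduledoc """
  Hypothetical sketch of chunk buffering for a server-sent-event stream:
  only complete, newline-terminated lines are ever handed to the JSON
  decoder; a trailing partial line is kept as the new buffer.
  """

  @doc """
  Prepends the leftover `buffer` to the incoming `chunk`, splits on "\\n",
  and returns `{complete_lines, new_buffer}`. The last element of the
  split is either an incomplete line or "" and becomes the new buffer.
  """
  def split_chunk(buffer, chunk) do
    parts = String.split(buffer <> chunk, "\n")
    {complete, [rest]} = Enum.split(parts, -1)
    {complete, rest}
  end
end

# Usage: an event split across two chunks is only decoded once complete.
{[], buf} = SSEBuffer.split_chunk("", ~s(data: {"choices"))
{lines, ""} = SSEBuffer.split_chunk(buf, ~s(: []}\n))
# lines now holds the single complete line ~s(data: {"choices": []})
```

Only the lines in `complete` would then be stripped of their `data: ` prefix and passed to `Jason.decode!/2`, which avoids the "unexpected end of input" error above.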
JoaoSetas commented 6 months ago

Same problem here when using streaming with the Assistants API. For me it seems to happen once the assistant instructions text reaches a certain size. For now I've reduced the instruction size and the problem doesn't seem to happen.

thiagomajesk commented 5 months ago

I'm having a similar issue with the latest models (GPT-4 and GPT-4o) while using the Assistants API. I get the following error when I try to stream the response of the `threads_create_and_run` function:

[error] Unexpected message: {#Reference<0.3374531620.451411971.196574>, :stream_next}
bfolkens commented 5 months ago

@JoaoSetas, @thiagomajesk - have you tried PR #61? I'm using that patch successfully in production at high volume and the problem no longer occurs.

thiagomajesk commented 5 months ago

Hi, @bfolkens! I'm not sure if this is yet another issue on top of that one or a separate issue, but the problem persists even with your branch. Check this code out:

OpenAI.threads_create_and_run([
  assistant_id: @id,
  model: "gpt-4o",
  stream: true,
  thread: %{messages: [%{role: "user", content: "Hi"}]}
])
|> Enum.to_list()

The same problem happens with both the Assistants beta API v1 and v2.

bfolkens commented 4 months ago

@thiagomajesk sorry for the delay - it seems like that "Unexpected message ... :stream_next" issue you mentioned above is a separate one. Also, I looked at the openai.ex source: `threads_create_and_run/1` uses the same underlying function calls as `completion`, so PR #61 should cover the threads API as well.

Additionally, the error looks like it might be generated outside of this library. Are you able to locate the code path in your application that is generating that log message?

stuartjohnpage commented 3 months ago

I am also experiencing this issue intermittently and it's quite frustrating. I'm looking forward to the fix being merged!

nickgnd commented 3 months ago

Jumping in here to thank @mgallo for all the hard work done implementing the OpenAI client, thank youuu 🙇 and @bfolkens for the patch that fixes the described issue, lovely ❤️

mgallo commented 3 months ago

Hey everyone, sorry I haven't been able to maintain the project as it deserves; life and work have been super busy. Big thanks to @bfolkens (again) for your contribution! I'll review the PR and, if it's all OK, publish the fix in the next few hours. I'll post here when it's released.

mgallo commented 3 months ago

The new patch (v0.6.2) has been released! 🎉 If you notice anything not working properly, please keep us posted. Thanks @bfolkens for your efforts!