Closed kevin1sMe closed 4 months ago
I don’t know where this is coming from, but it’s not from my code in the workflow. If you’ve modified the workflow, supporting those changes becomes up to you.
I had this problem when using other OpenAI-compatible models; when I switched back to a standard OpenAI model like gpt-3.5-turbo, it seemed to work fine.
As per the FAQ (emphasis added):
The workflow offers the ability to change the API endpoints and override model names in the Workflow Environment Variables. This requires advanced configuration and is not something we can provide support for, but our community are doing it with great success and can help you on a different thread.
There are too many alternative models and their behaviour may be inconsistent despite claiming OpenAI-compatibility. They may for example not support streaming (real example, though I forget the exact model). The workflow supports local models on a best-effort basis but debugging those is up to users who choose to go that path. The linked thread exists precisely to help with those. You can also try a different workflow geared specifically for those models.
Have a nice weekend.
I think I understand the reason now. The issue is caused by the omission of the finish_reason
field in the response from the OpenAI-compatible API. Here is an example of the response:
data: {"id":"chatcmpl-0fe0e3b0243745e0b4105773509f79d9","object":"chat.completion.chunk","created":1718345898,"model":"tencent-hunyuan","choices":[{"index":0,"delta":{"content":"I"}}]}
This omission leads to the following condition not being met:
// If response is not finished, continue loop
if (finishReason === null) return JSON.stringify({
  rerun: 0.1,
  variables: { streaming_now: true },
  response: responseText,
  behaviour: { response: "replacelast", scroll: "end" }
})
To make it compatible, I modified it as follows:
// If response is not finished, continue loop
if (finishReason === null || finishReason === undefined) return JSON.stringify({
  rerun: 0.1,
  variables: { streaming_now: true },
  response: responseText,
  behaviour: { response: "replacelast", scroll: "end" }
})
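An equivalent, more compact way to write that check (my suggestion, not part of the workflow) uses JavaScript's loose equality: `x == null` is true exactly when `x` is `null` or `undefined`, and for no other value.

```javascript
// Loose equality against null matches null and undefined, nothing else:
console.log(undefined == null)  // true
console.log(null == null)       // true
console.log(0 == null)          // false
console.log("" == null)         // false

// So the two-clause condition could be collapsed to:
// if (finishReason == null) { /* continue streaming */ }
```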
Thank you for your response. I'm not sure whether others are experiencing the same issue. I am using this universal API gateway (https://github.com/songquanpeng/one-api), which has over 15k stars, so I believe supporting it would benefit many users. Should I submit a PR for this?
Have a great weekend!
Should I submit a PR for this?
No, but thank you for asking. The fix isn’t that straightforward because it needs to prioritise and be robust for the main use case. A workaround could be added but not without complicating the code and interface, and a specific goal of this workflow is to be focused and not become overwhelming. APIs which claim to be compatible but aren’t are out of scope.
Frequently Asked Questions
Workflow version
v2024.11
Alfred version
5.5
macOS version
14.5
Debugger output
More details
Stream output is interrupted. While debugging I found that the content of the stream.txt file is complete, and fetching the data directly with curl in stream mode also yields a complete result. I'm unsure what caused the interruption in the output. Please help, thank you!
BTW: I had this problem when using other OpenAI-compatible models, and when I switched back to a standard OpenAI model like gpt-3.5-turbo, it seemed to work fine.
I have added some logging for debugging; the stream.txt looks like this: