Closed: schucan closed this issue 1 month ago
Could you activate the debug option in the add-on preferences, run it again, and post the last lines of the log? Thanks.
Yes - will post as soon as it happens again.
Here we go:
20:43:33.493 [ThunderAI Logger | mzta-background] [ThunderAI] Prompt length: 890 [mzta-logger.js:35:44](moz-extension://79bdee5f.../js/mzta-logger.js)
20:43:34.236 [ThunderAI Logger | mzta-background] [OpenAI ChatGPT] Connection succeeded! [mzta-logger.js:35:44](moz-extension://79bdee5f-0e0d-44ba-97c3-5bf8da3c199e/js/mzta-logger.js)
20:43:35.977 SyntaxError: JSON.parse: unterminated string at line 1 column 154 of the JSON data
[model-worker-openai.js:132:37](moz-extension://79bdee5f.../api_webchat/model-worker-openai.js)
This now happened with this prompt:
Expand these notes into a full email with a greeting and a salutation. You must use the same language as is used in the notes (example: English notes must be made into an English email!). Try to make it formal or informal according to the content. If German, write it for Switzerland, without "scharfes S" (ß), and make it in "Du Form" for informal email, and in polite "Sie Form" for a formal matter. Make the email polite, but simple and clear. The author is the man X Y. End with just the first name X for informal, with the full name X Y for a formal email. Do not add a subject line - just the message text in Markdown format. Here are the notes: "[some text]"
The output was just a bit less than one line of text, cut off in the middle of a sentence.
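For context, a minimal sketch (with hypothetical data) of what this error means: the stream was cut mid-chunk, so the last line received is no longer valid JSON and JSON.parse throws exactly this kind of error.

```js
// Hypothetical data illustrating the failure: a complete chunk line parses,
// a line cut off mid-string does not.
const completeLine =
  '{"choices":[{"index":0,"delta":{"content":" me"},"finish_reason":null}]}';
const truncatedLine =
  '{"choices":[{"index":0,"delta":{"content":" me'; // connection closed here

console.log(JSON.parse(completeLine).choices[0].delta.content); // " me"
try {
  JSON.parse(truncatedLine);
} catch (e) {
  console.error(e.message); // "JSON.parse: unterminated string ..." (Firefox wording)
}
```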
It's not the prompt; I can get a full response without problems. Does it happen every time, with any email?
As I said, it's non-deterministic: whenever it happens, I can just repeat the same request and it works. It's just inconvenient, and it doubles the cost (in tokens).
You're right, sorry.
Could I give you a version with more logging, so we can try to catch the problem? I've tried the prompt many times, but never got the error.
Yes please do.
While you are at it: is there a way to place the additional text string inside a prompt? It would be fantastic (and actually necessary) to have a shortcode that can be used inside the prompt text, for example [t]: "Answer the following email: [t] and express this answer: ", combined with the "Ask for additional text" option checked... I'll test this, too, if you like ;-)
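A hypothetical sketch of the suggested shortcode (not the add-on's actual code): substitute the additional text into the stored prompt wherever [t] appears, falling back to appending it as today.

```js
// Hypothetical helper: replace every [t] placeholder with the additional text;
// if the prompt has no placeholder, append the text at the end instead.
function expandPrompt(promptText, additionalText) {
  return promptText.includes('[t]')
    ? promptText.replaceAll('[t]', additionalText)
    : promptText + ' ' + additionalText;
}

const prompt = 'Answer the following email: [t] and express this answer: ';
console.log(expandPrompt(prompt, '"Dear Bob, ..."'));
```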
OK, I'll post the version to try here. It will be based on version 2.2.0, which I'm working on.
Your suggestion is great; could you file a separate issue? Thank you.
You can find version 2.2.0_i145_v1 here.
Please follow these steps:
This version prints the lines received from ChatGPT to the console log. Before posting the log here, check it for any sensitive information that might appear in the reply.
If you need the source code, you can find it in the branch: https://github.com/micz/ThunderAI/tree/issue_145 (full diff)
Thank you.
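For reference, a rough sketch of the kind of debug output this build adds (assumed helper name; the real code is in the issue_145 branch linked above):

```js
// Log every raw line received from the API before parsing it, so a truncated
// final line becomes visible in the console (hypothetical helper name).
function logReceivedLines(chunkText) {
  for (const line of chunkText.split('\n')) {
    if (line.trim() === '') continue;
    // JSON.stringify quotes the raw line, making truncation easy to spot
    console.log('>>>>>>>>>>>>> [ThunderAI] line: ' + JSON.stringify(line));
  }
}
```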
I finally got it once. The connection was closed by ChatGPT; this is the line received:
>>>>>>>>>>>>> [ThunderAI] line: "{\"id\":\"xxxxxxxxxxxxx\",\"object\":\"chat.completion.chunk\",\"created\":1727813166,\"model\":\"gpt-4o-mini-2024-07-18\",\"system_fingerprint\":\"fp_f85bea6784\",\"choices\":[{\"in"
The one before it, which was complete, looked like this:
>>>>>>>>>>>>> [ThunderAI] line: "{\"id\":\"xxxxxxxxxxxxxxxxxxxxxx\",\"object\":\"chat.completion.chunk\",\"created\":1727813166,\"model\":\"gpt-4o-mini-2024-07-18\",\"system_fingerprint\":\"fp_f85bea6784\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\" me\"},\"logprobs\":null,\"finish_reason\":null}]}"
I will show an error message in this case, but I can't do anything to prevent it.
Here's a quick answer from Kagi Search. I am guessing it must be factor 3 or 4; I can pretty much rule out 1 and 2:
Yes, the OpenAI API can sometimes interrupt chunked output. This issue may arise due to several factors:
To mitigate these issues, consider breaking requests down into smaller parts or adjusting your input to stay within token limits.
Are you setting a token limit? I don't think that's the reason, because it broke with both long and short prompts.
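(For clarity: a "token limit" here would be the optional max_tokens field in the Chat Completions request body. A hypothetical payload for illustration:)

```js
// Hypothetical request body: max_tokens is the only client-side token limit;
// leaving it out (as below) means the API decides when to stop generating.
const body = {
  model: 'gpt-4o-mini',
  stream: true,
  messages: [{ role: 'user', content: 'Expand these notes into an email...' }],
  // max_tokens: 256, // commented out: no client-side cap on the reply length
};
```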
I'm not setting a token limit, but I got the error even with a very small prompt. I'm trying to keep the truncated line and check whether the missing part is resent by ChatGPT, but now I never get the error.
This version tries to recover the missing part of the chunk. I never got the error after implementing this. Please try this one. Thank you!
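A minimal sketch of that recovery idea (assumed names and SSE "data:" framing, not the actual 2.2.0 code): keep any incomplete trailing line in a buffer and prepend it to the next chunk, so a JSON object split across reads still parses.

```js
let pending = ''; // incomplete trailing line carried over between chunks

function handleChunk(chunkText) {
  const lines = (pending + chunkText).split('\n');
  pending = lines.pop(); // the last element may be cut off; keep it for later
  for (const line of lines) {
    if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
    const parsed = JSON.parse(line.slice('data: '.length));
    console.log(parsed.choices[0].delta.content ?? '');
  }
}
```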
The problem will be fixed in 2.2.0; see #147.
I've released version 2.2.0pre5, if you want to test it. See the changelog: https://github.com/micz/ThunderAI/releases/tag/v2.2.0pre5 Thank you.
Fixed in version 2.2.0.
The problem
Expected behavior:
Wrong behavior:
The wrong behavior occurs about half of the time I use the add-on.
At first I thought it could be related to the number of tokens, but it happens randomly, without any correlation with the length of the input or output. Completely non-deterministic so far.
Which operating system are you using?
Windows 10
Which version of Thunderbird are you using?
128.2.3esr (64-bit)
Which version of ThunderAI has the issue?
2.1.5
Which integration are you using?
OpenAI ChatGPT API
Anything in the Thunderbird console logs that might be useful?
Will add when it happens next...
Additional information
No response