Closed: goofansu closed this pull request 11 months ago
Thanks for the PR! Sorry, didn't have a chance to try it out until now.
I noticed we're losing error handling. For example, setting the wrong model (3.5t-wrong) now fails silently:
Before:
ChatGPT(3.5t-wrong)> hello
The model `gpt-3.5-turbo-wrong` does not exist
ChatGPT(3.5t-wrong)>
After:
ChatGPT(3.5t-wrong)> hello
ChatGPT(3.5t-wrong)>
Lemme see if I can give some suggestions to handle both cases...
The following cases need to be considered for the Azure GPT streaming response: choices is empty; no delta key; no content key in delta.
This PR returns an empty string for the above cases.
Looks like we were already getting empty strings? Or did I miss a different case?
choices is empty
(let-alist (list)
  (or (let-alist (seq-first .choices)
        (or .delta.content
            .message.content))
      .error.message
      "")) => ""
No delta key, or no content key in delta
(let-alist (list (cons 'choices
                       (list (cons 'delta nil))))
  (or (let-alist (seq-first .choices)
        (or .delta.content
            .message.content))
      .error.message
      "")) => ""
An example of the streaming response:
data: {"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[]}
data: {"id":"chatcmpl-8V1mzvFANVW81eY6AMkrEgfIMuFKN","object":"chat.completion.chunk","created":1702405737,"model":"gpt-4","choices":[{"index":0,"finish_reason":null,"delta":{"role":"assistant"},"content_filter_results":{}}],"system_fingerprint":"fp_50a4261de5"}
May be good to include an error case too if possible.
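For illustration, an error payload flows through the same expression like this; the alist below is a hand-written sketch based on the error message in the Before transcript, not an actual API response:
;; Hypothetical error payload (shape assumed, mirroring an OpenAI-style error).
(let-alist '((error . ((message . "The model `gpt-3.5-turbo-wrong` does not exist"))))
  (or (let-alist (seq-first .choices)
        (or .delta.content
            .message.content))
      .error.message
      "")) => "The model `gpt-3.5-turbo-wrong` does not exist"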
Looks like we were already getting empty strings? Or did I miss a different case?
Nevermind. I got it failing.
(let-alist (seq-first (car (shell-maker--preparse-json "data: {\"id\":\"\",\"object\":\"\",\"created\":0,\"model\":\"\",\"prompt_filter_results\":[{\"prompt_index\":0,\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}],\"choices\":[]}")))
  (or (let-alist (seq-first .choices)
        (or .delta.content
            .message.content))
      .error.message
      "")) => Error: Args out of range: []
Does the following work for Azure?
- (or (let-alist (seq-first .choices)
-       (or .delta.content
-           .message.content))
+ (or (unless (seq-empty-p .choices)
+       (let-alist (seq-first .choices)
+         (or .delta.content
+             .message.content)))
      .error.message
      ""))
(if-let (parsed (shell-maker--json-parse-string json))
    (string-trim
     (let-alist parsed
-      (let-alist (seq-first .choices)
-        .message.content)))
+      (unless (seq-empty-p .choices)
+        (let-alist (seq-first .choices)
+          .message.content))))
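As a quick sanity check (a hand-written sketch, not output from the PR), the patched streaming expression returns an empty string for the empty-choices case instead of erroring:
;; Hand-written alist mirroring the empty-choices Azure chunk above.
(let-alist '((choices . []))
  (or (unless (seq-empty-p .choices)
        (let-alist (seq-first .choices)
          (or .delta.content
              .message.content)))
      .error.message
      "")) => ""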
@xenodium It took me a moment to understand that this was against the unmodified chatgpt-shell.el and not the changes from @goofansu :-). I tried out your diff above and streaming was working with that change as well, so it seems your suggested fix works too. I would still suggest taking @goofansu's README changes as it seems that using ChatGPT on an Azure instance is becoming more popular.
Regardless, thanks for this. It is more useful (for me) to do chatgpt things in emacs than in a browser window.
@NorwegianRockCat thanks for verifying.
I would still suggest taking @goofansu's README changes as it seems that using ChatGPT on an Azure instance is becoming more popular.
Certainly. I have no knowledge in this space. Best to document :)
I'll wait for @goofansu to tweak the PR and I'll merge.
Does the following work for Azure?
@xenodium @NorwegianRockCat Thank you for checking the PR. I updated the code according to the suggestion and can confirm it works.
@goofansu Thanks for the PR and driving the changes.
Fix #155.
@xenodium Also need to test against OpenAI GPT before merging.
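For reference, a hand-written sketch of the patched expression against a typical OpenAI-style delta chunk (the alist is illustrative, not captured from a real response):
;; Illustrative OpenAI-style streaming chunk with a populated delta.
(let-alist '((choices . [((delta . ((content . "Hello"))))]))
  (or (unless (seq-empty-p .choices)
        (let-alist (seq-first .choices)
          (or .delta.content
              .message.content)))
      .error.message
      "")) => "Hello"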