Closed johnd0e closed 10 months ago
It's a pity to hear that. This should be so easy to fix on the shell_gpt end.
The model I use is not self-hosted; it is an OpenAI-compatible web service, so I do not see how LocalAI could solve the issue.
I think the right way to resolve the issue in this case is to fix the response of your custom model/backend so it doesn't return empty choices. Or just change the ShellGPT code on your end to ignore empty choices.
I have no control over that backend.
And the response with empty choices is not an error; it serves its own purpose, bringing extra info in other fields.
Honestly, I have no idea where exactly those extra fields (like `prompt_filter_results`) are used, but as I see it, that is just a minor extension of the standard OpenAI output, and it would be great if such minor differences did not crash shell_gpt.
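For illustration, a metadata-only first chunk of this kind looks roughly like the following (the exact shape is an assumption; `prompt_filter_results` is the content-filter extension some OpenAI-compatible backends attach, and the surrounding fields may vary):

```json
{
  "id": "chatcmpl-xxxx",
  "object": "chat.completion.chunk",
  "choices": [],
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {}
    }
  ]
}
```

Structurally this is still a valid chat-completion chunk; the only unusual part is that `choices` is empty.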
It would be beneficial for shell_gpt to be robust when processing such data: in the end, it is all valid JSON with the correct structure.
You can check yourself:
api base: https://one.caifree.com/v1
token: sk-oR2hYL4yYPeFKip96c6a8256C05d4d628bE7E526336718Ff
With my custom model, the first chunk always contains an empty choices list. There are no problems with other utilities, but shell_gpt does not expect such a case:
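A minimal sketch of the kind of defensive handling being asked for, assuming each streamed chunk is a dict parsed from an SSE `data:` line (the function name and stream shape here are illustrative, not shell_gpt's actual code):

```python
def iter_deltas(chunks):
    """Yield content fragments, ignoring chunks whose choices list is empty.

    A chunk with empty "choices" carries only metadata (for example
    prompt_filter_results) and should be skipped, not treated as an error.
    """
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if not choices:  # metadata-only chunk: skip it instead of crashing
            continue
        delta = choices[0].get("delta", {})
        content = delta.get("content")
        if content:
            yield content


# Example stream: the first chunk has empty choices, as described above.
stream = [
    {"choices": [], "prompt_filter_results": [{"prompt_index": 0}]},
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": " world"}}]},
]
print("".join(iter_deltas(stream)))  # Hello world
```

The key point is the single `if not choices: continue` guard: indexing `choices[0]` unconditionally is exactly what breaks on such a backend.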