Closed pengtb closed 4 months ago
I'm seeing the exact same thing when using BerriAI/litellm as the backend. Litellm would be particularly useful in combination with shell_gpt because it would allow switching between a local model and OpenAI using just the --model switch.
I didn't test sgpt with any other backends except OpenAI and LocalAI, but as far as I can see the error you are getting is related to the cache. Try disabling caching: `sgpt --no-cache "test"`.
xusenlinzy/api-for-open-llm is another backend similar to LocalAI that can provide an OpenAI-like API for local LLMs. However, it is weird that its generated completions end with a `None`, and shell_gpt then shows a TypeError. Maybe it would be better to add type checks before concatenating? Places are:
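A minimal sketch of the kind of guard I mean (the function name and shape are hypothetical, not shell_gpt's actual code; it assumes the streamed completion arrives as an iterable of chunks where a backend may emit a trailing `None`):

```python
def join_completion_chunks(chunks):
    """Concatenate streamed completion chunks, skipping non-string values.

    Some OpenAI-compatible backends emit a final chunk whose content is
    None; concatenating it directly raises
    TypeError: can only concatenate str (not "NoneType") to str.
    """
    result = ""
    for chunk in chunks:
        if isinstance(chunk, str):  # type check before concatenating
            result += chunk
    return result
```

With this guard, a stream like `["Hello", " world", None]` joins to `"Hello world"` instead of raising a TypeError.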