Wladastic opened 5 months ago
I have experimented a bit with context length. Cranking up the alpha value seems to have helped a ton.
Found the error in the log finally: 2024-03-30 19:54:10,788 [llm_connection.py:516 - stream_gpt_completion() ] ERROR: Unable to decode line: : ping - 2024-03-30 18:54:10.748186 Expecting value
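That "Unable to decode line ... Expecting value" error is what `json.loads` raises when it is handed a Server-Sent Events keep-alive comment (a line starting with `:`, like the `: ping - ...` in the log) instead of a JSON payload. A minimal sketch of a tolerant stream-line parser, assuming an OpenAI-style SSE stream; the function name `parse_sse_line` is mine, not gpt-pilot's:

```python
import json

def parse_sse_line(raw: bytes):
    """Parse one Server-Sent Events line; return the JSON payload or None.

    Lines starting with ':' are SSE comments (keep-alive pings) and must be
    skipped rather than fed to json.loads, which would raise
    "Expecting value" exactly as in the log above.
    """
    line = raw.decode("utf-8").strip()
    if not line or line.startswith(":"):
        return None  # blank line or comment such as ": ping - 2024-03-30 ..."
    if line.startswith("data:"):
        line = line[len("data:"):].strip()
    if line == "[DONE]":
        return None  # OpenAI-style end-of-stream sentinel
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None  # log-and-skip instead of crashing the whole stream
```

With this, the ping line from the log is silently skipped while real `data:` chunks still parse.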
Issue solved, therefore closing.
Actually, it's not solved; I fixed it locally and will push the code changes later this week.
Sorry for the confusion, I was trying to bring order to the chaos. Waiting for your pull request :)
Did any progress ever happen on this issue?
The PR was rejected, I think. I also stopped using gpt-pilot because I got annoyed with the forced updates that kept breaking my changes.
Version
Visual Studio Code extension
Operating System
Windows 11
What happened?
When using macOS, Linux, or Windows 11 with WSL2 Ubuntu, I get the following bug.
Whenever a longer output is expected from an agent, GPT-Pilot forces it to go way beyond its token limit. With Hermes-Mistral-7B-Pro, for example, outputting:
I have no idea how to fix this yet. Oobabooga, for example, shows 17k tokens although .env says 8192. Same with LM Studio and Ollama.
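One way to surface this mismatch before the backend does is to count the prompt tokens client-side and refuse to send a request that would overflow the configured window. A hedged sketch, not gpt-pilot's actual code: the env var name `MAX_TOKENS` and the characters-per-token heuristic are assumptions; a real count needs the model's own tokenizer (e.g. tiktoken or a Hugging Face tokenizer).

```python
import os

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    # Placeholder only; swap in the model's real tokenizer for accuracy.
    return max(1, len(text) // 4)

def check_context_budget(messages, reply_budget=1024):
    """Raise before sending a request whose prompt plus expected reply
    would exceed the context window declared in .env.

    MAX_TOKENS as the env var name is an assumption for illustration.
    """
    limit = int(os.getenv("MAX_TOKENS", "8192"))
    prompt_tokens = sum(rough_token_count(m["content"]) for m in messages)
    if prompt_tokens + reply_budget > limit:
        raise ValueError(
            f"Prompt (~{prompt_tokens} tokens) + reply budget ({reply_budget})"
            f" exceeds the configured context window of {limit} tokens"
        )
    return prompt_tokens
```

A guard like this would make the overflow fail loudly on the client instead of the backend silently reporting 17k tokens against an 8192 window.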