Pythagora-io / gpt-pilot

The first real AI developer

[Bug]: Reprompting broken for local llm #823

Open Wladastic opened 5 months ago

Wladastic commented 5 months ago

Version

Visual Studio Code extension

Operating System

Windows 11

What happened?

When using macOS, Linux, or Windows 11 with WSL2 Ubuntu, I get the following bug.

Whenever a longer output is expected from an agent, GPT-Pilot forces it to go way beyond its token limit. With Hermes-Mistral-7B-Pro, for example, it outputs:

                },
                "human_intervention_description": "Create a directory in root.",
            },
        },
    ]
}
}

"
"Tower
##
##

##
## 1/
##  "Add  "Rem
##
    "Data
​
###  "Redissh
##  "L1
   "Depar---designer
    "and
  "Designb1 0 proposed
##
   Dynamic showed
"
    "Current 
    "F###
"D
  "##
#### 

##
## <dummy00014>w4 1
d  ion
##
##
   A------- - -F

## 
    Accid0   Custom
        "
    */

##  b   B This
##

   A 

I have no idea how to fix this yet. Oobabooga, for example, is showing 17k tokens although .env says 8192. Same with LM Studio and Ollama.
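
One common mitigation for this kind of overrun is to cap the completion budget on the client side instead of trusting the server default. The sketch below is a minimal illustration, not gpt-pilot's actual code: MAX_TOKENS mirrors the .env value mentioned above, and the endpoint URL and complete() helper are hypothetical stand-ins for any OpenAI-compatible local backend (Oobabooga, LM Studio, Ollama).

    import os

    import requests

    # Assumption: MAX_TOKENS mirrors the value gpt-pilot reads from .env
    # (8192 in this report). The endpoint below is a hypothetical
    # OpenAI-compatible local server.
    MAX_TOKENS = int(os.getenv("MAX_TOKENS", "8192"))

    def complete(messages, prompt_tokens,
                 endpoint="http://localhost:5000/v1/chat/completions"):
        # prompt_tokens must be counted with the *backend's* tokenizer;
        # counting with a mismatched tokenizer is one way a client can
        # believe 8192 fits while the server sees 17k.
        budget = MAX_TOKENS - prompt_tokens
        if budget <= 0:
            raise ValueError("Prompt already fills the context window; trim history first.")
        resp = requests.post(endpoint, json={
            "messages": messages,
            "max_tokens": budget,  # hard cap instead of an open-ended server default
            "stream": False,
        }, timeout=300)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]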

Wladastic commented 5 months ago

I have experimented a bit with context length. Cranking up the alpha value seems to have helped a ton.

[screenshot]
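
The "alpha" here presumably refers to the NTK-aware RoPE scaling factor (the alpha_value setting in Oobabooga's ExLlama-style loaders); raising it stretches the usable context window without retraining. A minimal sketch of the scaling rule, with head_dim and base as illustrative defaults rather than values from this report:

    # NTK-aware RoPE scaling: a larger alpha increases the rotary base,
    # slowing the position frequencies so longer contexts stay coherent.
    def scaled_rope_base(alpha: float, head_dim: int = 128,
                         base: float = 10000.0) -> float:
        return base * alpha ** (head_dim / (head_dim - 2))

    print(scaled_rope_base(1.0))  # 10000.0 (no scaling)
    print(scaled_rope_base(2.0))  # ~20221 (roughly doubles the usable window)
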
Wladastic commented 5 months ago

Finally found the error in the log:

    2024-03-30 19:54:10,788 [llm_connection.py:516 - stream_gpt_completion() ] ERROR: Unable to decode line: : ping - 2024-03-30 18:54:10.748186 Expecting value
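
That ": ping - <timestamp>" line is an SSE comment the server emits as a keep-alive; passing it straight to json.loads() raises exactly this "Expecting value" error. A minimal sketch of the kind of guard that avoids it (illustrative only, not gpt-pilot's actual stream_gpt_completion() code):

    import json

    def parse_sse_line(raw: bytes):
        # Lines starting with ':' are SSE comments (e.g. ': ping - <ts>')
        # and blank lines are separators; neither contains JSON.
        line = raw.decode("utf-8").strip()
        if not line or line.startswith(":"):
            return None
        if line.startswith("data:"):  # payload lines look like 'data: {...}'
            line = line[len("data:"):].strip()
        if line == "[DONE]":          # OpenAI-style end-of-stream sentinel
            return None
        return json.loads(line)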

techjeylabs commented 5 months ago

Issue solved, therefore closing.

Wladastic commented 5 months ago

Also, it's not solved. I fixed it locally and will push code changes later this week.

techjeylabs commented 5 months ago

Sorry for the confusion, I was trying to bring order to the chaos. Waiting for your pull request :)

Sophrinix commented 3 weeks ago

Did any progress ever happen on this issue?

Wladastic commented 3 weeks ago

The PR was rejected, I think. Also, I stopped using gpt-pilot as I got annoyed by the forced updates that kept breaking my changes.