rhohndorf / Auto-Llama-cpp

Uses Auto-GPT with Llama.cpp
MIT License
384 stars · 68 forks

Error Message after "thinking" #10

Open Alexander220799 opened 1 year ago

Alexander220799 commented 1 year ago


Steps to reproduce 🕹

You start the program by executing the main.py file, then press "y" and Enter. After a couple of minutes you will get an error message. I am using the Vicuna 13B model.

Current behavior 😯

I press Enter and this is the output after letting it "think":

```
AutoGPT INFO Error: : Traceback (most recent call last):
  File "C:\Users\alexr\Documents\Auto-Llama-cpp\scripts\main.py", line 79, in print_assistant_thoughts
    assistant_reply_json = fix_and_parse_json(assistant_reply)
  File "C:\Users\alexr\Documents\Auto-Llama-cpp\scripts\json_parser.py", line 52, in fix_and_parse_json
    brace_index = json_str.index("{")
ValueError: substring not found
```

Expected behavior 🤔

I think it should continue with the process.

Your prompt 📝

# Paste your prompt here
thomasfifer commented 1 year ago

It will likely be some trial and error finding model and prompt combinations that work well together. For the Vicuna 13B model I got a little further with the prompt edits made in the prompt composition discussion, but it still has some issues further down the line.

Try this: https://github.com/thomasfifer/Auto-Llama-cpp/commit/4ccf2de27594e310a734e5caa9d59c693c2bb74b

Also, if you turn on debug mode in config.py, you will get more information about the input/output strings before running into the error.

Alexander220799 commented 1 year ago

Thank you for your help. Is there a way to integrate the changes directly into the code?

thomasfifer commented 1 year ago

It's all on GitHub; you should be able to use the normal git commands to check it out:

```
git remote add thomasfifer https://github.com/thomasfifer/Auto-Llama-cpp.git
git fetch thomasfifer
git checkout thomasfifer/vicuna-13b
```

Alexander220799 commented 1 year ago

Thank you. I will try this.

Alexander220799 commented 1 year ago

I cloned your repo and tried with the standard prompt (Entrepreneur-GPT, standard role, etc.). Same error as last time. When I let it start "thinking", it drops another error message too:

"Warning: model not found. Using cl100k_base encoding."

The general output is this:

"Using memory of type: LocalCache / Thinking... llama_print_timings: load time = 2277.07 ms llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run) llama_print_timings: prompt eval time = 2276.86 ms / 2 tokens ( 1138.43 ms per token) llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per run) llama_print_timings: total time = 2277.28 ms Warning: model not found. Using cl100k_base encoding. Warning: model not found. Using cl100k_base encoding. " In the .env file I pasted the Full Path to the Model (SMART_LLM_MODEL=C:\Users\username\Documents\KI-Models\Vicuna_13b_40\ggml-vicuna-13b-1.1-q4_0.bin)

thomasfifer commented 1 year ago

With the updated prompt.txt, I'm really surprised you're still not getting any JSON output (you should at least have a '{' character in there somewhere). Although I have been using the legacy vicuna-13b instead of the newer 1.1 version that you're using, so there might be a different prompt that works better on 1.1. I did push up another commit to the vicuna-13b branch on my fork that helps parse the output a bit better to find the start of the JSON.

You could try the legacy version with my latest commit to at least get a few steps into the AI trying to perform things, to see what it's capable of. There's still some work to be done on modifying the prompt or trying other models to get a reliable output where it doesn't break after a few commands, though.
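For illustration, one way to locate and parse the JSON object inside noisy model output looks like the sketch below; this is only an example of the general technique, not the actual commit on the vicuna-13b branch:

```python
import json
from typing import Optional

def extract_json(reply: str) -> Optional[dict]:
    """Scan from the first '{' to its matching '}' and parse that slice."""
    start = reply.find("{")  # find() returns -1 instead of raising ValueError
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(reply[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matched the opening brace
                try:
                    return json.loads(reply[start : i + 1])
                except json.JSONDecodeError:
                    return None
    return None  # unbalanced braces: the reply was likely truncated
```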

rhohndorf commented 1 year ago

Since different models will need different prompts, we should make the prompt configurable and add them all in data/prompts, similar to how llama.cpp does it.
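A minimal sketch of that idea, assuming one template file per model under data/prompts (the file names and helper below are hypothetical):

```python
from pathlib import Path

PROMPT_DIR = Path("data") / "prompts"

def load_prompt(model_name: str) -> str:
    """Load a model-specific prompt template, falling back to a default."""
    candidate = PROMPT_DIR / f"{model_name}.txt"  # e.g. data/prompts/vicuna-13b.txt
    if candidate.exists():
        return candidate.read_text(encoding="utf-8")
    return (PROMPT_DIR / "default.txt").read_text(encoding="utf-8")
```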

MatchTerm commented 1 year ago

Had the same issue as well, so hopefully it can be looked into.

swisspitbull commented 1 year ago

Same issue, testing different models... same result.

jchen-sinai commented 1 year ago

Same issue here:

```
Error: Traceback (most recent call last):
  File "/data/Auto-Llama-cpp/scripts/main.py", line 79, in print_assistant_thoughts
    assistant_reply_json = fix_and_parse_json(assistant_reply)
  File "/data/Auto-Llama-cpp/scripts/json_parser.py", line 57, in fix_and_parse_json
    brace_index = json_str.index("{")
ValueError: substring not found

NEXT ACTION: COMMAND = Error: ARGUMENTS = substring not found
```

swisspitbull commented 1 year ago

This is what GPT explains about this error:

This Python error message indicates that there is an issue with finding a substring within a JSON string. Specifically, the code is attempting to find the index of the first occurrence of the character '{' within the JSON string, but this character is not present in the string. As a result, a ValueError exception is raised with the message "substring not found".

To fix this error, you will need to investigate why the expected JSON string is not being generated or passed correctly to the relevant code. One possible solution is to add some error handling code to the fix_and_parse_json function to handle cases where the expected substring is not found within the JSON string. Another approach could be to modify the code that generates or passes the JSON string to ensure that it always contains the expected substring.
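Following that suggestion, a minimal sketch of what the error handling could look like, assuming the same fix_and_parse_json entry point as scripts/json_parser.py (the body here is illustrative, not the project's actual implementation):

```python
import json

def fix_and_parse_json(json_str: str) -> dict:
    """Parse the model reply, returning a structured error instead of raising."""
    brace_index = json_str.find("{")  # find() returns -1 where index() raises
    if brace_index == -1:
        # No JSON object in the reply at all: report it instead of crashing
        # the main loop with "ValueError: substring not found".
        return {"error": "no JSON object found in model reply", "raw": json_str}
    json_str = json_str[brace_index : json_str.rfind("}") + 1]
    try:
        return json.loads(json_str)
    except json.JSONDecodeError:
        return {"error": "model reply was not valid JSON", "raw": json_str}
```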

dillfrescott commented 1 year ago

Same issue

DevMentat commented 1 year ago

Same issues with both the legacy 13B and 7B models and the newest one.

thestumonkey commented 1 year ago

I don't get an error after thinking; it just sits there forever. I do get the "Warning: model not found. Using cl100k_base encoding" warning, though.