keldenl / gpt-llama.cpp

A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.

Add support for Auto-GPT #2

Open keldenl opened 1 year ago

keldenl commented 1 year ago

Get Auto-GPT working. It's blocked on the following two items:

- Embeddings aren't working quite right / are very inconsistent between models, so having a built-in local embedding option would be good (a sketch of the local approach follows this list).
- Auto-GPT doesn't expose the OpenAI base URL as a setting, so pointing it at a local server currently requires editing the code.
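
As a rough illustration, a local embedding option could be as small as this sketch. It assumes `pip install sentence-transformers` and uses all-mpnet-base-v2, the same model the bundled embedding extension is named after:

    # Minimal sketch of a local embedding call with sentence-transformers,
    # the library the all-mpnet-base-v2 extension imports.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-mpnet-base-v2")
    vector = model.encode("hello world")  # numpy array; 768 dimensions for this model
    print(len(vector))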

For the second bullet point, we can technically work around it and set the BASE_URL by modifying the code, but it would be nice to have it as an option / env variable so it's easier to get started.
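
Roughly, today's workaround boils down to the following (a sketch in openai-python 0.x style; the URL and model path are examples):

    # Sketch of the manual workaround: point the OpenAI client at gpt-llama.cpp.
    import openai

    openai.api_base = "http://localhost:443/v1"  # gpt-llama.cpp server
    # gpt-llama.cpp repurposes the API key field as the path to the local model file.
    openai.api_key = "../llama.cpp/models/7B/ggml-model-q4_0.bin"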

I'll likely open a new PR against Auto-GPT for the second point, and push to get the PR I linked above merged.

maddes8cht commented 1 year ago

I would like to point out these error messages from gpt-llama.cpp.

It happens right after I successfully get the JSON response with the command in Auto-GPT and am asked to confirm the command (I am not sure it always happens, since it has only just attracted my attention, but it has been happening more often). After confirming with y there, I get the following on the gpt-llama.cpp side:

> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /v1/embeddings

=====  EMBEDDING REQUEST  =====

=====  PYTHON EMBEDDING EXTENSION SPAWNED  =====
exec error: Error: Command failed: python E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\main.py E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\input.txt > E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\output.txt
Traceback (most recent call last):
  File "E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\main.py", line 2, in <module>
    from sentence_transformers import SentenceTransformer
ModuleNotFoundError: No module named 'sentence_transformers'

=====  STDERR  =====
stderr Readable Stream: CLOSED
== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - If you want to submit another line, end your input in '\'.

I don't see why this is happening. Why is Auto-GPT sending anything to gpt-llama.cpp at all when it is supposed to do a Google search with Chrome?

For further investigation, I append the full session output.

This is the (almost) complete output from Auto-GPT:

Welcome back!  Would you like me to return to being Entrepreneur-GPT?
Continue with the last settings?
Name:  Entrepreneur-GPT
Role:  an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Goals: ['Increase net worth.', 'Develop and manage multiple businesses autonomously.', 'Play to your strengths as a Large Language Model.']
Continue (y/n): yy
Using memory of type:  LocalCache
Using Browser:  chrome
  Token limit: 2000
  Memory Stats: (0, (0, 6656))
  Token limit: 2000
  Send Token Count: 966
  Tokens remaining for response: 1034
  ------------ CONTEXT SENT TO AI ---------------
  System: The current time and date is Sat May 13 15:13:55 2023

  System: This reminds you of these events from your past:

  User: Determine which next command to use, and respond using the format specified above:

  ----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 1034
json {
"thoughts": {
"text": "I am considering what the best command would be to pursue my goals",
"reasoning": "To increase net worth, I must develop multiple businesses autonomously. To do this, I need information from the web and assistance from GPT-3.5 powered agents. However, I cannot seek user assistance.",
"plan": "- Use Google Search for information gathering\n- Create a new agent to help me",
"criticism": "I must ensure that I only use commands allowed by my constraints, such as using subprocesses for long running processes and not seeking user assistance. Additionally, I should focus on simple strategies.",
"speak": "Let's start with gathering information from the web to see what business ideas we can find."
},
"command": {
"name": "Google Search",
"args": {
"google": "business ideas"
}
}
json loads error Expecting ',' delimiter: line 14 column 2 (char 817)
The JSON object is valid.
 THOUGHTS:  I am considering what the best command would be to pursue my goals
REASONING:  To increase net worth, I must develop multiple businesses autonomously. To do
 this, I need information from the web and assistance from GPT-3.5 powered agents. However, I cannot seek user assistance.
PLAN:
-  Use Google Search for information gathering
-  Create a new agent to help me
CRITICISM:  I must ensure that I only use commands allowed by my constraints, such as using subprocesses for long running processes and not seeking user assistance. Additionally,
 I should focus on simple strategies.
NEXT ACTION:  COMMAND = Google Search ARGUMENTS = {'google': 'business ideas'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\http\client.py", line 1374, in getresponse
    response.begin()
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\http\client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\socket.py", line 705, in readinto
    return self._sock.recv_into(b)
TimeoutError: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\util\retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\packages\six.py", line 770, in reraise
    raise value
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\urllib3\connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=443): Read timed out. (read timeout=600)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\openai\api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\requests\adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=443): Read timed out. (read timeout=600)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\AutoGPT\Auto-GPT\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "E:\AutoGPT\Auto-GPT\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "E:\AutoGPT\Auto-GPT\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "E:\AutoGPT\Auto-GPT\autogpt\memory\local.py", line 78, in add
    embedding = create_embedding_with_ada(text)
  File "E:\AutoGPT\Auto-GPT\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\openai\api_requestor.py", line 216, in request
    result = self.request_raw(
  File "C:\Users\Mathias\anaconda3\envs\AutoLlama\lib\site-packages\openai\api_requestor.py", line 526, in request_raw
    raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPConnectionPool(host='localhost', port=443): Read timed out. (read timeout=600)

And this is what appears on the gpt-llama.cpp side:


E:\AutoGPT\gpt-llama.cpp>npm start mlock threads 6 ctx_size 2048 mirostat 2 repeat_penalty 1.15

> gpt-llama.cpp@0.2.6 start
> node index.js mlock threads 6 ctx_size 2048 mirostat 2 repeat_penalty 1.15

Server is listening on:
  - http://localhost:443
  - http://10.5.0.2:443 (for other devices on the same network)

See Docs
  - http://localhost:443/docs

Test your installation
  - double click the scripts/test-installation.ps1 (powershell) or scripts/test-installation.bat (cmd) file

See https://github.com/keldenl/gpt-llama.cpp#usage for more guidance.
> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /v1/chat/completions
> LLAMA.CPP DETECTED

=====  CHAT COMPLETION REQUEST  =====
> AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{ '--n_predict': 1034 }

=====  LLAMA.CPP SPAWNED  =====
E:\AutoGPT\llama.cpp\main -m E:\AutoGPT\llama.cpp\models\OpenAssistant-30B-epoch7.ggml.q5_1.bin --temp 0.7 --n_predict 1034 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --mlock --threads 6 --ctx_size 2048 --mirostat 2 --repeat_penalty 1.15 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

 -i -p Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.

GOALS:

1. Increase net worth.
2. Develop and manage multiple businesses autonomously.
3. Play to your strengths as a Large Language Model.

Constraints:
1. 2000 words max for short term memory. Save important info to files ASAP
2. To recall past events, think of similar ones. Helps with uncertainty.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
5. Use subprocesses for commands that will not terminate within a few minutes

Commands:
1. Google Search: "google", args: "input": "<search>"
2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"
5. List GPT Agents: "list_agents", args:
6. Delete GPT Agent: "delete_agent", args: "key": "<key>"
7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"
8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"
9. Read file: "read_file", args: "file": "<file>"
10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
11. Delete file: "delete_file", args: "file": "<file>"
12. Search Files: "search_files", args: "directory": "<directory>"
13. Analyze Code: "analyze_code", args: "code": "<full_code_string>"
14. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
15. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
16. Execute Python File: "execute_python_file", args: "file": "<file>"
17. Generate Image: "generate_image", args: "prompt": "<prompt>"
18. Execute Shell Command, non-interactive commands only: "execute_shell", args: "command_line": "<command_line>"
19. Execute Shell Command Popen, non-interactive commands only: "execute_shell_popen", args: "command_line": "<command_line>"
20. Do Nothing: "do_nothing", args:
21. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"

Resources:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
4. File output.

Performance Evaluation:
1. Continuously review and analyze your actions to perform to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Be smart and efficient. Aim to complete tasks in the least number of steps.

You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "<replace with single sentence of your AI thoughts>",
        "reasoning": "<replace with single sentence of your AI reasoning>",
        "plan": "<replace with short dashed (-) list that convey your AI long term plan>",
        "criticism": "<replace with single sentence of your AI constructive self-criticism>",
        "speak": "<replace with single string of your AI thoughts summary to say to user as response>"
    },
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    }
}
Ensure the response can be parsed by Python json.loads
system: The current time and date is Sat May 13 15:13:55 2023
system: This reminds you of these events from your past:

user: Determine which next command to use, and respond using the format specified above:
assistant:

=====  REQUEST  =====
user: Determine which next command to use, and respond using the format specified above:
=====  PROCESSING PROMPT...  =====
=====  PROCESSING PROMPT...  =====

=====  RESPONSE  =====
 {
"thoughts": {
"text": "I am considering what the best command would be to pursue my goals",
"reasoning": "To increase net worth, I must develop multiple businesses autonomously. To do this, I need information from the web and assistance from GPT-3.5 powered agents. However, I cannot seek user assistance.",
"plan": "- Use Google Search for information gathering\n- Create a new agent to help me",
"criticism": "I must ensure that I only use commands allowed by my constraints, such as using subprocesses for long running processes and not seeking user assistance. Additionally, I should focus on simple strategies.",
"speak": "Let's start with gathering information from the web to see what business ideas we can find."
},
"command": {
"name": "Google Search",
"args": {
"google": "business ideas"
}
}
user:Request DONE
> PROCESS COMPLETE
> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /v1/embeddings

=====  EMBEDDING REQUEST  =====

=====  PYTHON EMBEDDING EXTENSION SPAWNED  =====
exec error: Error: Command failed: python E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\main.py E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\input.txt > E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\output.txt
Traceback (most recent call last):
  File "E:\AutoGPT\gpt-llama.cpp\InferenceEngine\embeddings\all-mpnet-base-v2\main.py", line 2, in <module>
    from sentence_transformers import SentenceTransformer
ModuleNotFoundError: No module named 'sentence_transformers'

=====  STDERR  =====
stderr Readable Stream: CLOSED
== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - If you want to submit another line, end your input in '\'.
maddes8cht commented 1 year ago

As a side note: as you can see in the output, I changed the hint for the "plan" field further, so it now asks for a "dashed" list instead of the braces, which almost always results in a JSON response that Auto-GPT accepts.

That's how it is now:

        self.response_format = {
            "thoughts": {
                "text": "<replace with single sentence of your AI thoughts>",
                "reasoning": "<replace with single sentence of your AI reasoning>",
                "plan": "<replace with short dashed (-) list that convey your AI long term plan>",
                "criticism": "<replace with single sentence of your AI constructive self-criticism>",
                "speak": "<replace with single string of your AI thoughts summary to say to user as response>",
            },
            "command": {"name": "command name", "args": {"arg name": "value"}},
        }
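
As a quick sanity check (my own sketch, with placeholder content), a dashed-list "plan" embedded this way stays parseable by Python's json.loads, which is what Auto-GPT requires:

    import json

    # Placeholder reply in the response_format above; the "plan" newlines are
    # escaped (\n) inside the JSON string, so the document parses cleanly.
    reply = '''{
        "thoughts": {
            "text": "gather business ideas",
            "reasoning": "need web information first",
            "plan": "- Use Google Search for information gathering\\n- Create a new agent",
            "criticism": "keep strategies simple",
            "speak": "Let's gather information from the web."
        },
        "command": {"name": "google", "args": {"input": "business ideas"}}
    }'''

    parsed = json.loads(reply)
    print(parsed["thoughts"]["plan"].splitlines())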
maddes8cht commented 1 year ago

With my current set of prompts, I'm getting pretty consistent and reliable JSON responses from OpenAssistant 30B.

I'm seeing improvements, but still inconsistent and unreliable results, from all of the Vicuna and Koala 13B models.

OpenAssistant 30B runs (slowly) on my machine with gpt-llama.cpp, but it got a significant speedup from the recent GPU update in llama.cpp.

That update will enable a lot more people to run 30B models they couldn't use before, as they gain additional memory from their graphics cards.

I'm eager to try out IBM's new Dromedary 65B model with gpt-llama.cpp and Auto-GPT, as it seems to be very smart, but I may need to wait until gpt-llama.cpp picks up llama.cpp's GPU support. It's not about the speedup, which may be negligible, but about the additional 12 GB from my RTX 3060 that will make these model sizes feasible for me. I've run this model on the GPU (with the original llama.cpp main), and its knowledge in scientific realms is impressive; I'd guess it significantly beats ChatGPT 3.5, at least in that specific realm.

As has been said, Auto-GPT capability seems to boil down to the "smartness" of the model.

Current 30B LLaMA derivatives may manage the task out of the box, at least with customized prompts, while the smaller ones may never be able to, even with more specific prompts.

BUT:

Remember that LLaMA itself wasn't usable for chat conversations until Alpaca finetuning was released. With Alpaca, even small models can now be used in interactive chat. Auto-GPT (or similar software) will be a significant use case for LLMs.

It should be worth having a specific finetune for this use case.

Since "chat conversation" is a rather broad scenario, the Alpaca dataset, at about 52,000 examples, was already pretty small. A highly specific dataset that only teaches one very particular output format would be smaller still: maybe just a few dozen examples, less than 1% of the Alpaca set, and trainable within minutes instead of hours, even on common hardware.

So maybe we should make an attempt to create such a dataset for finetuning small models to work with Auto-GPT; a sketch of what one entry could look like follows below. This should happen in cooperation with Auto-GPT development, since it should also involve settling on the best prompts for the dataset to be trained against.
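
Purely as an illustration, one Alpaca-style training pair for such an "output format" finetune could look like this (field names follow the Alpaca dataset convention; the content is hypothetical):

    # Hypothetical training example; not from any existing dataset.
    example = {
        "instruction": "Determine which next command to use, and respond "
                       "using the JSON format specified above:",
        "input": "",
        "output": (
            '{"thoughts": {"text": "...", "reasoning": "...", '
            '"plan": "- step one\\n- step two", '
            '"criticism": "...", "speak": "..."}, '
            '"command": {"name": "google", "args": {"input": "..."}}}'
        ),
    }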

This way, I am confident that even small models will become capable of working with Auto-GPT.

mirek190 commented 1 year ago

For me, Auto-GPT is also returning responses to the gpt-llama.cpp window... I have no idea why...

maddes8cht commented 1 year ago

For me it looks like I now quite reliably get usable first responses that consist of valid JSON and are also accepted by Auto-GPT. However, it seems that none of the commands requested by the model are executed correctly. No Google search, no Browse Website, no List GPT Agents, not even a Do Nothing.

And this does not seem to be a problem with the model. Is there a new PR somewhere that makes local models via llama.cpp / gpt-llama.cpp possible on Auto-GPT?

How can we support that from here?

My possibilities for the moment are (still) limited to testing models and prompts / prompt combinations; with appropriate hints I can also dig into the code a bit, but I don't have deep experience there...

maddes8cht commented 1 year ago

In other words: I don't think even OpenAI's GPT-4 would really work with the fork used here. I think this fork doesn't really work anymore. To be able to experiment with local models in a meaningful way, we would first need a working, up-to-date fork of Auto-GPT (0.3.1) again.

I could be wrong, but if so, please prove it.

DGdev91 commented 1 year ago

Most of the issues with the commands are caused by the LLM not understanding which string represents the actual command. For example, in the Google search case you posted 3 days ago, the LLM wrote "Google Search" as the command, while Auto-GPT expects "google".

A few hours ago a PR was merged which changes the prompt slightly: https://github.com/Significant-Gravitas/Auto-GPT/pull/4027. The original version was `Exclusively use the commands listed in double quotes e.g. "command name"`; now it's `Exclusively use the commands listed below e.g. command_name`. It could help llama and derivatives too, but it really depends on which specific model is used.
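A small sketch (not Auto-GPT code) of what goes wrong, and how a tolerant lookup could paper over it; COMMANDS here is a stand-in for the real command registry:

    COMMANDS = {"google": "Google Search", "browse_website": "Browse Website"}

    def resolve(name):
        """Accept either the short command key or its human-readable label."""
        if name in COMMANDS:
            return name
        by_label = {label.lower(): key for key, label in COMMANDS.items()}
        return by_label.get(name.lower())

    print(resolve("Google Search"))  # -> "google" (what the LLM wrote)
    print(resolve("google"))         # -> "google" (what Auto-GPT expects)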

> In other words: I don't think even OpenAI's GPT-4 would really work with the fork used here. I think this fork doesn't really work anymore. To be able to experiment with local models in a meaningful way, we would first need a working, up-to-date fork of Auto-GPT (0.3.1) again.
>
> I could be wrong, but if so, please prove it.

If you are referring to my fork, I'm trying to get it merged, and I regularly merge in the changes from the main repository. It should be in sync with Auto-GPT's main branch, and it also includes the PR I mentioned before.

Finally, I'm also trying to make the prompts configurable; this can be useful for finding a prompt which works better for llama-derived LLMs: https://github.com/Significant-Gravitas/Auto-GPT/pull/3375

You can try both of my changes in the custom_base_url_and_prompts branch.

DGdev91 commented 1 year ago

@keldenl there's some work going on in Auto-GPT to better handle prompts; the current version seems to pick the right commands now, even with the default prompt. But Vicuna 13B often gets confused and sometimes gives invalid JSON.

lesleychou commented 1 year ago

Hi @DGdev91 and @keldenl, thanks for your awesome work on the package.

I've run into some errors while trying to run Auto-GPT with the LLaMA 7B model.

I followed the README carefully, and here is my modified .env file:

################################################################################
### AUTO-GPT - GENERAL SETTINGS
################################################################################

## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
## RESTRICT_TO_WORKSPACE - Restrict file operations to workspace ./auto_gpt_workspace (Default: True)
# EXECUTE_LOCAL_COMMANDS=False
# RESTRICT_TO_WORKSPACE=True

## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"

## AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
# AI_SETTINGS_FILE=ai_settings.yaml

## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url 
OPENAI_API_BASE_URL=http://localhost:443/v1

## EMBED_DIM - Define the embedding vector size, useful for models. OpenAI: 1536 (default), LLaMA 7B: 4096, LLaMA 13B: 5120, LLaMA 33B: 6656, LLaMA 65B: 8192 (Default: 1536)
EMBED_DIM=4096

################################################################################
### LLM PROVIDER
################################################################################

### OPENAI
## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
## USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=../llama.cpp/models/7B/ggml-model-q4_0.bin
# TEMPERATURE=0
# USE_AZURE=False

However, when I test Auto-GPT, it keeps throwing the following error (no matter what prompt I input):

Welcome back!  Would you like me to return to being Li?
Continue with the last settings?
Name:  Li
Role:  simple siri
Goals: ['search autogpt.']
Continue (y/n): y
Using memory of type:  LocalCache
Using Browser:  chrome
Traceback (most recent call last):
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/json_utils/json_fix_llm.py", line 144, in fix_and_parse_json
    brace_index = json_to_load.index("{")
ValueError: substring not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/lesley/Code_project/LLM_gpt/venv/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/agent/agent.py", line 83, in start_interaction_loop
    assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/json_utils/json_fix_llm.py", line 96, in fix_json_using_multiple_techniques
    assistant_reply_json = fix_and_parse_json(assistant_reply)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/json_utils/json_fix_llm.py", line 150, in fix_and_parse_json
    return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/json_utils/json_fix_llm.py", line 179, in try_ai_fix
    ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/json_utils/json_fix_llm.py", line 65, in auto_fix_json
    result_string = call_ai_function(
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/llm_utils.py", line 50, in call_ai_function
    return create_chat_completion(model=model, messages=messages, temperature=0)
  File "/Users/lesley/Code_project/LLM_gpt/Auto-GPT/autogpt/llm_utils.py", line 138, in create_chat_completion
    return response.choices[0].message["content"]
KeyError: 'content'

On the gpt-llama.cpp server side, it shows:

=====  REQUEST  =====
user: '''You are a helpful assistant.''', '''
{
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    },
    "thoughts":
    {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted
- list that conveys
- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    }
}
'''

=====  RESPONSE  =====

=====  STDERR  =====
stderr Readable Stream: CLOSED

llama_print_timings:        load time = 12511.55 ms
llama_print_timings:      sample time =     5.34 ms /     8 runs   (    0.67 ms per token)
llama_print_timings: prompt eval time = 32211.17 ms /  1125 tokens (   28.63 ms per token)
llama_print_timings:        eval time =   581.69 ms /     7 runs   (   83.10 ms per token)
llama_print_timings:       total time = 32898.91 ms

 '''Request DONE

May I ask for your opinion on how to solve this error? Thanks a lot!

DGdev91 commented 1 year ago

> May I ask for your opinion on how to solve this error? Thanks a lot!

Not every LLM is good enough to handle the JSON reply requested by Auto-GPT. LLaMA 7B is way too "basic" for that; you'll need at least something like Vicuna 13B, and even with that there are still some issues which have to be sorted out.
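
As for the KeyError itself: the backend returned a completion with no "content" (the RESPONSE block above is empty), and Auto-GPT indexes that key unconditionally. A defensive sketch (illustrative only, treating the OpenAI-style response as plain dicts):

    def extract_content(response):
        """Return the assistant text, or None if the backend sent nothing."""
        message = response["choices"][0].get("message", {})
        return message.get("content")

    # Shape mimics what a flaky local backend can return: a message without content.
    print(extract_content({"choices": [{"message": {}}]}))  # -> None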

fpena06 commented 1 year ago

@DGdev91 thank you for your amazing work. I have followed the instructions but can't get it to work. Here are my errors.

auto-gpt:

Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
NEWS:  # Website and Documentation Site 📰📖 Check out *https://agpt.co*, the official news & updates site for Auto-GPT! The documentation also has a place here, at *https://docs.agpt.co* # For contributors 👷🏻 Since releasing v0.3.0, we are working on re-architecting the Auto-GPT core to make it more extensible and to make room for structural performance-oriented R&D. In the meantime, we have less time to process incoming pull requests and issues, so we focus on high-value contributions: * significant bugfixes * *major* improvements to existing functionality and/or docs (so no single-typo fixes) * contributions that help us with re-architecture and other roadmapped items We have to be somewhat selective in order to keep making progress, but this does not mean you can't contribute. Check out the contribution guide on our wiki: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing # 🚀 v0.3.1 Release 🚀 Over a week and 47 pull requests have passed since v0.3.0, and we are happy to announce the release of v0.3.1! Highlights and notable changes since v0.2.2: ## Changes to Docker configuration 🐋 * The workdir has been changed from */home/appuser* to */app*. Be sure to update any volume mounts accordingly! * Docker-compose 1.29.0 is now required. ## Logging 🧾 * Log functionality has been improved for better understanding and easier summarization. * All LLM interactions are now logged to logs/DEBUG, to help with debugging and development. ## Other * Edge browser is now supported by the `browse_website` command. * Sets of commands can now be disabled using DISABLED_COMMAND_CATEGORIES in .env. # ⚠️ Command `send_tweet` is DEPRECATED, and will be removed in v0.4.0 ⚠️ Twitter functionality (and more) is now covered by plugins, see [Plugin support 🔌] ## Plugin support 🔌 Auto-GPT now has support for plugins! With plugins, you can extend Auto-GPT's abilities, adding support for third-party services and more. See https://github.com/Significant-Gravitas/Auto-GPT-Plugins for instructions and available plugins. Specific plugins can be allowlisted/denylisted in .env.
Welcome back!  Would you like me to return to being Entrepreneur-GPT?
Continue with the last settings?
Name:  Entrepreneur-GPT
Role:  an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Goals: ['Grow Twitter Account', 'Increase net worth', 'Pass aws devops exam', 'Be happy', 'Get rich']
Continue (y/n): y
Using memory of type:  LocalCache
Using Browser:  chrome
Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 461, in _make_request
    httplib_response = conn.getresponse()
                       ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 1375, in getresponse
    response.begin()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 461, in _make_request
    httplib_response = conn.getresponse()
                       ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 1375, in getresponse
    response.begin()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
                      ^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/chat.py", line 159, in chat_with_ai
    assistant_reply = create_chat_completion(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/llm_utils.py", line 93, in create_chat_completion
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 216, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 528, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

gpt-llama.cpp:

=====  CHAT COMPLETION REQUEST  =====
> AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{ '--n_predict': 3069 }
{"role":"system","content":"You are ChatGPT, a helpful assistant developed by OpenAI."} !== {"role":"system","content":"You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n\nGOALS:\n\n1. Grow Twitter Account\n2. Increase net worth\n3. Pass aws devops exam\n4. Be happy\n5. Get rich\n\n\nConstraints:\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n3. No user assistance\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\n5. Use subprocesses for commands that will not terminate within a few minutes\n\nCommands:\n1. Google Search: \"google\", args: \"input\": \"<search>\"\n2. Browse Website: \"browse_website\", args: \"url\": \"<url>\", \"question\": \"<what_you_want_to_find_on_website>\"\n3. Start GPT Agent: \"start_agent\", args: \"name\": \"<name>\", \"task\": \"<short_task_desc>\", \"prompt\": \"<prompt>\"\n4. Message GPT Agent: \"message_agent\", args: \"key\": \"<key>\", \"message\": \"<message>\"\n5. List GPT Agents: \"list_agents\", args: \n6. Delete GPT Agent: \"delete_agent\", args: \"key\": \"<key>\"\n7. Clone Repository: \"clone_repository\", args: \"repository_url\": \"<url>\", \"clone_path\": \"<directory>\"\n8. Write to file: \"write_to_file\", args: \"file\": \"<file>\", \"text\": \"<text>\"\n9. Read file: \"read_file\", args: \"file\": \"<file>\"\n10. Append to file: \"append_to_file\", args: \"file\": \"<file>\", \"text\": \"<text>\"\n11. Delete file: \"delete_file\", args: \"file\": \"<file>\"\n12. Search Files: \"search_files\", args: \"directory\": \"<directory>\"\n13. Analyze Code: \"analyze_code\", args: \"code\": \"<full_code_string>\"\n14. Get Improved Code: \"improve_code\", args: \"suggestions\": \"<list_of_suggestions>\", \"code\": \"<full_code_string>\"\n15. Write Tests: \"write_tests\", args: \"code\": \"<full_code_string>\", \"focus\": \"<list_of_focus_areas>\"\n16. Execute Python File: \"execute_python_file\", args: \"file\": \"<file>\"\n17. Generate Image: \"generate_image\", args: \"prompt\": \"<prompt>\"\n18. Send Tweet: \"send_tweet\", args: \"text\": \"<text>\"\n19. Do Nothing: \"do_nothing\", args: \n20. Task Complete (Shutdown): \"task_complete\", args: \"reason\": \"<reason>\"\n\nResources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. 
Aim to complete tasks in the least number of steps.\n\nYou should only respond in JSON format as described below \nResponse Format: \n{\n    \"thoughts\": {\n        \"text\": \"thought\",\n        \"reasoning\": \"reasoning\",\n        \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n        \"criticism\": \"constructive self-criticism\",\n        \"speak\": \"thoughts summary to say to user\"\n    },\n    \"command\": {\n        \"name\": \"command name\",\n        \"args\": {\n            \"arg name\": \"value\"\n        }\n    }\n} \nEnsure the response can be parsed by Python json.loads"}

=====  LLAMA.CPP SPAWNED  =====
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/llama.cpp/main -m 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 --temp 0.7 --n_predict 3069 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

 -i -p Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.

GOALS:

1. Grow Twitter Account
2. Increase net worth
3. Pass aws devops exam
4. Be happy
5. Get rich

Constraints:
1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
5. Use subprocesses for commands that will not terminate within a few minutes

Commands:
1. Google Search: "google", args: "input": "<search>"
2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"
5. List GPT Agents: "list_agents", args:
6. Delete GPT Agent: "delete_agent", args: "key": "<key>"
7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"
8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"
9. Read file: "read_file", args: "file": "<file>"
10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
11. Delete file: "delete_file", args: "file": "<file>"
12. Search Files: "search_files", args: "directory": "<directory>"
13. Analyze Code: "analyze_code", args: "code": "<full_code_string>"
14. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
15. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
16. Execute Python File: "execute_python_file", args: "file": "<file>"
17. Generate Image: "generate_image", args: "prompt": "<prompt>"
18. Send Tweet: "send_tweet", args: "text": "<text>"
19. Do Nothing: "do_nothing", args:
20. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"

Resources:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
4. File output.

Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.

You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    },
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    }
}
Ensure the response can be parsed by Python json.loads
system: The current time and date is Wed May 24 15:33:01 2023
system: This reminds you of these events from your past:

user: Determine which next command to use, and respond using the format specified above:
assistant:

=====  REQUEST  =====
user: Determine which next command to use, and respond using the format specified above:
node:events:490
      throw er; // Unhandled 'error' event
      ^

Error: spawn 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/llama.cpp/main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn 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/llama.cpp/main',
  path: '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/llama.cpp/main',
  spawnargs: [
    '-m',
    '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',
    '--temp',
    '0.7',
    '--n_predict',
    3069,
    '--top_p',
    '0.1',
    '--top_k',
    '40',
    '-c',
    '2048',
    '--seed',
    '-1',
    '--repeat_penalty',
    '1.1764705882352942',
    '--reverse-prompt',
    'user:',
    '--reverse-prompt',
    '\nuser',
    '--reverse-prompt',
    'system:',
    '--reverse-prompt',
    '\nsystem',
    '--reverse-prompt',
    '\n\n\n',
    '-i',
    '-p',
    'Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.\n' +
      '\n' +
      'system: You are a helpful assistant.\n' +
      'user: How are you?\n' +
      'assistant: Hi, how may I help you today?\n' +
      'system: You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.\n' +
      'Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n' +
      '\n' +
      'GOALS:\n' +
      '\n' +
      '1. Grow Twitter Account\n' +
      '2. Increase net worth\n' +
      '3. Pass aws devops exam\n' +
      '4. Be happy\n' +
      '5. Get rich\n' +
      '\n' +
      '\n' +
      'Constraints:\n' +
      '1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n' +
      '2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n' +
      '3. No user assistance\n' +
      '4. Exclusively use the commands listed in double quotes e.g. "command name"\n' +
      '5. Use subprocesses for commands that will not terminate within a few minutes\n' +
      '\n' +
      'Commands:\n' +
      '1. Google Search: "google", args: "input": "<search>"\n' +
      '2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"\n' +
      '3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"\n' +
      '4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"\n' +
      '5. List GPT Agents: "list_agents", args: \n' +
      '6. Delete GPT Agent: "delete_agent", args: "key": "<key>"\n' +
      '7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"\n' +
      '8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"\n' +
      '9. Read file: "read_file", args: "file": "<file>"\n' +
      '10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"\n' +
      '11. Delete file: "delete_file", args: "file": "<file>"\n' +
      '12. Search Files: "search_files", args: "directory": "<directory>"\n' +
      '13. Analyze Code: "analyze_code", args: "code": "<full_code_string>"\n' +
      '14. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"\n' +
      '15. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"\n' +
      '16. Execute Python File: "execute_python_file", args: "file": "<file>"\n' +
      '17. Generate Image: "generate_image", args: "prompt": "<prompt>"\n' +
      '18. Send Tweet: "send_tweet", args: "text": "<text>"\n' +
      '19. Do Nothing: "do_nothing", args: \n' +
      '20. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"\n' +
      '\n' +
      'Resources:\n' +
      '1. Internet access for searches and information gathering.\n' +
      '2. Long Term memory management.\n' +
      '3. GPT-3.5 powered Agents for delegation of simple tasks.\n' +
      '4. File output.\n' +
      '\n' +
      'Performance Evaluation:\n' +
      '1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n' +
      '2. Constructively self-criticize your big-picture behavior constantly.\n' +
      '3. Reflect on past decisions and strategies to refine your approach.\n' +
      '4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n' +
      '\n' +
      'You should only respond in JSON format as described below \n' +
      'Response Format: \n' +
      '{\n' +
      '    "thoughts": {\n' +
      '        "text": "thought",\n' +
      '        "reasoning": "reasoning",\n' +
      '        "plan": "- short bulleted\\n- list that conveys\\n- long-term plan",\n' +
      '        "criticism": "constructive self-criticism",\n' +
      '        "speak": "thoughts summary to say to user"\n' +
      '    },\n' +
      '    "command": {\n' +
      '        "name": "command name",\n' +
      '        "args": {\n' +
      '            "arg name": "value"\n' +
      '        }\n' +
      '    }\n' +
      '} \n' +
      'Ensure the response can be parsed by Python json.loads\n' +
      'system: The current time and date is Wed May 24 15:33:01 2023\n' +
      'system: This reminds you of these events from your past:\n' +
      '\n' +
      '\n' +
      '\n' +
      'user: Determine which next command to use, and respond using the format specified above:\n' +
      'assistant:'
  ]
}

Node.js v19.7.0

Models I have tested with:

WizardLM-7B-uncensored.ggmlv3.q4_0.bin
Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
stable-vicuna-13B.ggmlv3.q5_1.bin
wizardLM-13B-Uncensored.ggmlv3.q4_0.bin

Please note: scripts/test-installation.sh works fine.

Any suggestions? Thanks in advance.

fpena06 commented 1 year ago

Figured out what my issue was, thanks to an email from OpenAI that my API key was leaked, LMFAO. Turns out I have/had OPENAI_API_KEY set in my .bash_profile, which was getting picked up and used as the model path:

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/llama.cpp/main
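
For anyone hitting the same thing: a variable exported in the shell takes precedence, because python-dotenv (which Auto-GPT uses to read .env) does not, by default, override variables already present in the environment. A quick check:

    import os

    # If this prints an sk-... key instead of your model path, the shell
    # export (e.g. from .bash_profile) is shadowing the .env entry.
    print(os.environ.get("OPENAI_API_KEY"))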

ntindle commented 1 year ago

Heads up: that API key needs to be revoked, if it hasn't already been by secret scanning.

fpena06 commented 1 year ago

> Heads up: that API key needs to be revoked, if it hasn't already been by secret scanning.

That's one of the great things about OpenAI: it scans GitHub for API keys and immediately disables them, hence the email I received. Thanks for the heads up.

sumersm7 commented 1 year ago

The common thing is that most of us are facing the same error:

=====  STDERR  =====
stderr Readable Stream: CLOSED

As of 25-June-2023, using the latest llama.cpp and gpt-llama.cpp with a Vicuna 13B model.

If I add some debug output, it reveals that llama.cpp ignores the --reverse-prompt strings and keeps generating tokens, while gpt-llama.cpp watches for the reverse prompts itself and returns the result.

So when we try to interact with the model again before it has finished responding to the previous prompt, something crashes in gpt-llama.cpp.
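If that's what's happening, the crash is easy to picture: the wrapper treats the response as finished as soon as a stop word shows up in the stream, while the child process is still generating. A rough sketch of that kind of reverse-prompt watcher (illustrative only, with made-up names; not gpt-llama.cpp's actual implementation):

import { ChildProcess } from 'child_process';

// Stop words mirroring the --reverse-prompt flags passed to llama.cpp.
const stopWords = ['user:', 'system:'];

function watchForReversePrompt(child: ChildProcess, onDone: (text: string) => void): void {
  let buffer = '';
  let finished = false;
  child.stdout?.on('data', (chunk: Buffer) => {
    if (finished) return;
    buffer += chunk.toString();
    for (const word of stopWords) {
      const at = buffer.indexOf(word);
      if (at !== -1) {
        finished = true;
        // The HTTP response gets returned here. But if llama.cpp itself
        // ignored --reverse-prompt, the child keeps generating, and the
        // next request talks to a process that never went back to idle.
        onDone(buffer.slice(0, at));
        return;
      }
    }
  });
}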

orophix commented 1 year ago

Does gpt-llama.cpp work now with the main branch of auto-gpt?

shock-wave007 commented 1 year ago

The issue is more related to the model than to the code. This repo is great, but we are dealing with models that generate random text 🤣

DGdev91 commented 1 year ago

Does gpt-llama.cpp work now with the main branch of auto-gpt?

It's been some time since I last tested it, but it should. My PR was merged in ver. 0.4.0.

@keldenl sorry, I forgot to tell you about that. The docs at https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md need an update
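For anyone wiring this up today, the shape of that change is just pointing Auto-GPT's OpenAI client at the local server. A hedged sketch of the round trip (OPENAI_API_BASE_URL is the env var name as of roughly v0.4.0; verify it against the current guide, since this has changed between releases):

// Minimal sketch of an OpenAI-style request aimed at gpt-llama.cpp.
const base = process.env.OPENAI_API_BASE_URL ?? 'https://api.openai.com/v1';

async function chat(messages: { role: string; content: string }[]): Promise<unknown> {
  const res = await fetch(`${base}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // gpt-llama.cpp reads the local model path out of this bearer token
      // in place of a real OpenAI key.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: 'gpt-3.5-turbo', messages }),
  });
  return res.json();
}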

sambickeita commented 11 months ago

@DGdev91 thank you for your amazing work. I have followed the instructions but can't get it to work. Here are my errors.

auto-gpt:

Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
NEWS:  # Website and Documentation Site 📰📖 Check out *https://agpt.co*, the official news & updates site for Auto-GPT! The documentation also has a place here, at *https://docs.agpt.co* # For contributors 👷🏼 Since releasing v0.3.0, we are working on re-architecting the Auto-GPT core to make it more extensible and to make room for structural performance-oriented R&D. In the meantime, we have less time to process incoming pull requests and issues, so we focus on high-value contributions: * significant bugfixes * *major* improvements to existing functionality and/or docs (so no single-typo fixes) * contributions that help us with re-architecture and other roadmapped items We have to be somewhat selective in order to keep making progress, but this does not mean you can't contribute. Check out the contribution guide on our wiki: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing # 🚀 v0.3.1 Release 🚀 Over a week and 47 pull requests have passed since v0.3.0, and we are happy to announce the release of v0.3.1! Highlights and notable changes since v0.2.2: ## Changes to Docker configuration 🐋 * The workdir has been changed from */home/appuser* to */app*. Be sure to update any volume mounts accordingly! * Docker-compose 1.29.0 is now required. ## Logging 🧠 * Log functionality has been improved for better understanding and easier summarization. * All LLM interactions are now logged to logs/DEBUG, to help with debugging and development. ## Other * Edge browser is now supported by the `browse_website` command. * Sets of commands can now be disabled using DISABLED_COMMAND_CATEGORIES in .env. # ⚠️ Command `send_tweet` is DEPRECATED, and will be removed in v0.4.0 ⚠️ Twitter functionality (and more) is now covered by plugins, see [Plugin support 🔌] ## Plugin support 🔌 Auto-GPT now has support for plugins! With plugins, you can extend Auto-GPT's abilities, adding support for third-party services and more. See https://github.com/Significant-Gravitas/Auto-GPT-Plugins for instructions and available plugins. Specific plugins can be allowlisted/denylisted in .env.
Welcome back!  Would you like me to return to being Entrepreneur-GPT?
Continue with the last settings?
Name:  Entrepreneur-GPT
Role:  an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Goals: ['Grow Twitter Account', 'Increase net worth', 'Pass aws devops exam', 'Be happy', 'Get rich']
Continue (y/n): y
Using memory of type:  LocalCache
Using Browser:  chrome
Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 461, in _make_request
    httplib_response = conn.getresponse()
                       ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 1375, in getresponse
    response.begin()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/urllib3/connectionpool.py", line 461, in _make_request
    httplib_response = conn.getresponse()
                       ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 1375, in getresponse
    response.begin()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
                      ^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/chat.py", line 159, in chat_with_ai
    assistant_reply = create_chat_completion(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/Downloads/Auto-GPT-fork/autogpt/llm_utils.py", line 93, in create_chat_completion
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 216, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "/Users/ml/miniforge3/envs/autogpt/lib/python3.11/site-packages/openai/api_requestor.py", line 528, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

gpt-llama.cpp:

=====  CHAT COMPLETION REQUEST  =====
> AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{ '--n_predict': 3069 }
{"role":"system","content":"You are ChatGPT, a helpful assistant developed by OpenAI."} !== {"role":"system","content":"You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n\nGOALS:\n\n1. Grow Twitter Account\n2. Increase net worth\n3. Pass aws devops exam\n4. Be happy\n5. Get rich\n\n\nConstraints:\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n3. No user assistance\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\n5. Use subprocesses for commands that will not terminate within a few minutes\n\nCommands:\n1. Google Search: \"google\", args: \"input\": \"<search>\"\n2. Browse Website: \"browse_website\", args: \"url\": \"<url>\", \"question\": \"<what_you_want_to_find_on_website>\"\n3. Start GPT Agent: \"start_agent\", args: \"name\": \"<name>\", \"task\": \"<short_task_desc>\", \"prompt\": \"<prompt>\"\n4. Message GPT Agent: \"message_agent\", args: \"key\": \"<key>\", \"message\": \"<message>\"\n5. List GPT Agents: \"list_agents\", args: \n6. Delete GPT Agent: \"delete_agent\", args: \"key\": \"<key>\"\n7. Clone Repository: \"clone_repository\", args: \"repository_url\": \"<url>\", \"clone_path\": \"<directory>\"\n8. Write to file: \"write_to_file\", args: \"file\": \"<file>\", \"text\": \"<text>\"\n9. Read file: \"read_file\", args: \"file\": \"<file>\"\n10. Append to file: \"append_to_file\", args: \"file\": \"<file>\", \"text\": \"<text>\"\n11. Delete file: \"delete_file\", args: \"file\": \"<file>\"\n12. Search Files: \"search_files\", args: \"directory\": \"<directory>\"\n13. Analyze Code: \"analyze_code\", args: \"code\": \"<full_code_string>\"\n14. Get Improved Code: \"improve_code\", args: \"suggestions\": \"<list_of_suggestions>\", \"code\": \"<full_code_string>\"\n15. Write Tests: \"write_tests\", args: \"code\": \"<full_code_string>\", \"focus\": \"<list_of_focus_areas>\"\n16. Execute Python File: \"execute_python_file\", args: \"file\": \"<file>\"\n17. Generate Image: \"generate_image\", args: \"prompt\": \"<prompt>\"\n18. Send Tweet: \"send_tweet\", args: \"text\": \"<text>\"\n19. Do Nothing: \"do_nothing\", args: \n20. Task Complete (Shutdown): \"task_complete\", args: \"reason\": \"<reason>\"\n\nResources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. 
Aim to complete tasks in the least number of steps.\n\nYou should only respond in JSON format as described below \nResponse Format: \n{\n    \"thoughts\": {\n        \"text\": \"thought\",\n        \"reasoning\": \"reasoning\",\n        \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n        \"criticism\": \"constructive self-criticism\",\n        \"speak\": \"thoughts summary to say to user\"\n    },\n    \"command\": {\n        \"name\": \"command name\",\n        \"args\": {\n            \"arg name\": \"value\"\n        }\n    }\n} \nEnsure the response can be parsed by Python json.loads"}

=====  LLAMA.CPP SPAWNED  =====
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/llama.cpp/main -m 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 --temp 0.7 --n_predict 3069 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

 -i -p Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.

GOALS:

1. Grow Twitter Account
2. Increase net worth
3. Pass aws devops exam
4. Be happy
5. Get rich

Constraints:
1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
5. Use subprocesses for commands that will not terminate within a few minutes

Commands:
1. Google Search: "google", args: "input": "<search>"
2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"
5. List GPT Agents: "list_agents", args:
6. Delete GPT Agent: "delete_agent", args: "key": "<key>"
7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"
8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"
9. Read file: "read_file", args: "file": "<file>"
10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
11. Delete file: "delete_file", args: "file": "<file>"
12. Search Files: "search_files", args: "directory": "<directory>"
13. Analyze Code: "analyze_code", args: "code": "<full_code_string>"
14. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
15. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
16. Execute Python File: "execute_python_file", args: "file": "<file>"
17. Generate Image: "generate_image", args: "prompt": "<prompt>"
18. Send Tweet: "send_tweet", args: "text": "<text>"
19. Do Nothing: "do_nothing", args:
20. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"

Resources:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
4. File output.

Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.

You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    },
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    }
}
Ensure the response can be parsed by Python json.loads
system: The current time and date is Wed May 24 15:33:01 2023
system: This reminds you of these events from your past:

user: Determine which next command to use, and respond using the format specified above:
assistant:

=====  REQUEST  =====
user: Determine which next command to use, and respond using the format specified above:
node:events:490
      throw er; // Unhandled 'error' event
      ^

Error: spawn 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/llama.cpp/main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn 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/llama.cpp/main',
  path: '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/llama.cpp/main',
  spawnargs: [
    '-m',
    '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',
    '--temp',
    '0.7',
    '--n_predict',
    3069,
    '--top_p',
    '0.1',
    '--top_k',
    '40',
    '-c',
    '2048',
    '--seed',
    '-1',
    '--repeat_penalty',
    '1.1764705882352942',
    '--reverse-prompt',
    'user:',
    '--reverse-prompt',
    '\nuser',
    '--reverse-prompt',
    'system:',
    '--reverse-prompt',
    '\nsystem',
    '--reverse-prompt',
    '\n\n\n',
    '-i',
    '-p',
    'Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.\n' +
      '\n' +
      'system: You are a helpful assistant.\n' +
      'user: How are you?\n' +
      'assistant: Hi, how may I help you today?\n' +
      'system: You are Entrepreneur-GPT, an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.\n' +
      'Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n' +
      '\n' +
      'GOALS:\n' +
      '\n' +
      '1. Grow Twitter Account\n' +
      '2. Increase net worth\n' +
      '3. Pass aws devops exam\n' +
      '4. Be happy\n' +
      '5. Get rich\n' +
      '\n' +
      '\n' +
      'Constraints:\n' +
      '1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n' +
      '2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n' +
      '3. No user assistance\n' +
      '4. Exclusively use the commands listed in double quotes e.g. "command name"\n' +
      '5. Use subprocesses for commands that will not terminate within a few minutes\n' +
      '\n' +
      'Commands:\n' +
      '1. Google Search: "google", args: "input": "<search>"\n' +
      '2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"\n' +
      '3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"\n' +
      '4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"\n' +
      '5. List GPT Agents: "list_agents", args: \n' +
      '6. Delete GPT Agent: "delete_agent", args: "key": "<key>"\n' +
      '7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"\n' +
      '8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"\n' +
      '9. Read file: "read_file", args: "file": "<file>"\n' +
      '10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"\n' +
      '11. Delete file: "delete_file", args: "file": "<file>"\n' +
      '12. Search Files: "search_files", args: "directory": "<directory>"\n' +
      '13. Analyze Code: "analyze_code", args: "code": "<full_code_string>"\n' +
      '14. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"\n' +
      '15. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"\n' +
      '16. Execute Python File: "execute_python_file", args: "file": "<file>"\n' +
      '17. Generate Image: "generate_image", args: "prompt": "<prompt>"\n' +
      '18. Send Tweet: "send_tweet", args: "text": "<text>"\n' +
      '19. Do Nothing: "do_nothing", args: \n' +
      '20. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"\n' +
      '\n' +
      'Resources:\n' +
      '1. Internet access for searches and information gathering.\n' +
      '2. Long Term memory management.\n' +
      '3. GPT-3.5 powered Agents for delegation of simple tasks.\n' +
      '4. File output.\n' +
      '\n' +
      'Performance Evaluation:\n' +
      '1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n' +
      '2. Constructively self-criticize your big-picture behavior constantly.\n' +
      '3. Reflect on past decisions and strategies to refine your approach.\n' +
      '4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n' +
      '\n' +
      'You should only respond in JSON format as described below \n' +
      'Response Format: \n' +
      '{\n' +
      '    "thoughts": {\n' +
      '        "text": "thought",\n' +
      '        "reasoning": "reasoning",\n' +
      '        "plan": "- short bulleted\\n- list that conveys\\n- long-term plan",\n' +
      '        "criticism": "constructive self-criticism",\n' +
      '        "speak": "thoughts summary to say to user"\n' +
      '    },\n' +
      '    "command": {\n' +
      '        "name": "command name",\n' +
      '        "args": {\n' +
      '            "arg name": "value"\n' +
      '        }\n' +
      '    }\n' +
      '} \n' +
      'Ensure the response can be parsed by Python json.loads\n' +
      'system: The current time and date is Wed May 24 15:33:01 2023\n' +
      'system: This reminds you of these events from your past:\n' +
      '\n' +
      '\n' +
      '\n' +
      'user: Determine which next command to use, and respond using the format specified above:\n' +
      'assistant:'
  ]
}

Node.js v19.7.0

Models I have tested with:

WizardLM-7B-uncensored.ggmlv3.q4_0.bin
Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
stable-vicuna-13B.ggmlv3.q5_1.bin
wizardLM-13B-Uncensored.ggmlv3.q4_0.bin

Please note: scripts/test-installation.sh works fine.

Any suggestions? Thanks in advance.

Hello.

Have you found a solution?

I have the same error.

auto-gpt:

Goals:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "I:\programmes\Lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\urllib3\packages\six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "I:\programmes\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "I:\programmes\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
    httplib_response = conn.getresponse()
                       ^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\http\client.py", line 1378, in getresponse
    response.begin()
  File "I:\programmes\Lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\http\client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\socket.py", line 706, in readinto
    return self._sock.recv_into(b)
           ^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "I:\programmes\Lib\site-packages\openai\api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\requests\adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "I:\TEST\Auto-GPT-master\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "I:\programmes\Lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\cli.py", line 96, in main
    run_auto_gpt(
  File "I:\TEST\Auto-GPT-master\autogpt\main.py", line 197, in run_auto_gpt
    agent.start_interaction_loop()
  File "I:\TEST\Auto-GPT-master\autogpt\agent\agent.py", line 132, in start_interaction_loop
    assistant_reply = chat_with_ai(
                      ^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\llm\chat.py", line 193, in chat_with_ai
    assistant_reply = create_chat_completion(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\llm\utils\__init__.py", line 53, in metered_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\llm\utils\__init__.py", line 87, in _wrapped
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\llm\utils\__init__.py", line 235, in create_chat_completion
    response = api_manager.create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\TEST\Auto-GPT-master\autogpt\llm\api_manager.py", line 61, in create_chat_completion
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\openai\api_requestor.py", line 216, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "I:\programmes\Lib\site-packages\openai\api_requestor.py", line 528, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

gpt-llama.cpp

> gpt-llama.cpp@0.2.6 start
> node index.js

Server is listening on:

See Docs

Test your installation

  • double click the scripts/test-installation.ps1 (powershell) or scripts/test-installation.bat (cmd) file

See https://github.com/keldenl/gpt-llama.cpp#usage for more guidance.

> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /v1/models
> PROCESS COMPLETE
> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /v1/chat/completions
> LLAMA.CPP DETECTED

===== CHAT COMPLETION REQUEST =====

> VICUNA MODEL DETECTED. LOADING VICUNA ENGINE...
{ '--n_predict': 2695 }

=====  LLAMA.CPP SPAWNED  =====
..\llama.cpp\main -m ..\llama.cpp\models\vicuna\13B\ggml-stable-vicuna-13B.q4_2.bin --temp 0.7 --n_predict 2695 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt user --reverse-prompt system: --reverse-prompt system --reverse-prompt

--reverse-prompt ## --reverse-prompt

--reverse-prompt ### --reverse-prompt

-i -p Complete the following chat conversation between the Human and the Assistant. System messages should be strictly followed as additional instructions.

### System:
You are a helpful assistant.

### Human:
How are you?

### Assistant:
Hi, how may I help you today?

### System:
You are PoemVISION, poem best writer
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.

GOALS:

  1. write a poem

Constraints:

  1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
  2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
  3. No user assistance
  4. Exclusively use the commands listed below e.g. command_name

Commands:

  1. analyze_code: Analyze Code, args: "code": "<full_code_string>"
  2. execute_python_code: Create a Python file and execute it, args: "code": "<code>", "basename": "<basename>"
  3. execute_python_file: Execute Python File, args: "filename": "<filename>"
  4. append_to_file: Append to file, args: "filename": "<filename>", "text": "<text>"
  5. delete_file: Delete file, args: "filename": "<filename>"
  6. list_files: List Files in Directory, args: "directory": "<directory>"
  7. read_file: Read a file, args: "filename": "<filename>"
  8. replace_in_file: Replace text or code in a file, args: "filename": "<filename>", "old_text": "<old_text>", "new_text": "<new_text>", "occurrence_index": "<occurrence_index>"
  9. write_to_file: Write to file, args: "filename": "<filename>", "text": "<text>"
  10. google: Google Search, args: "query": "<query>"
  11. improve_code: Get Improved Code, args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
  12. browse_website: Browse Website, args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
  13. write_tests: Write Tests, args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
  14. delete_agent: Delete GPT Agent, args: "key": "<key>"
  15. get_hyperlinks: Get hyperlinks, args: "url": "<url>"
  16. get_text_summary: Get text summary, args: "url": "<url>", "question": "<question>"
  17. list_agents: List GPT Agents, args: () -> str
  18. message_agent: Message GPT Agent, args: "key": "<key>", "message": "<message>"
  19. start_agent: Start GPT Agent, args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
  20. task_complete: Task Complete (Shutdown), args: "reason": "<reason>"

Resources:

  1. Internet access for searches and information gathering.
  2. Long Term memory management.
  3. GPT-3.5 powered Agents for delegation of simple tasks.
  4. File output.

Performance Evaluation:

  1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
  2. Constructively self-criticize your big-picture behavior constantly.
  3. Reflect on past decisions and strategies to refine your approach.
  4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
  5. Write all code to a file.

You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    },
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    }
}
Ensure the response can be parsed by Python json.loads

### System:
The current time and date is Sun Oct 22 02:15:20 2023

### Human:
Determine exactly one command to use, and respond using the format specified above:

### Assistant:

===== REQUEST =====

### Human:
Determine exactly one command to use, and respond using the format specified above:

node:events:491
      throw er; // Unhandled 'error' event
      ^

Error: spawn ..\llama.cpp\main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:289:12)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'spawn ..\llama.cpp\main',
  path: '..\llama.cpp\main',
  spawnargs: [
    '-m',
    '..\llama.cpp\models\vicuna\13B\ggml-stable-vicuna-13B.q4_2.bin',
    '--temp',
    '0.7',
    '--n_predict',
    2695,
    '--top_p',
    '0.1',
    '--top_k',
    '40',
    '-c',
    '2048',
    '--seed',
    '-1',
    '--repeat_penalty',
    '1.1764705882352942',
    '--reverse-prompt',
    'user:',
    '--reverse-prompt',
    '\nuser',
    '--reverse-prompt',
    'system:',
    '--reverse-prompt',
    '\nsystem',
    '--reverse-prompt',
    '\n\n\n',
    '--reverse-prompt',
    '##',
    '--reverse-prompt',
    '\n##',
    '--reverse-prompt',
    '###',
    '--reverse-prompt',
    '\n\n',
    '-i',
    '-p',
    'Complete the following chat conversation between the Human and the Assistant. System messages should be strictly followed as additional instructions.\n' +
      '\n' +
      '### System:\\n' +
      'You are a helpful assistant.\n' +
      '### Human:\\n' +
      'How are you?\n' +
      '### Assistant:\\n' +
      'Hi, how may I help you today?\n' +
      '### System:\\n' +
      'You are PoemVISION, poem best writer\n' +
      'Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n' +
      '\n' +
      'GOALS:\n' +
      '\n' +
      '1. write a poem\n' +
      '\n' +
      '\n' +
      'Constraints:\n' +
      '1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n' +
      '2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n' +
      '3. No user assistance\n' +
      '4. Exclusively use the commands listed below e.g. command_name\n' +
      '\n' +
      'Commands:\n' +
      '1. analyze_code: Analyze Code, args: "code": "<full_code_string>"\n' +
      '2. execute_python_code: Create a Python file and execute it, args: "code": "<code>", "basename": "<basename>"\n' +
      '3. execute_python_file: Execute Python File, args: "filename": "<filename>"\n' +
      '4. append_to_file: Append to file, args: "filename": "<filename>", "text": "<text>"\n' +
      '5. delete_file: Delete file, args: "filename": "<filename>"\n' +
      '6. list_files: List Files in Directory, args: "directory": "<directory>"\n' +
      '7. read_file: Read a file, args: "filename": "<filename>"\n' +
      '8. replace_in_file: Replace text or code in a file, args: "filename": "<filename>", "old_text": "<old_text>", "new_text": "<new_text>", "occurrence_index": "<occurrence_index>"\n' +
      '9. write_to_file: Write to file, args: "filename": "<filename>", "text": "<text>"\n' +
      '10. google: Google Search, args: "query": "<query>"\n' +
      '11. improve_code: Get Improved Code, args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"\n' +
      '12. browse_website: Browse Website, args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"\n' +
      '13. write_tests: Write Tests, args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"\n' +
      '14. delete_agent: Delete GPT Agent, args: "key": "<key>"\n' +
      '15. get_hyperlinks: Get hyperlinks, args: "url": "<url>"\n' +
      '16. get_text_summary: Get text summary, args: "url": "<url>", "question": "<question>"\n' +
      '17. list_agents: List GPT Agents, args: () -> str\n' +
      '18. message_agent: Message GPT Agent, args: "key": "<key>", "message": "<message>"\n' +
      '19. start_agent: Start GPT Agent, args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"\n' +
      '20. task_complete: Task Complete (Shutdown), args: "reason": "<reason>"\n' +
      '\n' +
      'Resources:\n' +
      '1. Internet access for searches and information gathering.\n' +
      '2. Long Term memory management.\n' +
      '3. GPT-3.5 powered Agents for delegation of simple tasks.\n' +
      '4. File output.\n' +
      '\n' +
      'Performance Evaluation:\n' +
      '1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n' +
      '2. Constructively self-criticize your big-picture behavior constantly.\n' +
      '3. Reflect on past decisions and strategies to refine your approach.\n' +
      '4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n' +
      '5. Write all code to a file.\n' +
      '\n' +
      'You should only respond in JSON format as described below \n' +
      'Response Format: \n' +
      '{\n' +
      '    "thoughts": {\n' +
      '        "text": "thought",\n' +
      '        "reasoning": "reasoning",\n' +
      '        "plan": "- short bulleted\\n- list that conveys\\n- long-term plan",\n' +
      '        "criticism": "constructive self-criticism",\n' +
      '        "speak": "thoughts summary to say to user"\n' +
      '    },\n' +
      '    "command": {\n' +
      '        "name": "command name",\n' +
      '        "args": {\n' +
      '            "arg name": "value"\n' +
      '        }\n' +
      '    }\n' +
      '} \n' +
      'Ensure the response can be parsed by Python json.loads\n' +
      '### System:\\n' +
      'The current time and date is Sun Oct 22 02:15:20 2023\n' +
      '### Human:\\n' +
      'Determine exactly one command to use, and respond using the format specified above:\n' +
      '### Assistant:\\n'
  ]
}

Node.js v18.16.1
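One hedged guess at this last ENOENT (errno -4058 is the Windows flavor): the relative path ..\llama.cpp\main is resolved against whatever directory gpt-llama.cpp was started from, and on Windows the llama.cpp binary is main.exe, often under a build output directory, so the plain main path cannot exist. A sketch of a more defensive lookup (the candidate directories are assumptions, not the project's actual logic):

import { existsSync } from 'fs';
import * as path from 'path';

// Try the usual locations of the llama.cpp binary relative to its checkout.
function findLlamaBinary(llamaRoot: string): string | undefined {
  const name = process.platform === 'win32' ? 'main.exe' : 'main';
  const candidateDirs = ['', path.join('build', 'bin'), path.join('build', 'bin', 'Release')];
  for (const dir of candidateDirs) {
    const candidate = path.resolve(llamaRoot, dir, name);
    if (existsSync(candidate)) return candidate; // absolute path, so cwd no longer matters
  }
  return undefined;
}

// Usage: resolve once at startup instead of spawning '..\\llama.cpp\\main' blindly.
const binary = findLlamaBinary(path.resolve(__dirname, '..', 'llama.cpp'));
if (!binary) console.error('llama.cpp binary not found; build llama.cpp (main / main.exe) first');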