nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License

Integration with Auto-GPT? #277

Closed Villagerjj closed 1 year ago

Villagerjj commented 1 year ago

I know this AI is not the brightest, but I was wondering if it would be suitable to replace the GPT-4 API calls in Auto-GPT (here: https://github.com/Torantulino/Auto-GPT) with GPT4All. That could allow this AI to grow and learn more on its own, rather than having to be re-trained.

Samsuditana commented 1 year ago

Came here looking for this too. Or if not GPT4All, Alpaca, or any of the offline models.

GracefuSlayer commented 1 year ago

Hi, I also came here looking for something similar. I am completely new to GitHub and coding, so feel free to correct me, but since Auto-GPT uses an API key to link into the model, couldn't we do the same with GPT4All? I'm not sure if there is an API key we could use after the model is installed locally. Anyway, just a thought; if anyone has any ideas, please feel free to let me know!

maheshbasavaraj commented 1 year ago

Same; looking to reduce costs by using free models like Alpaca.

hexicgrind commented 1 year ago

Yeah, needing API access is a huge bottleneck. Being able to use local/offline solutions like GPT4All would allow us to run Continuous Mode in Auto-GPT for as long as necessary.

Samsuditana commented 1 year ago

Has anyone with API token access considered asking Auto-GPT to write a version of itself that can utilize offline models? Lol

Mrs-Feathers commented 1 year ago

I would also benefit from being able to use a local API for this, if it were run as a service.

Mrs-Feathers commented 1 year ago

I do believe llama, however, has an API that can be run as a service locally.

FrancescoSaverioZuppichini commented 1 year ago

I am bored tonight, will try to do it

Dan-paull commented 1 year ago

I have been trying, and I can get it to duplicate itself from GitHub, but it then generated another version inside this version... going to keep testing and will let you know how I go. Once I have it duplicated, I will see if it can modify the new version to use an offline model.

*Update: it just duplicated itself successfully, and when I ran it manually in the terminal, it worked.

FrancescoSaverioZuppichini commented 1 year ago

My issue so far is that I have no idea what prompt is good for a chat-like experience; I am not even sure the base model was instruction fine-tuned. Could anyone help me resolve my doubts? 🙏

DaydreamAtNight commented 1 year ago

Check out this repo: rhohndorf/Auto-Llama-cpp. I just found it. The author says it is unstable and slow, since the model has a small context window and can't always return valid JSON. But still, it's free, and I am going to try it.
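
For context on the "valid JSON" problem: an Auto-GPT-style loop parses each model reply as JSON before acting on it, so any malformed reply breaks the loop. A minimal, abbreviated sketch of that kind of validation (the real Auto-GPT schema has more fields, e.g. a "thoughts" object; this is just the idea):

```python
import json

def parse_reply(reply: str):
    # The reply must be valid JSON naming a command to execute;
    # small local models often drift out of this format.
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None  # the caller would re-prompt or fall back
    if "command" not in data:
        return None
    return data
```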

hexicgrind commented 1 year ago

My issue so far is that I have no idea what prompt is good for a chat-like experience; I am not even sure the base model was instruction fine-tuned. Could anyone help me resolve my doubts? 🙏

It definitely works for me, and it is definitely trained. You might check out the new version that GPT4All just dropped. It has an easy installer, is faster, and even has a commercial license: GPT4All.

rastinder commented 1 year ago

any updates?

FrancescoSaverioZuppichini commented 1 year ago

Does anyone know the right prompt for a chat-like experience? I am a noob at NLP.

DoingFedTime commented 1 year ago

I would love to see AutoGPT and GPT4ALL combined. This would be a sick combo!

hexicgrind commented 1 year ago

Does anyone know the right prompt for a chat-like experience? I am a noob at NLP.

You just ask it things like "list 10 types of dogs" or "explain how to make spaghetti". Pretty much anything.

FrancescoSaverioZuppichini commented 1 year ago

Does anyone know the right prompt for a chat-like experience? I am a noob at NLP.

You just ask it things like "list 10 types of dogs" or "explain how to make spaghetti". Pretty much anything.

Ehm, no. You need to define the chatbot persona, keep track of the history in the prompt, etc.
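
For instance, a tiny illustrative sketch of what that means in practice (the persona and names here are made up):

```python
# A minimal sketch of "persona + history in the prompt".
persona = "You are a helpful assistant named Luigi who answers concisely."
history = []  # list of (speaker, text) turns

def build_prompt(user_input: str) -> str:
    history.append(("User", user_input))
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    return f"{persona}\n\n{turns}\nAssistant:"

print(build_prompt("list 10 types of dogs"))
# After the model replies, record its turn so the next prompt includes it:
# history.append(("Assistant", model_reply))
```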

iamfiscus commented 1 year ago

@FrancescoSaverioZuppichini Do you mean prompt engineering?

{
  "model": "gpt-3.5-turbo",
  "messages": [
      {"role": "system", "content": "Act as a top Italian chef and online instructor. Write directions "},
      {"role": "system", "content": "You can suggest delicious recipes that include foods which are nutritionally beneficial but also easy & not time consuming enough therefore suitable for busy people like us among other factors such as cost-effectiveness so overall dish ends up being healthy yet economical at the same time! "},
      {"role": "user", "content": "My first request – \"explain how to make spaghetti\""}
  ]
}

https://platform.openai.com/docs/api-reference/chat/create https://prompts.chat/#act-as-a-chef
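
For reference, sending that payload through the (pre-1.0) OpenAI Python client looks roughly like this (second system message elided, key is a placeholder):

```python
import openai  # openai<1.0, where ChatCompletion.create was the chat endpoint

openai.api_key = "sk-..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Act as a top Italian chef and online instructor."},
        {"role": "user", "content": 'My first request – "explain how to make spaghetti"'},
    ],
)
print(response["choices"][0]["message"]["content"])
```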

FrancescoSaverioZuppichini commented 1 year ago

@iamfiscus thanks for the reply; unfortunately, I think it is not related to my question. My question is: what is the correct prompt to make this model act like a chatbot?

hexicgrind commented 1 year ago

@iamfiscus thanks for the reply; unfortunately, I think it is not related to my question. My question is: what is the correct prompt to make this model act like a chatbot?

The only pre-"prompt" that ChatGPT normally gets is this:

"Assistant is a large language model trained by OpenAI. Knowledge cutoff: 2021-09 Current date: 2023-04-28 Browsing: disabled"

But you shouldn't need to do that for GPT4All to get it to behave like a chatbot, as far as I am aware. That is all it knows how to do: be a chatbot.

FrancescoSaverioZuppichini commented 1 year ago

@hexicgrind thanks for your input. I am not sure you have tried the model, though, since by default it looks like it doesn't behave like a chatbot. If you did and you had good results, can you share your setup/prompts? Thanks a lot.

ag333 commented 1 year ago

@hexicgrind thanks for your input. I am not sure you have tried the model, though, since by default it looks like it doesn't behave like a chatbot. If you did and you had good results, can you share your setup/prompts? Thanks a lot.

does this help with your problem? https://github.com/go-skynet/LocalAI

FrancescoSaverioZuppichini commented 1 year ago

No, it doesn't. I would just like to know the right prompt template to take this model and make it behave like a chatbot. While I appreciate all the comments, I'd say let's keep future replies for people who are knowledgeable.

Samsuditana commented 1 year ago

I think part of the problem is that we don't really understand what you are asking for. Auto-GPT is a more task-oriented model. You want something like StableVicuna, LLaMA, or GPT4All, which are chatbot-oriented models. Get the oobabooga UI and start pulling chat models.


FrancescoSaverioZuppichini commented 1 year ago

Auto-GPT uses ChatGPT, to my understanding.

DannyFGitHub commented 1 year ago

@FrancescoSaverioZuppichini

gpt4all-chat uses this prompt by default for GPT4All-based models:

### Instruction:
The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
%1
### Response:

StableVicuna seems to have been trained on something similar to:

### Human:
%1
### Assistant:
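
In both templates, %1 is the placeholder the host app substitutes the user's text into; filling it yourself is plain string substitution. A trivial sketch (template copied from above):

```python
# The default gpt4all-chat template quoted above, as a Python string.
GPT4ALL_TEMPLATE = (
    "### Instruction:\n"
    "The prompt below is a question to answer, a task to complete, or a "
    "conversation to respond to; decide which and write an appropriate response.\n"
    "### Prompt:\n"
    "%1\n"
    "### Response:\n"
)

def fill(template: str, user_input: str) -> str:
    # %1 is the placeholder convention used by gpt4all-chat.
    return template.replace("%1", user_input)

print(fill(GPT4ALL_TEMPLATE, "explain how to make spaghetti"))
```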

DannyFGitHub commented 1 year ago

@FrancescoSaverioZuppichini

Here are some further thoughts that might be helpful.

ChatGPT has roles; I don't know whether these are converted to a prompt that we don't get to see behind the API, or whether it's a deeper injection at the model layer. The ChatGPT API has roles: https://platform.openai.com/docs/guides/chat

Auto-GPT uses these roles and changes them according to the type of instruction, as seen in app.py, chat.py, and agent_manager.py.

Initial AutoGPT Prompt Template:

The start prompt flow when Auto-GPT starts up ChatGPT, based on what I understand, looks like this:

Here the chat is being initiated: "You are {name}. Respond with: 'Acknowledged'." Agent_intro is just cosmetic, and I don't think the model is aware of it.

[screenshot of app.py]

That message is sent to ChatGPT with a user role.


And also with the assistant role.

After that the first prompt is sent to the model.

Here is how the model is given context with a system role:

[screenshot of chat.py]

I guess and assume that what the gpt-3.5 or gpt-4 model sees is something like "### System Message: ${prompt}" or similar, depending on ChatGPT's actual processed input training data. This may (or, knowing OpenAI, may not) be documented somewhere.

Ongoing prompt templates with context history

Every time Auto-GPT updates the full message history for the ChatGPT API, that history must instead be committed to memory as gpt4all-chat history context and sent back to gpt4all-chat in a way that implements the system-role context.


Auto-GPT filters down to relevant past prompts, then pushes them through in a prompt marked with the system role: "The current time and date is 10PM. This reminds you of these events from your past:\n{relevant_memory}\n\n", where relevant_memory is an array of relevant memory embeddings (via the create_embeddings OpenAI API endpoint) plus the list of all messages sent between the AI and the user, up until the input token limit is reached. For each prompt, the generate_context method tries to push as much history as possible, from embeddings and conversation history, until the model's input token limit is reached.
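
A rough sketch of that packing logic (this is not Auto-GPT's actual generate_context, just the idea: a system header carrying relevant memory, then history newest-to-oldest until the token budget runs out):

```python
def generate_context(relevant_memory, history, token_limit, count_tokens):
    # count_tokens is whatever tokenizer-based counter the host app provides.
    header = (
        "The current time and date is 10PM. "
        f"This reminds you of these events from your past:\n{relevant_memory}\n\n"
    )
    context = [{"role": "system", "content": header}]
    used = count_tokens(header)
    # Walk history newest-to-oldest, keeping turns while they still fit.
    for message in reversed(history):
        cost = count_tokens(message["content"])
        if used + cost > token_limit:
            break
        context.insert(1, message)  # insert after the header, preserving order
        used += cost
    return context
```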

Auto-GPT uses OpenAI embeddings; we need a way to implement embeddings without OpenAI.
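
One way to do that, purely as an illustration (sentence-transformers is my suggestion, not something this thread or Auto-GPT prescribes), is a local embedding model:

```python
from sentence_transformers import SentenceTransformer

# Runs fully offline after the first model download; no OpenAI key required.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def create_embeddings(texts):
    # Same shape of result as the OpenAI embeddings call:
    # one vector per input string.
    return embedder.encode(texts).tolist()

vectors = create_embeddings(["remember to write unit tests", "the sky is blue"])
print(len(vectors), len(vectors[0]))  # 2 vectors, 384 dimensions for this model
```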

The commands folder has more prompt templates, and these are for specific tasks. There are more prompts across the lifecycle of the Auto-GPT program, and finding a way to convert each one into one compatible with Vicuna or gpt4all-chat sounds like the task at hand.

DannyFGitHub commented 1 year ago

This is great; it's even using LLaMA embeddings. And I think further prompt engineering and testing, plus separate prompt templates for Vicuna and LLaMA models, may help with the "### Assistant" issue.

Check out this repo: rhohndorf/Auto-Llama-cpp. I just found it. The author says it is unstable and slow, since the model has a small context window and can't always return valid JSON. But still, it's free, and I am going to try it.

@FrancescoSaverioZuppichini

Putting all these thoughts together, I reckon you can skip the "Acknowledged" step and merge everything into one initialisation prompt:

### Instruction: 
You are {name here}.
The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
%1
### Response:

I'm still not exactly sure of the best way to implement system, user, and assistant roles. Maybe experiment with something where a slightly modified Instruction section is treated as system, the Prompt section as user, and assistant prompts are included as well (a rough sketch of that mapping follows the template below):

### Instruction:
You are {name here}

The current time and date is {time.strftime('%c')}
This reminds you of these events from your past: 

{relevant memory}

The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
%1
### Response:

The next follow-up prompt template could be the same, but with an inflated relevant memory from the embeddings and message history, as per the generate_context method from Auto-GPT.
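
As an illustration of the role mapping described above, here is a hypothetical helper (my sketch, not Auto-GPT or gpt4all code): system messages fold into the Instruction section, user messages become Prompt sections, and prior assistant replies become Response sections.

```python
def messages_to_template(messages):
    # messages: OpenAI-style [{"role": ..., "content": ...}, ...]
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    dialogue = "\n".join(
        ("### Prompt:\n" if m["role"] == "user" else "### Response:\n") + m["content"]
        for m in messages
        if m["role"] != "system"
    )
    return (
        "### Instruction:\n"
        + system + "\n"
        + "The prompt below is a question to answer, a task to complete, or a "
        + "conversation to respond to; decide which and write an appropriate response.\n"
        + dialogue + "\n"
        + "### Response:\n"
    )

print(messages_to_template([
    {"role": "system", "content": "You are AutoGPT, an autonomous agent."},
    {"role": "user", "content": "Hi, how are you?"},
]))
```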

Vicuna might need experimentation with something that may look like this:

### Instruction:
You are {name here}

The current time and date is {time.strftime('%c')}
This reminds you of these events from your past: 

{relevant memory}

The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Human:
%1
### Assistant:

Some templates found here: Prompt Templates

--- Edit:

LocalAI without a template prompt produces strange output. When using a corresponding template prompt, the LocalAI input (which follows the OpenAI specification) of {role: user, content: "Hi, how are you?"} gets converted to:

The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
user Hi, how are you?
### Response:

My thoughts are in response to the question: "What is the correct prompt to make this model act like a chatbot?"

@iamfiscus thanks for the reply; unfortunately, I think it is not related to my question. My question is: what is the correct prompt to make this model act like a chatbot?

Within the context of imitating ChatGPT, so it is compatible with Auto-GPT.

LocalAI's chat endpoint achieves a bridge to Auto-GPT, but as I have not had good results with LocalAI without a template prompt to guide gpt4all-j, I recommend using and improving the template prompt.

Without a template, the raw prompt that gpt4all-j sees for the input {role: "user", content: "Hi, how are you?"} is: user Hi, how are you?

Samsuditana commented 1 year ago

I don't know anything about this, but have we considered an "adapter program" that takes a given model and produces the API tokens that Auto-GPT is looking for, and we redirect Auto-GPT to seek the local API tokens instead of the online GPT-4?

————
from flask import Flask, request, jsonify
import my_local_llm  # import your local LLM module

app = Flask(__name__)

def generate_api_token(llm_input):
    # Use your local LLM to generate an API token based on the input
    api_token = my_local_llm.generate_token(llm_input)
    return api_token

@app.route('/generate-token', methods=['POST'])
def handle_generate_token_request():
    data = request.get_json()
    llm_input = data.get('input')
    api_token = generate_api_token(llm_input)
    return jsonify({'api_token': api_token})

if __name__ == '__main__':
    app.run(port=5000)
——

FrancescoSaverioZuppichini commented 1 year ago

I am a little confused, @DannyFGitHub. Auto-GPT is calling OpenAI's ChatGPT; my idea was to have the same interface as OpenAI and proxy the request, so you don't need to touch the Auto-GPT code. My issue was that I don't know how to prompt GPT4All in order to make it behave like ChatGPT.
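
A minimal sketch of that proxy idea (it assumes the gpt4all Python bindings and their generate() method; the model filename and the exact response fields Auto-GPT needs are assumptions on my part). It exposes an OpenAI-shaped /v1/chat/completions endpoint backed by a local model, using the default template from earlier in the thread:

```python
from flask import Flask, request, jsonify
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

app = Flask(__name__)
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # model filename is illustrative

# The default gpt4all-chat template quoted earlier in this thread.
TEMPLATE = (
    "### Instruction:\n"
    "The prompt below is a question to answer, a task to complete, or a "
    "conversation to respond to; decide which and write an appropriate response.\n"
    "### Prompt:\n%1\n### Response:\n"
)

@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    # Accept an OpenAI-style body and flatten the messages into one prompt.
    body = request.get_json()
    flattened = "\n".join(m["content"] for m in body.get("messages", []))
    text = model.generate(TEMPLATE.replace("%1", flattened))
    # Return a minimally OpenAI-shaped response so the caller can parse it.
    return jsonify({
        "object": "chat.completion",
        "choices": [{"index": 0, "message": {"role": "assistant", "content": text}}],
    })

if __name__ == "__main__":
    app.run(port=8000)
```

Pointing Auto-GPT's OpenAI base URL at http://localhost:8000/v1 (however a given Auto-GPT version configures that) would then route its chat calls to the local model.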

FrancescoSaverioZuppichini commented 1 year ago

@Samsuditana exactly what I was thinking about ❤️

Samsuditana commented 1 year ago

It's not a matter of just prompting your local LLM to act like ChatGPT. You have to introduce an API token system for Auto-GPT to reference.


ag333 commented 1 year ago

I know I am not a developer, but that was the solution I was putting forward with https://github.com/go-skynet/LocalAI. This is a tool that Auto-GPT could connect to through an API, as it is a way to run GPT4All (along with other AIs) behind an API.

It's not a matter of just prompting your local LLM to act like ChatGPT. You have to introduce an API token system for Auto-GPT to reference.

hexicgrind commented 1 year ago

If we can't get GPT4All working with AutoGPT, then perhaps we can get it working with BabyAGI, which is built to do the same thing. https://github.com/miurla/babyagi-ui

niansa commented 1 year ago

Should work via the OpenAI-compatible API feature now.