Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

The model: `gpt-4` does not exist #12

Closed · stan-voo closed this 1 year ago

stan-voo commented 1 year ago

Hi, amazing work you're doing here! ❤ I'm getting this message:

```
Traceback (most recent call last):
  File "/Users/stas/Auto-GPT/AutonomousAI/main.py", line 154, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
  File "/Users/stas/Auto-GPT/AutonomousAI/chat.py", line 48, in chat_with_ai
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist
```

Does this mean I don't have API access to gpt-4 yet? How can I use previous models instead?

Torantulino commented 1 year ago

You can apply for access to the GPT-4 API here:

https://openai.com/waitlist/gpt-4-api

Unfortunately, in my testing, when I've tried to run Auto-GPT using GPT3.5 it does not function at all, as the model does not understand its task.
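
If you do want to experiment with an older model anyway, the model name is set at the `openai.ChatCompletion.create` call in chat.py (visible in the traceback above). Here's a minimal sketch of pointing it at gpt-3.5-turbo; the `MODEL` constant is illustrative, not the repo's actual code:

```python
# Minimal sketch (openai-python 0.x API, as in the traceback above):
# swap the model name until the account has GPT-4 API access.
import openai

MODEL = "gpt-3.5-turbo"  # illustrative constant; used instead of "gpt-4"

response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["choices"][0]["message"]["content"])
```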

stan-voo commented 1 year ago

I'm playing with v3.5 thanks to @Koobah's fork: https://github.com/Koobah/Auto-GPT It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28

Got stopped by this error:

```
Traceback (most recent call last):
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/main.py", line 134, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/chat.py", line 50, in chat_with_ai
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4121 tokens. Please reduce the length of the messages.
```
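
That error is the model's 4097-token context window overflowing. One common workaround, sketched below with tiktoken, is to drop the oldest history messages until the prompt fits; the function names are illustrative, not the fork's actual code:

```python
# Illustrative sketch: trim the oldest history messages so the prompt
# fits the model's 4097-token window, leaving headroom for the reply.
import tiktoken

def count_tokens(messages, model="gpt-3.5-turbo"):
    enc = tiktoken.encoding_for_model(model)
    # Rough estimate: content tokens plus a small per-message overhead
    # for role/formatting metadata.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)

def trim_history(messages, limit=4097, reply_budget=1000):
    # Keep the system prompt at index 0; drop the oldest turns after it.
    while len(messages) > 2 and count_tokens(messages) > limit - reply_budget:
        messages.pop(1)
    return messages
```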

stan-voo commented 1 year ago

It did it! It completed a task on v3.5! Here's a screencast if anybody is curious: https://www.loom.com/share/9bf888d9c925474899257d072f1a562f

Torantulino commented 1 year ago

Wow! 🤯 I didn't know that was possible, great work guys! @stan-voo @Koobah

I'd tried getting 3.5 to work in the past and it refused to acknowledge the prompt. Great idea asking it to parse its final output as JSON.
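
For anyone curious, the trick looks roughly like this; the schema below is invented for illustration and is not the exact format @Koobah's fork uses:

```python
# Sketch of the "reply in JSON" trick: constrain the model's output to
# a machine-parseable format, then json.loads it. Invented schema.
import json
import openai

SYSTEM_PROMPT = (
    "Respond ONLY with JSON of the form "
    '{"thoughts": "...", "command": {"name": "...", "args": {}}}'
)

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write 'hello' to a file."},
    ],
)
reply = json.loads(resp["choices"][0]["message"]["content"])
print(reply["command"]["name"])
```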

If you want to submit a pull request, that would be a huge help!

GPT3.5 is so much cheaper, allowing for much more testing and development.

In the future we could even have GPT4 instances of Auto-GPT cheaply spin up entire sub-instances of Auto-GPT running GPT3.5 for multi-step tasks...
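
That delegation idea could look something like this toy sketch (all names are invented; nothing here is Auto-GPT code):

```python
# Toy sketch of a GPT-4 "planner" delegating sub-tasks to cheaper
# GPT-3.5 "workers". Entirely illustrative.
import openai

def ask(model, prompt):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def run_task(task):
    # The expensive model plans; the cheap model executes each step.
    plan = ask("gpt-4", f"Break this task into numbered sub-tasks:\n{task}")
    steps = [line for line in plan.splitlines() if line.strip()]
    results = [ask("gpt-3.5-turbo", step) for step in steps]
    return ask("gpt-4", "Combine these results into one answer:\n" + "\n".join(results))
```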

Koobah commented 1 year ago

Guys, glad that it helped you. Just be very cautious when using my code. This is my very first programming attempt. I have no clue what I am doing :)

Koobah commented 1 year ago

Btw, the idea I am working on is to create specialist GPT instances (project managers, marketers, operators), where each bot would have its own complex prompt replicating what a person in that role would do.
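
In code, that might boil down to one system prompt per role, something like this toy sketch (roles and prompts invented for illustration):

```python
# Toy sketch of "specialist" instances: each role gets its own system
# prompt that frames every subsequent request. All examples invented.
ROLE_PROMPTS = {
    "project_manager": "You are a project manager. Break goals into tasks and track them.",
    "marketer": "You are a marketer. Draft positioning, campaigns, and copy.",
    "operator": "You are an operator. Execute routine tasks precisely.",
}

def new_agent(role):
    # Returns the starting message history for a specialist instance.
    return [{"role": "system", "content": ROLE_PROMPTS[role]}]
```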

Torantulino commented 1 year ago

Brilliant! Make sure you're up-to-date, as I'm adding features all the time.

Please share your journey with us over at Discussions, I'd love to see how things progress as you go.

0xcha05 commented 1 year ago

https://github.com/Torantulino/Auto-GPT/pull/19

xSNYPSx commented 1 year ago

> I'm playing with v3.5 thanks to @Koobah's fork: https://github.com/Koobah/Auto-GPT It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28
>
> Got stopped by this error: *(same traceback as quoted above)*

I don't understand it. I downloaded Koobah's fork, but how do I run it with a GPT-3.5 key? I used `git clone https://github.com/Koobah/Auto-GPT`, but I can't find main.py in his repository.

Update: I got it running. You just need to create keys.py (see the README) in the AutonomousAI folder and run main.py!
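
For anyone else stuck there, the file is just a module with your key in it, roughly like this; the exact variable names the fork expects may differ, so check its README:

```python
# AutonomousAI/keys.py — rough sketch; the variable name the fork
# actually reads may differ (see its README).
OPENAI_API_KEY = "sk-..."  # paste your OpenAI API key here
```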

alreadydone commented 1 year ago

Note that another "autonomous agent" experiment by Yohei (which is quite popular on Twitter, but not open source) has produced impressive demos using GPT3. Yohei has recently published the architecture of the system, and I think there are things to learn from it, e.g. using a task queue, and using a vector store for long-term memory rather than files. But that system doesn't implement code execution yet, let alone code improvement.
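
The task-queue part of that architecture is simple to sketch; everything below is illustrative, not Yohei's actual code:

```python
# Minimal sketch of the task-queue loop: pop a task, execute it,
# store the result, and let the model enqueue follow-up tasks.
from collections import deque

tasks = deque(["research the topic", "summarize findings"])
memory = []  # stand-in for a vector store of embedded results

while tasks:
    task = tasks.popleft()
    result = f"(LLM output for: {task})"  # the LLM call would go here
    memory.append((task, result))         # embed + store in the real system
    # In the real loop, the LLM would also propose new tasks here,
    # which would be prioritized before being appended to the queue.
```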

jcp commented 1 year ago

@Taytay and @0xcha05, an alternative solution involves PR #39. Rather than using command line arguments, you could introduce an environment variable that overrides the default model. This approach offers more flexibility, especially as the codebase becomes more modular.
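
That override could be as small as this sketch; the variable name `SMART_LLM_MODEL` is an assumption for illustration, not an existing setting:

```python
# Sketch of an environment-variable model override using python-dotenv.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads a local .env file if one exists
MODEL = os.getenv("SMART_LLM_MODEL", "gpt-4")  # falls back to gpt-4
```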

Taytay commented 1 year ago

Nice!

I think we should use both. I prefer dotenv, and was thrilled to see that PR, but was trying to keep my PR from fixing all the things at once. ;)

0xcha05 commented 1 year ago

I agree, @jcp. I think this decision makes my PRs no longer useful. Lessons learned: don't format on save (when contributing to OSS), keep PRs small, and address each issue separately.

Torantulino commented 1 year ago

Absolutely agree with the "keep PRs small" part. Big pull requests are actually slowing things down right now; there's a lot to get through.

PurrsianMilkman commented 1 year ago

I would love to see local model support soon, because I now have my API set up and would love to use my own language model. OpenAI gets expensive.